Updates from: 08/02/2022 01:13:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
To set your local account sign-in options at the tenant level:
## Configure your user flow

1. In the left menu of the Azure portal, select **Azure AD B2C**.
-1. Under **Policies**, select **User flows (policies)**.
+1. Under **Policies**, select **User flows**.
1. Select the user flow for which you'd like to configure the sign-up and sign-in experience.
1. Select **Identity providers**.
1. Under **Local accounts**, select one of the following: **Email signup**, **User ID signup**, **Phone signup**, **Phone/Email signup**, or **None**.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 05/23/2022 Last updated : 08/01/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## July 2022
+
+### New articles
+
+- [Configure authentication in a sample React single-page application by using Azure Active Directory B2C](configure-authentication-sample-react-spa-app.md)
+- [Configure authentication options in a React application by using Azure Active Directory B2C](enable-authentication-react-spa-app-options.md)
+- [Enable authentication in your own React Application by using Azure Active Directory B2C](enable-authentication-react-spa-app.md)
+
+### Updated articles
+
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
+- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
+- [Page layout versions](page-layout.md)
+- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
+- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md)
+- [Localization string IDs](localization-string-ids.md)
+
## June 2022

### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Billing model for Azure Active Directory B2C](billing.md)
- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
-
-## December 2021
-
-### New articles
-
-- [TOTP display control](display-control-time-based-one-time-password.md)
-- [Set up sign-up and sign-in with a SwissID account using Azure Active Directory B2C](identity-provider-swissid.md)
-- [Set up sign-up and sign-in with a PingOne account using Azure Active Directory B2C](identity-provider-ping-one.md)
-- [Tutorial: Configure Haventec with Azure Active Directory B2C for single step, multifactor passwordless authentication](partner-haventec.md)
-- [Tutorial: Acquire an access token for calling a web API in Azure AD B2C](tutorial-acquire-access-token.md)
-- [Tutorial: Sign in and sign out users with Azure AD B2C in a Node.js web app](tutorial-authenticate-nodejs-web-app-msal.md)
-- [Tutorial: Call a web API protected with Azure AD B2C](tutorial-call-api-with-access-token.md)
-
-### Updated articles
-
-- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
-- [Display controls](display-controls.md)
-- ['Azure AD B2C: Frequently asked questions (FAQ)'](faq.yml)
-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
-- [Define an Azure AD MFA technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md)
-- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
-- [String claims transformations](string-transformations.md)
+- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
The following table outlines when an authentication method can be used during a
| Method | Primary authentication | Secondary authentication |
|--|:-:|:-:|
-| Windows Hello for Business | Yes | MFA |
+| Windows Hello for Business | Yes | MFA\* |
| Microsoft Authenticator app | Yes | MFA and SSPR |
| FIDO2 security key | Yes | MFA |
| OATH hardware tokens (preview) | No | MFA and SSPR |
The following table outlines when an authentication method can be used during a
| Voice call | No | MFA and SSPR |
| Password | Yes | |
+> \* Windows Hello for Business, by itself, doesn't serve as a step-up MFA credential (for example, for an MFA challenge triggered by sign-in frequency or by a SAML request that contains forceAuthn=true). Windows Hello for Business can serve as a step-up MFA credential when it's used in FIDO2 authentication, which requires users to be enabled for FIDO2 authentication to work successfully.
+
All of these authentication methods can be configured in the Azure portal, and increasingly by using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview). To learn more about how each authentication method works, see the following separate conceptual articles:
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-data-residency.md
Previously updated : 02/16/2021 Last updated : 08/01/2022
The Azure AD multifactor authentication service has datacenters in the United St
* Multifactor authentication phone calls originate from datacenters in the customer's region and are routed by global providers. Phone calls using custom greetings always originate from datacenters in the United States.
* General purpose user authentication requests from other regions are currently processed based on the user's location.
-* Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service, might be outside the user's location.
+* Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service or Google Firebase Cloud Messaging, might be outside the user's location.
## Personal data stored by Azure AD multifactor authentication
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
Organizations have to consider permissions management as a central piece of thei
- IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant.
- The inconsistency of cloud providers' native access management models makes it even more complex for security and identity teams to manage permissions and enforce least privilege access policies across their entire environment.

## Key use cases
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
Title: Configure an app's publisher domain
-description: Learn how to configure an application's publisher domain to let users know where their information is being sent.
+description: Learn how to configure an app's publisher domain to let users know where their information is being sent.
-# Configure an application's publisher domain
+# Configure an app's publisher domain
-An application's publisher domain informs the users where their information is being sent and acts as an input/prerequisite for [publisher verification](publisher-verification-overview.md). Depending on whether an app is a [multi-tenant app](/azure/architecture/guide/multitenant/overview), when it was registered and it's verified publisher status, either the publisher domain or the verified publisher status will be displayed to the user on the [application's consent prompt](application-consent-experience.md). Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
+An app's publisher domain informs users where their information is being sent. The publisher domain also acts as an input or prerequisite for [publisher verification](publisher-verification-overview.md).
-## New applications
+In an app's [consent prompt](application-consent-experience.md), either the publisher domain or the publisher verification status appears. Which information is shown depends on whether the app is a [multitenant app](/azure/architecture/guide/multitenant/overview), when the app was registered, and the app's publisher verification status.
-When you register a new app, the publisher domain of your app may be set to a default value. The value depends on where the app is registered, particularly whether the app is registered in a tenant and whether the tenant has tenant verified domains.
+A *multitenant app* is an app that supports user accounts that are outside a single organizational directory. For example, a multitenant app might support all Azure Active Directory (Azure AD) work or school accounts, or it might support both Azure AD work or school accounts and personal Microsoft accounts.
-If there are tenant-verified domains, the app's publisher domain will default to the primary verified domain of the tenant. If there are no tenant verified domains (which is the case when the application is not registered in a tenant), the app's publisher domain will be set to null.
+## Understand default publisher domain values
-The following table summarizes the default behavior of the publisher domain value.
+Several factors determine the default value that's set for an app's publisher domain:
-| Tenant-verified domains | Default value of publisher domain |
+- Whether the app is registered in a tenant.
+- Whether a tenant has tenant-verified domains.
+- The app registration date.
+
+### Tenant registration and tenant-verified domains
+
+When you register a new app, the publisher domain of your app might be set to a default value. The default value depends on where the app is registered. The publisher domain value depends especially on whether the app is registered in a tenant and whether the tenant has tenant-verified domains.
+
+If the tenant has verified domains, the app's publisher domain defaults to the primary verified domain of the tenant. If there are no tenant-verified domains (which is the case when the app isn't registered in a tenant), the app's default publisher domain is null.
+
+The following table uses example scenarios to describe the default values for publisher domain:
+
+| Tenant-verified domain | Default value of publisher domain |
|-|-|
| null | null |
-| *.onmicrosoft.com | *.onmicrosoft.com |
-| - *.onmicrosoft.com<br/>- domain1.com<br/>- domain2.com (primary) | domain2.com |
+| `*.onmicrosoft.com` | `*.onmicrosoft.com` |
+| - `*.onmicrosoft.com`<br/>- `domain1.com`<br/>- `domain2.com` (primary) | `domain2.com` |
+
+### App registration date
+
+An app's registration date also determines the app's default publisher domain values.
+
+If your multitenant app was registered *between May 21, 2019, and November 30, 2020*:
+
+- If the app's publisher domain isn't set, or if it's set to a domain that ends in `.onmicrosoft.com`, the app's consent prompt shows *unverified* for the publisher domain value.
+- If the app has a verified app domain, the consent prompt shows the verified domain.
+- If the app is publisher verified, the publisher domain shows a [blue *verified* badge](publisher-verification-overview.md) that indicates the status.
-1. If your multi-tenant was registered between **May 21, 2019 and November 30, 2020**:
- - If the application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
- - If the application has a verified app domain, the consent prompt will show the verified domain.
- - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same
-2. If your multi-tenant was registered after **November 30, 2020**:
- - If the application is not publisher verified, the app will show as "**unverified**" in the consent prompt (i.e, no publisher domain related info is shown)
- - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same
-## Grandfathered applications
+If your multitenant app was registered *after November 30, 2020*:
-If your app was registered **before May 21, 2019**, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
+- If the app isn't publisher verified, the consent prompt for the app shows *unverified*. No publisher domain-related information appears.
+- If the app is publisher verified, the app consent prompt shows a [blue *verified* badge](publisher-verification-overview.md).
-## Configure publisher domain using the Azure portal
+#### Apps created before May 21, 2019
-To set your app's publisher domain, follow these steps.
+If your app was registered *before May 21, 2019*, your app's consent prompt doesn't show *unverified*, even if you haven't set a publisher domain. We recommend that you set the publisher domain value so that users can see this information in your app's consent prompt.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which the app is registered.
-1. Navigate to [Azure Active Directory > App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to find and select the app that you want to configure.
+## Set a publisher domain in the Azure portal
- Once you've selected the app, you'll see the app's **Overview** page.
-1. Under **Manage**, select the **Branding**.
-1. Find the **Publisher domain** field and select one of the following options:
+To set a publisher domain for your app by using the Azure portal:
- - Select **Configure a domain** if you haven't configured a domain already.
- - Select **Update domain** if a domain has already been configured.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the portal global menu to select the tenant where the app is registered.
+1. In Azure Active Directory, go to [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908). Search for and select the app you want to configure.
+1. In **Overview**, in the resource menu under **Manage**, select **Branding**.
+1. In **Publisher domain**, select one of the following options:
-If your app is registered in a tenant, you'll see two tabs to select from: **Select a verified domain** and **Verify a new domain**.
+ - If you haven't already configured a domain, select **Configure a domain**.
+ - If you have configured a domain, select **Update domain**.
-If your domain isn't registered in the tenant, you'll only see the option to verify a new domain for your application.
+1. If your app is registered in a tenant, next, select from two options:
-### To verify a new domain for your app
+ - **Select a verified domain**
+ - **Verify a new domain**
-1. Create a file named `microsoft-identity-association.json` and paste the following JSON code snippet.
+ If your domain isn't registered in the tenant, only the option to verify a new domain for your app appears.
+
+### Verify a new domain for your app
+
+To verify a new publisher domain for your app:
+
+1. Create a file named *microsoft-identity-association.json*. Copy the following JSON and paste it in the *microsoft-identity-association.json* file:
   ```json
   {
      "associatedApplications": [
         {
- "applicationId": "{YOUR-APP-ID-HERE}"
+ "applicationId": "<your-app-id>"
         },
         {
- "applicationId": "{YOUR-OTHER-APP-ID-HERE}"
+ "applicationId": "<another-app-id>"
         }
      ]
   }
   ```
-1. Replace the placeholder *{YOUR-APP-ID-HERE}* with the application (client) ID that corresponds to your app.
-1. Host the file at: `https://{YOUR-DOMAIN-HERE}.com/.well-known/microsoft-identity-association.json`. Replace the placeholder *{YOUR-DOMAIN-HERE}* to match the verified domain.
-1. Click the **Verify and save domain** button.
+1. Replace `<your-app-id>` with the application (client) ID for your app. Use all relevant app IDs if you're verifying a new domain for multiple apps.
+1. Host the file at `https://<your-domain>.com/.well-known/microsoft-identity-association.json`. Replace `<your-domain>` with the name of the verified domain.
+1. Select **Verify and save domain**.
-You're not required to maintain the resources that are used for verification after a domain has been verified. When the verification is finished, you can remove the hosted file.
+You're not required to maintain the resources that are used for verification after you verify a domain. When verification is finished, you can remove the hosted file.
-### To select a verified domain
+### Select a verified domain
-If your tenant has verified domains, select one of the domains from the **Select a verified domain** dropdown.
+If your tenant has verified domains, in the **Select a verified domain** dropdown, select one of the domains.
> [!NOTE]
-> The expected `Content-Type` header that should be returned is `application/json`. You may get an error if you use anything else, like `application/json; charset=utf-8`:
+> The `Content-Type` header that's returned must be exactly `application/json`. If any other value is returned, like `application/json; charset=utf-8`, you might see this error message:
>
> `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.`
>
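Before you select **Verify and save domain**, you can check the hosted file's response header yourself. The following is a minimal sketch that uses only the Python standard library; the domain shown is a placeholder for your own verified domain.

```python
import urllib.request

# Placeholder: replace with the domain you're verifying.
url = "https://contoso.com/.well-known/microsoft-identity-association.json"

with urllib.request.urlopen(url) as response:
    content_type = response.headers.get("Content-Type", "")

# Verification expects exactly "application/json"; a suffix such as
# "application/json; charset=utf-8" triggers the error shown above.
if content_type == "application/json":
    print("OK: Content-Type is application/json")
else:
    print(f"Unexpected Content-Type: {content_type!r}")
```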
-## Implications on the app consent prompt
+## Publisher domain and the app consent prompt
-Configuring the publisher domain has an impact on what users see on the app consent prompt. To fully understand the components of the consent prompt, see [Understanding the application consent experiences](application-consent-experience.md).
+Configuring the publisher domain affects what users see in the app consent prompt. For more information about the components of the consent prompt, see [Understand the application consent experience](application-consent-experience.md).
-The following table describes the behavior for applications created before May 21, 2019.
+The following figure shows how publisher domain appears in app consent prompts for apps that were created before May 21, 2019:
-![Table that shows consent prompt behavior for apps created before May 21, 2019.](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
-The behavior for applications created between May 21, 2019 and November 30, 2020 will depend on the publisher domain and the type of application. The following table describes what is shown on the consent prompt with the different combinations of configurations.
+For apps that were created between May 21, 2019, and November 30, 2020, how the publisher domain appears in an app's consent prompt depends on the publisher domain and the type of app. The following figure describes what appears on the consent prompt for different combinations of configurations:
-![Table that shows consent prompt behavior for apps created betweeb May 21, 2019 and Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
-For multi-tenant applications created after November 30, 2020, only publisher verification status is surfaced in the consent prompt. The following table describes what is shown on the consent prompt depending on whether an app is verified or not. Consent prompt for single tenant applications will remain the same as above.
+For multitenant apps that were created after November 30, 2020, only publisher verification status is shown in an app's consent prompt. The following table describes what appears in a consent prompt depending on whether an app is verified. The consent prompt for single-tenant apps remains the same.
-![Table that shows consent prompt behavior for apps created after Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-publisher-verification-table.png)
-## Implications on redirect URIs
+## Publisher domain and redirect URIs
-Applications that sign in users with any work or school account, or personal Microsoft accounts (multi-tenant) are subject to few restrictions when specifying redirect URIs.
+Apps that sign in users by using any work or school account or by using a Microsoft account (multitenant) are subject to a few restrictions in redirect URIs.
### Single root domain restriction
-When the publisher domain value for multi-tenant apps is set to null, apps are restricted to share a single root domain for the redirect URIs. For example, the following combination of values isn't allowed because the root domain, contoso.com, doesn't match fabrikam.com.
+When the publisher domain value for a multitenant app is set to null, the app is restricted to sharing a single root domain for the redirect URIs. For example, the following combination of values isn't allowed because the root domain `contoso.com` doesn't match the root domain `fabrikam.com`.
-```
-"https://contoso.com",
+```json
+"https://contoso.com",
"https://fabrikam.com", ``` ### Subdomain restrictions
-Subdomains are allowed, but you must explicitly register the root domain. For example, while the following URIs share a single root domain, the combination isn't allowed.
+Subdomains are allowed, but you must explicitly register the root domain. For example, although the following URIs share a single root domain, the combination isn't allowed:
-```
+```json
"https://app1.contoso.com", "https://app2.contoso.com", ```
-However, if the developer explicitly adds the root domain, the combination is allowed.
+But if the developer explicitly adds the root domain, the combination is allowed:
-```
+```json
"https://contoso.com", "https://app1.contoso.com", "https://app2.contoso.com", ```
-### Exceptions
+### Restriction exceptions
The following cases aren't subject to the single root domain restriction:
-
-- Single tenant apps, or apps that target accounts in a single directory
-- Use of localhost as redirect URIs
-- Redirect URIs with custom schemes (non-HTTP or HTTPS)
+- Single-tenant apps or apps that target accounts in a single directory.
+- Use of localhost as redirect URIs.
+- Redirect URIs that have custom schemes (non-HTTP or HTTPS).
## Configure publisher domain programmatically
-Currently, there is no REST API or PowerShell support to configure publisher domain programmatically.
+Currently, you can't use REST API or PowerShell to programmatically set a publisher domain.
+
+## Next steps
+
+- Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+- [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Title: Publisher verification overview
-description: Provides an overview of the publisher verification program for the Microsoft identity platform. Lists the benefits, program requirements, and frequently asked questions. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
+description: Learn about benefits, program requirements, and frequently asked questions in the publisher verification program for the Microsoft identity platform.
# Publisher verification
-Publisher verification helps admins and end users understand the authenticity of application developers integrating with the Microsoft identity platform.
+Publisher verification gives app users and organization admins information about the authenticity of a developer who publishes an app that integrates with the Microsoft identity platform.
-> [!VIDEO https://www.youtube.com/embed/IYRN2jDl5dc]
+When an app is publisher verified, the app's publisher has verified their identity with Microsoft. Identity verification includes using a [Microsoft Partner Network (MPN)](https://partner.microsoft.com/membership) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
+
+When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
-When an application is marked as publisher verified, it means that the publisher has verified their identity using a [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process and has associated this MPN account with their application registration.
-A blue "verified" badge appears on the Azure AD consent prompt and other screens:
+The following video describes the process:
-![Consent prompt](./media/publisher-verification-overview/consent-prompt.png)
+> [!VIDEO https://www.youtube.com/embed/IYRN2jDl5dc]
-This feature is primarily for developers building multi-tenant apps that leverage [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These apps can sign users in using OpenID Connect, or they may use OAuth 2.0 to request access to data using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).
+Publisher verification primarily is for developers who build multitenant apps that use [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These types of apps can sign in a user by using OpenID Connect, or they can use OAuth 2.0 to request access to data by using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).
## Benefits
-Publisher verification provides the following benefits:
-- **Increased transparency and risk reduction for customers**- this capability helps customers understand which apps being used in their organizations are published by developers they trust.
-- **Improved branding**- a "verified" badge appears on the Azure AD [consent prompt](application-consent-experience.md), Enterprise Apps page, and additional UX surfaces used by end users and admins.
+Publisher verification for an app has the following benefits:
+
+- **Increased transparency and risk reduction for customers**. Publisher verification helps customers identify apps that are published by developers they trust to reduce risk in the organization.
+
+- **Improved branding**. A blue *verified* badge appears in the Azure AD app [consent prompt](application-consent-experience.md), on the enterprise apps page, and in other app elements that users and admins see.
-- **Smoother enterprise adoption**- admins can configure [user consent policies](../manage-apps/configure-user-consent.md), with publisher verification status as one of the primary policy criteria.
+- **Smoother enterprise adoption**. Organization admins can configure [user consent policies](../manage-apps/configure-user-consent.md) that include publisher verification status as primary policy criteria.
> [!NOTE]
-> - Starting in November 2020, end users will no longer be able to grant consent to most newly registered multi-tenant apps without verified publishers if [risk-based step-up consent](../manage-apps/configure-risk-based-step-up-consent.md) is enabled. This will apply to apps that are registered after November 8, 2020, use OAuth2.0 to request permissions beyond basic sign-in and read user profile, and request consent from users in different tenants than the one the app is registered in. A warning will be displayed on the consent screen informing users that these apps are risky and are from unverified publishers.
+> Beginning November 2020, if [risk-based step-up consent](../manage-apps/configure-risk-based-step-up-consent.md) is enabled, users can't consent to most newly registered multitenant apps that *aren't* publisher verified. The policy applies to apps that were registered after November 8, 2020, which use OAuth 2.0 to request permissions that extend beyond the basic sign-in and read user profile, and which request consent from users in tenants that aren't the tenant where the app is registered. In this scenario, a warning appears on the consent screen. The warning informs the user that the app was created by an unverified publisher and that the app is risky to download or install.
## Requirements
-There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are:
-- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization. (**NOTE**: It can't be the Partner Location MPN ID. Location MPN IDs aren't currently supported)
+App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements.
-- The application to be publisher verified must be registered using a Azure AD account. Applications registered using a Microsoft personal account aren't supported for publisher verification.
+- The developer must have an MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
-- The Azure AD tenant where the app is registered must be associated with the Partner Global account. If it's not the primary tenant associated with the PGA, follow the steps to [set up the MPN partner global account as a multi-tenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
+ > [!NOTE]
+ > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process.
-- An app registered in an Azure AD tenant, with a [Publisher Domain](howto-configure-publisher-domain.md) configured.
+- The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified.
-- The domain of the email address used during MPN account verification must either match the publisher domain configured on the app or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) added to the Azure AD tenant.
+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
-- The user performing verification must be authorized to make changes to both the app registration in Azure AD and the MPN account in Partner Center.
+- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set.
- - In Azure AD this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
+- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant.
- - In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or a Global Admin (this is a shared role mastered in Azure AD).
-
-- The user performing verification must sign in using [multi-factor authentication](../authentication/howto-mfa-getstarted.md).
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center.
-- The publisher agrees to the [Microsoft identity platform for developers Terms of Use](/legal/microsoft-identity-platform/terms-of-use).
+ - In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
-Developers who have already met these pre-requisites can get verified in a matter of minutes. If the requirements have not been met, getting set up is free.
+ - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Admin (a shared role that's mastered in Azure AD).
+
+- The user who initiates verification must sign in by using [multifactor authentication](../authentication/howto-mfa-getstarted.md).
-## National Clouds and Publisher Verification
-Publisher verification is currently not supported in national clouds. Applications registered in national cloud tenants can't be publisher-verified at this time.
+- The publisher must consent to the [Microsoft identity platform for developers Terms of Use](/legal/microsoft-identity-platform/terms-of-use).
-## Frequently asked questions
-Below are some frequently asked questions regarding the publisher verification program. For FAQs related to the requirements and the process, see [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+Developers who have already met these requirements can be verified in minutes. No charges are associated with completing the prerequisites for publisher verification.
-- **What information does publisher verification __not__ provide?** When an application is marked publisher verified this does not indicate whether the application or its publisher has achieved any specific certifications, complies with industry standards, adheres to best practices, etc. Other Microsoft programs do provide this information, including [Microsoft 365 App Certification](/microsoft-365-app-certification/overview).
+## Publisher verification in national clouds
-- **How much does this cost? Does it require any license?** Microsoft does not charge developers for publisher verification and it does not require any specific license.
+Publisher verification currently isn't supported in national clouds. Apps that are registered in national cloud tenants can't be publisher verified at this time.
-- **How does this relate to Microsoft 365 Publisher Attestation? What about Microsoft 365 App Certification?** These are complementary programs that developers can use to create trustworthy apps that can be confidently adopted by customers. Publisher verification is the first step in this process, and should be completed by all developers creating apps that meet the above criteria.
+## Frequently asked questions
- Developers who are also integrating with Microsoft 365 can receive additional benefits from these programs. For more information, refer to [Microsoft 365 Publisher Attestation](/microsoft-365-app-certification/docs/attestation) and [Microsoft 365 App Certification](/microsoft-365-app-certification/docs/certification).
+Review frequently asked questions about the publisher verification program. For common questions about requirements and the process, see [Mark an app as publisher verified](mark-app-as-publisher-verified.md).
-- **Is this the same thing as the Azure AD Application Gallery?** No- publisher verification is a complementary but separate program to the [Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). Developers who fit the above criteria should complete the publisher verification process independently of participation in that program.
+- **What does publisher verification *not* tell me about the app or its publisher?** The blue *verified* badge doesn't imply or indicate quality criteria you might look for in an app. For example, you might want to know whether the app or its publisher have specific certifications, comply with industry standards, or adhere to best practices. Publisher verification doesn't give you this information. Other Microsoft programs, like [Microsoft 365 App Certification](/microsoft-365-app-certification/overview), do provide this information.
+
+- **How much does publisher verification cost for the app developer? Does it require a license?** Microsoft doesn't charge developers for publisher verification. No license is required to become a verified publisher.
+
+- **How does publisher verification relate to Microsoft 365 Publisher Attestation and Microsoft 365 App Certification?** [Microsoft 365 Publisher Attestation](/microsoft-365-app-certification/docs/attestation) and [Microsoft 365 App Certification](/microsoft-365-app-certification/docs/certification) are complementary programs that help developers publish trustworthy apps that customers can confidently adopt. Publisher verification is the first step in this process. All developers who create apps that meet the criteria for completing Microsoft 365 Publisher Attestation or Microsoft 365 App Certification should complete publisher verification. The combined programs can give developers who integrate their apps with Microsoft 365 even more benefits.
+
+- **Is publisher verification the same as the Azure Active Directory application gallery?** No. Publisher verification complements the [Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md), but it's a separate program. Developers who fit the publisher verification criteria should complete publisher verification independently of participating in the Azure Active Directory application gallery or other programs.
## Next steps
-* Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
-* [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
+
+- Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+- [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 07/04/2022 Last updated : 08/01/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## July 2022
+
+### New articles
+
+- [Configure SAML app multi-instancing for an application in Azure Active Directory](reference-app-multi-instancing.md)
+
+### Updated articles
+
+- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)
+- [Application configuration options](msal-client-application-configuration.md)
+- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
+- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
+- [Microsoft identity platform access tokens](access-tokens.md)
+- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)
+- [Tutorial: Add sign-in to Microsoft to an ASP.NET web app](tutorial-v2-asp-webapp.md)
+
## June 2022

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Single sign-on with MSAL.js](msal-js-sso.md)
- [Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app](tutorial-v2-nodejs-webapp-msal.md)
- [What's new for authentication?](reference-breaking-changes.md)
-
-## March 2022
-
-### New articles
-
-- [Secure access control using groups in Azure AD](secure-group-access-control.md)
-
-### Updated articles
-
-- [Authentication flow support in MSAL](msal-authentication-flows.md)
-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
-- [Configure an app to trust an external identity provider (preview)](workload-identity-federation-create-trust.md)
-- [OAuth 2.0 and OpenID Connect in the Microsoft identity platform](active-directory-v2-protocols.md)
-- [Signing key rollover in the Microsoft identity platform](active-directory-signing-key-rollover.md)
-- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
You might get the following error message when you initiate a remote desktop con
![Screenshot of the message that says your account is configured to prevent you from using this device.](./media/howto-vm-sign-in-azure-ad-windows/rbac-role-not-assigned.png)
-Verify that you've [configured Azure RBAC policies](../../virtual-machines/linux/login-using-aad.md) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.
+Verify that you've [configured Azure RBAC policies](#configure-role-assignments-for-the-vm) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.
> [!NOTE]
> If you're having problems with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#limits).
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 06/24/2022 Last updated : 08/01/2022
Groups created in | Security group default behavior | Microsoft 365 group defaul
## Make a group available for user self-service
-1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Privileged Role Administrator role for the directory.
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Groups Administrator role for the directory.
1. Select **Groups**, and then select **General** settings.
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
# Product names and service plan identifiers for licensing
-When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft does not plan to update them for newly added services periodically.
+When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft will continue to make periodic updates to this document.
- **Product name**: Used in management portals
- **String ID**: Used by PowerShell v1.0 cmdlets when performing operations on licenses or by the **skuPartNumber** property of the **subscribedSku** Microsoft Graph API
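As a hedged illustration of retrieving these identifiers, the sketch below calls the Microsoft Graph `subscribedSkus` endpoint and prints each product's string ID and GUID. It assumes you've separately acquired a Graph access token with a suitable permission such as *Organization.Read.All*.

```python
import json
import urllib.request

# Assumption: you already have a valid Microsoft Graph access token.
ACCESS_TOKEN = "<access-token>"

request = urllib.request.Request(
    "https://graph.microsoft.com/v1.0/subscribedSkus",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

with urllib.request.urlopen(request) as response:
    skus = json.load(response)["value"]

# skuPartNumber is the string ID (for example, ENTERPRISEPACK);
# skuId is the GUID (for example, 6fd2c87f-b296-42f0-b197-1e91e994b900).
for sku in skus:
    print(f"{sku['skuPartNumber']}: {sku['skuId']}")
```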
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Microsoft cloud settings let you collaborate with organizations from different M
- Microsoft Azure global cloud and Microsoft Azure Government
- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+> [!NOTE]
+> Microsoft Azure Government includes the Office GCC-High and DoD clouds.
+
To set up B2B collaboration, both organizations configure their Microsoft cloud settings to enable the partner's cloud. Then each organization uses the partner's tenant ID to find and add the partner to their organizational settings. From there, each organization can allow their default cross-tenant access settings to apply to the partner, or they can configure partner-specific inbound and outbound settings. After you establish B2B collaboration with a partner in another cloud, you'll be able to:

- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
+- Use B2B collaboration to [share Power BI content to a user in the partner tenant](https://docs.microsoft.com/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.

> [!NOTE]
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When a guest signs in to a resource in a partner organization for the first time
1. The guest reviews the **Review permissions** page describing the inviting organization's privacy statement. A user must **Accept** the use of their information in accordance with the inviting organization's privacy policies to continue.
- ![Screenshot showing the Review permissions page](media/redemption-experience/review-permissions.png)
+ ![Screenshot showing the Review permissions page.](media/redemption-experience/new-review-permissions.png)
> [!NOTE]
> For information about how you as a tenant administrator can link to your organization's privacy statement, see [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md).
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
If you need to add resources to an access package, you should check whether the
![List of resources in a catalog](./media/entitlement-management-access-package-resources/catalog-resources.png)
-1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog).
+1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog). The types of resources you can add are groups, applications, and SharePoint Online sites. For example:
+
+* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give users access to an application that uses AD security group memberships, create a new group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md). Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
+* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. If your application has not yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md).
+* Sites can be SharePoint Online sites or SharePoint Online site collections.
1. If you are an access package manager and you need to add resources to the catalog, you can ask the catalog owner to add them.
If you need to add resources to an access package, you should check whether the
A resource role is a collection of permissions associated with a resource. Resources can be made available for users to request if you add resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites. When a user receives an assignment to an access package, they'll be added to all the resource roles in the access package.
-If you don't want users to receive all of the roles, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
+If you want some users to receive different roles than others, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
$catalog = New-MgEntitlementManagementAccessPackageCatalog -DisplayName "Marketi
## Add resources to a catalog
-To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites. For example:
+To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites.
-* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
-* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
+* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups.
+
+ * Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give a user access to an application that uses AD security group memberships, create a new security group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md), so that the cloud-created group can be used by an AD-based application.
+
+ * Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either, so they can't be added to catalogs.
+
+* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD.
+
+ * If your application has not yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md).
+
+ * For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
* Sites can be SharePoint Online sites or SharePoint Online site collections.

> [!NOTE]
> Search for a SharePoint site by site name or an exact URL. The search box is case sensitive.
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
The following attribute rules apply:
### Contact out-of-box rules

A contact object must satisfy the following to be synchronized:
+* Must have a mail attribute value.
* The contact must be mail-enabled. It is verified with the following rules:
  * `IsPresent([proxyAddresses]) = True`. The proxyAddresses attribute must be populated.
  * A primary email address can be found in either the proxyAddresses attribute or the mail attribute. The presence of an \@ is used to verify that the content is an email address. One of these two rules must be evaluated to True.
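As a toy model only (not Azure AD Connect's actual rule engine), one plausible reading of these contact rules can be sketched as follows; the uppercase `SMTP:` prefix convention for a primary address is an assumption here.

```python
def is_mail_enabled(contact: dict) -> bool:
    """Toy model of the out-of-box contact sync conditions above."""
    proxy_addresses = contact.get("proxyAddresses") or []
    mail = contact.get("mail") or ""

    # New condition from this update: must have a mail attribute value.
    if not mail:
        return False

    # Rule 1: IsPresent([proxyAddresses]), the attribute is populated.
    rule1 = bool(proxy_addresses)

    # Rule 2: a primary email address is found in proxyAddresses
    # (assumed here to use the uppercase SMTP: prefix) or in mail;
    # the presence of an @ verifies the content is an email address.
    rule2 = any(
        a.startswith("SMTP:") and "@" in a for a in proxy_addresses
    ) or "@" in mail

    # Per the rules above, one of the two must evaluate to True.
    return rule1 or rule2
```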
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFe
```json "HomeRealmDiscoveryPolicy": {
-"AccelerateToFederatedDomain": true
+ "AccelerateToFederatedDomain": true
} ``` ::: zone-end
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFe
```json "HomeRealmDiscoveryPolicy": {
-"AccelerateToFederatedDomain": true
-"PreferredDomain": ["federated.example.edu"]
+ "AccelerateToFederatedDomain": true,
+ "PreferredDomain": [
+ "federated.example.edu"
+ ]
} ``` ::: zone-end
The following policy enables username/password authentication for federated user
```json "EnableDirectAuthPolicy": {
-"AllowCloudPasswordValidation": true
+ "AllowCloudPasswordValidation": true
} ```
Set the HRD policy using Microsoft Graph. See [homeRealmDiscoveryPolicy](/graph/
From the Microsoft Graph explorer window:
-1. Grant the Policy.ReadWrite.ApplicationConfiguration permission under the **Modify permissions** tab.
+1. Grant consent to the *Policy.ReadWrite.ApplicationConfiguration* permission.
1. Use the URL https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies
-1. POST the new policy to this URL, or PATCH to /policies/homerealmdiscoveryPolicies/{policyID} if overwriting an existing one.
+1. POST the new policy to this URL, or PATCH to https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies/{policyID} if overwriting an existing one.
1. POST or PATCH contents: ```json
From the Microsoft Graph explorer window:
1. To see your new policy and get its ObjectID, run the following query: ```http
- GET policies/homeRealmDiscoveryPolicies
+ GET https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies
``` 1. To delete the HRD policy you created, run the query: ```http
- DELETE /policies/homeRealmDiscoveryPolicies/{policy objectID}
+ DELETE https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies/{policy objectID}
``` ::: zone-end ## Next steps
-[Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md).
+[Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 07/04/2022 Last updated : 08/01/2022
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## July 2022
+
+### New articles
+
+- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md)
+- [Deletion and recovery of applications FAQ](delete-recover-faq.yml)
+- [Recover deleted applications in Azure Active Directory FAQs](recover-deleted-apps-faq.md)
+- [Restore an enterprise application in Azure AD](restore-application.md)
+- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)
+- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-azure-ad-integration.md)
+- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)
+
+### Updated articles
+
+- [Delete an enterprise application](delete-application-portal.md)
+- [Configure Azure Active Directory SAML token encryption](howto-saml-token-encryption.md)
+- [Review permissions granted to applications](manage-application-permissions.md)
+- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md)
+ ## June 2022 ### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md) - [Tutorial: Migrate Okta federation to Azure AD-managed authentication](migrate-okta-federation-to-azure-active-directory.md) - [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)-
-## March 2022
-
-### New articles
--- [Overview of admin consent workflow](admin-consent-workflow-overview.md)-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)-
-### Updated articles
--- [Configure the admin consent workflow](configure-admin-consent-workflow.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Integrate F5 BIG-IP with Azure AD](f5-aad-integration.md)-- [Manage app consent policies](manage-app-consent-policies.md)-- [Plan Azure AD My Apps configuration](my-apps-deployment-plan.md)-- [Quickstart: View enterprise applications](view-applications-portal.md)-- [Review admin consent requests](review-admin-consent-requests.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)-- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)-- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
ms.devlang:
Previously updated : 02/23/2022 Last updated : 07/27/2022
Managed identities use certificate-based authentication. Each managed identity
In short, yes, you can use user assigned managed identities in more than one Azure region. The longer answer is that while user assigned managed identities are created as regional resources, the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SP) created in Azure AD is available globally. The service principal can be used from any Azure region, and its availability is dependent on the availability of Azure AD. For example, if you created a user assigned managed identity in the South Central US region and that region becomes unavailable, this issue only impacts [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identities wouldn't be impacted.
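For illustration, the regional behavior can be seen with the Azure CLI (a sketch only; resource names and regions are placeholders):

```azurecli
# Sketch: the identity is a regional resource, but its service principal is
# global, so a VM in another region can still be assigned the identity.
az identity create -g myResourceGroup -n myUserAssignedIdentity -l southcentralus
az vm identity assign -g myOtherResourceGroup -n myVmInAnotherRegion \
    --identities $(az identity show -g myResourceGroup -n myUserAssignedIdentity --query id -o tsv)
```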
-### Does managed identities for Azure resources work with Azure Cloud Services?
+### Do managed identities for Azure resources work with Azure Cloud Services (classic)?
-No, there are no plans to support managed identities for Azure resources in Azure Cloud Services.
+Managed identities for Azure resources don't support [Azure Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) at this time.
### What is the security boundary of managed identities for Azure resources?
active-directory Aws Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md
Title: 'Tutorial: Configure AWS single sign-On for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS single sign-On.
+ Title: 'Tutorial: Configure AWS IAM Identity Center (successor to AWS Single Sign-On) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS IAM Identity Center.
documentationcenter: ''
Last updated 02/23/2021
-# Tutorial: Configure AWS single sign-On for automatic user provisioning
+# Tutorial: Configure AWS IAM Identity Center for automatic user provisioning
-This tutorial describes the steps you need to perform in both AWS single sign-On and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS single sign-On](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both AWS IAM Identity Center (successor to AWS Single Sign-On) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS IAM Identity Center](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported > [!div class="checklist"]
-> * Create users in AWS single sign-On
-> * Remove users in AWS single sign-On when they no longer require access
-> * Keep user attributes synchronized between Azure AD and AWS single sign-On
-> * Provision groups and group memberships in AWS single sign-On
-> * [single sign-On](aws-single-sign-on-tutorial.md) to AWS single sign-On
+> * Create users in AWS IAM Identity Center
+> * Remove users in AWS IAM Identity Center when they no longer require access
+> * Keep user attributes synchronized between Azure AD and AWS IAM Identity Center
+> * Provision groups and group memberships in AWS IAM Identity Center
+> * [Single sign-on](aws-single-sign-on-tutorial.md) to AWS IAM Identity Center
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A SAML connection from your Azure AD account to AWS single sign-On, as described in Tutorial
+* A SAML connection from your Azure AD account to AWS IAM Identity Center, as described in Tutorial
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and AWS single sign-On](../app-provisioning/customize-application-attributes.md).
+3. Determine what data to [map between Azure AD and AWS IAM Identity Center](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure AWS single sign-On to support provisioning with Azure AD
+## Step 2. Configure AWS IAM Identity Center to support provisioning with Azure AD
-1. Open the [AWS single sign-On](https://console.aws.amazon.com/singlesignon).
+1. Open the [AWS IAM Identity Center](https://console.aws.amazon.com/singlesignon).
2. Choose **Settings** in the left navigation pane
The scenario outlined in this tutorial assumes that you already have the followi
![Screenshot of enabling automatic provisioning.](media/aws-single-sign-on-provisioning-tutorial/automatic-provisioning.png)
-4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token** (visible after clicking on Show Token). These values will be entered in the **Tenant URL** and **Secret Token** field in the Provisioning tab of your AWS single sign-On application in the Azure portal.
+4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token** (visible after clicking on Show Token). These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your AWS IAM Identity Center application in the Azure portal.
![Screenshot of extracting provisioning configurations.](media/aws-single-sign-on-provisioning-tutorial/inbound-provisioning.png)
-## Step 3. Add AWS single sign-On from the Azure AD application gallery
+## Step 3. Add AWS IAM Identity Center from the Azure AD application gallery
-Add AWS single sign-On from the Azure AD application gallery to start managing provisioning to AWS single sign-On. If you have previously setup AWS single sign-On for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add AWS IAM Identity Center from the Azure AD application gallery to start managing provisioning to AWS IAM Identity Center. If you have previously set up AWS IAM Identity Center for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
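For illustration, an `appRoles` entry added to the application manifest has the following shape (all values here are placeholders, not values from this tutorial):

```json
{
  "allowedMemberTypes": [ "User" ],
  "description": "Administrators of the AWS account",
  "displayName": "AWS Administrator",
  "id": "00000000-0000-0000-0000-000000000001",
  "isEnabled": true,
  "value": "aws-admin"
}
```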
-## Step 5. Configure automatic user provisioning to AWS single sign-On
+## Step 5. Configure automatic user provisioning to AWS IAM Identity Center
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in AWS IAM Identity Center based on user and/or group assignments in Azure AD.
-### To configure automatic user provisioning for AWS single sign-On in Azure AD:
+### To configure automatic user provisioning for AWS IAM Identity Center in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **AWS single sign-On**.
+2. In the applications list, select **AWS IAM Identity Center**.
- ![Screenshot of the AWS single sign-On link in the Applications list.](common/all-applications.png)
+ ![Screenshot of the AWS IAM Identity Center link in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your AWS single sign-On **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS single sign-On.
+5. Under the **Admin Credentials** section, input your AWS IAM Identity Center **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS IAM Identity Center.
![Token](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
7. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS single sign-On**.
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS IAM Identity Center**.
-9. Review the user attributes that are synchronized from Azure AD to AWS single sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS single sign-On for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS single sign-On API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS IAM Identity Center for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS IAM Identity Center API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering| ||||
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String| |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
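For illustration, a user provisioned over SCIM carries these attributes in a request body shaped roughly as follows (a sketch per RFC 7643, not a verbatim capture of the service's request; all values are placeholders):

```json
{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
  ],
  "userName": "B.Simon@contoso.com",
  "active": true,
  "name": { "givenName": "B", "familyName": "Simon" },
  "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
    "department": "Sales",
    "organization": "Contoso"
  }
}
```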
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS single sign-On**.
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS IAM Identity Center**.
-11. Review the group attributes that are synchronized from Azure AD to AWS single sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS single sign-On for update operations. Select the **Save** button to commit any changes.
+11. Review the group attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS IAM Identity Center for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering| ||||
This section guides you through the steps to configure the Azure AD provisioning
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for AWS single sign-On, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for AWS IAM Identity Center, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to AWS single sign-On by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and/or groups that you would like to provision to AWS IAM Identity Center by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
There are two ways to resolve this
2. Remove the duplicate attributes. For example, mapping two different Azure AD attributes to the same "phoneNumber___" attribute on the AWS side results in the error when both attributes have values in Azure AD. Mapping only one attribute to each "phoneNumber___" attribute resolves the error. ### Invalid characters
-Currently AWS single sign-On is not allowing some other characters that Azure AD supports like tab (\t), new line (\n), return carriage (\r), and characters such as " <|>|;|:% ".
+Currently, AWS IAM Identity Center doesn't allow some characters that Azure AD supports, such as tab (\t), new line (\n), carriage return (\r), and characters such as " <|>|;|:% ".
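As a purely hypothetical pre-processing step (not a feature of the provisioning service), such characters could be stripped from a source value before it syncs:

```powershell
# Hypothetical helper, illustrative only: removes characters that AWS IAM
# Identity Center rejects (tab, newline, carriage return, < > ; : % and ").
function Remove-UnsupportedChars {
    param([string]$Value)
    return ($Value -replace '[\t\n\r<>;:%"]', '')
}

Remove-UnsupportedChars "Sales`tTeam"   # returns "SalesTeam"
```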
-You can also check the AWS single sign-On troubleshooting tips [here](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting) for more troubleshooting tips
+For more troubleshooting tips, see the AWS IAM Identity Center troubleshooting guidance [here](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting).
## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-On with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with AWS Single Sign-On'
-description: Learn how to configure single sign-on between Azure Active Directory and AWS Single Sign-On.
+ Title: 'Tutorial: Azure AD SSO integration with AWS IAM Identity Center (successor to AWS Single Sign-On)'
+description: Learn how to configure single sign-on between Azure Active Directory and AWS IAM Identity Center (successor to AWS Single Sign-On).
Previously updated : 07/15/2022 Last updated : 07/29/2022
-# Tutorial: Azure AD SSO integration with AWS Single Sign-On
+# Tutorial: Azure AD SSO integration with AWS IAM Identity Center
-In this tutorial, you'll learn how to integrate AWS Single Sign-On with Azure Active Directory (Azure AD). When you integrate AWS Single Sign-On with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AWS IAM Identity Center (successor to AWS Single Sign-On) with Azure Active Directory (Azure AD). When you integrate AWS IAM Identity Center with Azure AD, you can:
-* Control in Azure AD who has access to AWS Single Sign-On.
-* Enable your users to be automatically signed-in to AWS Single Sign-On with their Azure AD accounts.
+* Control in Azure AD who has access to AWS IAM Identity Center.
+* Enable your users to be automatically signed-in to AWS IAM Identity Center with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate AWS Single Sign-On with Azure Ac
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AWS Single Sign-On enabled subscription.
+* An AWS IAM Identity Center enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS Single Sign-On supports **SP and IDP** initiated SSO.
+* AWS IAM Identity Center supports **SP and IDP** initiated SSO.
-* AWS Single Sign-On supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
+* AWS IAM Identity Center supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
-## Add AWS Single Sign-On from the gallery
+## Add AWS IAM Identity Center from the gallery
-To configure the integration of AWS Single Sign-On into Azure AD, you need to add AWS Single Sign-On from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS IAM Identity Center into Azure AD, you need to add AWS IAM Identity Center from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **AWS Single Sign-On** in the search box.
-1. Select **AWS Single Sign-On** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box.
+1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AWS Single Sign-On
+## Configure and test Azure AD SSO for AWS IAM Identity Center
-Configure and test Azure AD SSO with AWS Single Sign-On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single Sign-On.
+Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
-To configure and test Azure AD SSO with AWS Single Sign-On, perform the following steps:
+To configure and test Azure AD SSO with AWS IAM Identity Center, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AWS Single Sign-On test user](#create-aws-single-sign-on-test-user)** - to have a counterpart of B.Simon in AWS Single Sign-On that is linked to the Azure AD representation of user.
+1. **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AWS IAM Identity Center test user](#create-aws-iam-identity-center-test-user)** - to have a counterpart of B.Simon in AWS IAM Identity Center that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AWS Single Sign-On** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AWS IAM Identity Center** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
a. Click **Upload metadata file**.
- b. Click on **folder logo** to select metadata file, which is explained to download in **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** section and click **Add**.
+ b. Click the **folder logo** to select the metadata file that you downloaded in the **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** section, and then click **Add**.
![image2](common/browse-upload-metadata.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://portal.sso.<REGION>.amazonaws.com/saml/assertion/<ID>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS Single Sign-On Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS IAM Identity Center Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. AWS Single Sign-On application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. AWS IAM Identity Center application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/edit-attribute.png) > [!NOTE]
- > If ABAC is enabled in AWS Single Sign-On, the additional attributes may be passed as session tags directly into AWS accounts.
+ > If ABAC is enabled in AWS IAM Identity Center, the additional attributes may be passed as session tags directly into AWS accounts.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate(Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up AWS Single Sign-On** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up AWS IAM Identity Center** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS Single Sign-On.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS IAM Identity Center.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AWS Single Sign-On**.
+1. In the applications list, select **AWS IAM Identity Center**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AWS Single Sign-On SSO
+## Configure AWS IAM Identity Center SSO
-1. To automate the configuration within AWS Single Sign-On, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within AWS IAM Identity Center, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up AWS Single Sign-On** will direct you to the AWS Single Sign-On application. From there, provide the admin credentials to sign into AWS Single Sign-On. The browser extension will automatically configure the application for you and automate steps 3-10.
+2. After adding the extension to the browser, clicking **Set up AWS IAM Identity Center** directs you to the AWS IAM Identity Center application. From there, provide the admin credentials to sign in to AWS IAM Identity Center. The browser extension automatically configures the application for you and automates steps 3-10.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup AWS Single Sign-On manually, in a different web browser window, sign in to your AWS Single Sign-On company site as an administrator.
+3. If you want to set up AWS IAM Identity Center manually, in a different web browser window, sign in to your AWS IAM Identity Center company site as an administrator.
-1. Go to the **Services -> Security, Identity, & Compliance -> AWS Single Sign-On**.
+1. Go to the **Services -> Security, Identity, & Compliance -> AWS IAM Identity Center**.
2. In the left navigation pane, choose **Settings**.
-3. On the **Settings** page, find **Identity source** and click on **Change**.
+3. On the **Settings** page, find **Identity source**, click the **Actions** pull-down menu, and select **Change identity source**.
![Screenshot for Identity source change service](./media/aws-single-sign-on-tutorial/settings.png)
-4. On the Change identity source, choose **External identity provider**.
+4. On the Change identity source page, choose **External identity provider**.
![Screenshot for selecting external identity provider section](./media/aws-single-sign-on-tutorial/external-identity-provider.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for download and upload metadata section](./media/aws-single-sign-on-tutorial/upload-metadata.png)
- a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to download the metadata file and save it on your computer and use this metadata file to upload on Azure portal.
+ a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file**. Save the metadata file on your computer; you'll upload it in the Azure portal.
- b. Copy **AWS SSO Sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
+ b. Copy the **AWS access portal sign-in URL** value and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
- c. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file, which you have downloaded from the Azure portal.
+ c. In the **Identity provider metadata** section, select **Choose file** to upload the metadata file that you downloaded from the Azure portal.
d. Choose **Next: Review**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
9. Click **Change identity source**.
-### Create AWS Single Sign-On test user
+### Create AWS IAM Identity Center test user
-1. Open the **AWS SSO console**.
+1. Open the **AWS IAM Identity Center console**.
2. In the left navigation pane, choose **Users**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. In the **Email address** field, enter the `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
- c. In the **Confirm email address** field, reenter the email address from the previous step.
+ c. In the **Confirm email address** field, re-enter the email address from the previous step.
d. In the First name field, enter `Jane`.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the Display name field, enter `Jane Doe`.
- g. Choose **Next: Groups**.
+ g. Choose **Next**, and then **Next** again.
> [!NOTE]
- > Make sure the username entered in AWS SSO matches the user's Azure AD sign-in name. This will you help avoid any authentication problems.
+ > Make sure the username entered in AWS IAM Identity Center matches the user's Azure AD sign-in name. This will help you avoid authentication problems.
5. Choose **Add user**. 6. Next, you will assign the user to your AWS account. To do so, in the left navigation pane of the
-AWS SSO console, choose **AWS accounts**.
+AWS IAM Identity Center console, choose **AWS accounts**.
7. On the AWS Accounts page, select the AWS organization tab, check the box next to the AWS account you want to assign to the user. Then choose **Assign users**. 8. On the Assign Users page, find and check the box next to the user B.Simon. Then choose **Next:
permission set**.
> [!NOTE] > Permission sets define the level of access that users and groups have to an AWS account. To learn more
-about permission sets, see the AWS SSO **Permission Sets** page.
+about permission sets, see the **AWS IAM Identity Center Multi Account Permissions** page.
10. Choose **Finish**. > [!NOTE]
-> AWS Single Sign-On also supports automatic user provisioning, you can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> AWS IAM Identity Center also supports automatic user provisioning. You can find more details on how to configure it [here](./aws-single-sign-on-provisioning-tutorial.md).
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to AWS Single Sign-On sign-in URL where you can initiate the login flow.
+* Click **Test this application** in the Azure portal. This will redirect you to the AWS IAM Identity Center sign-in URL, where you can initiate the login flow.
-* Go to AWS Single Sign-On sign-in URL directly and initiate the login flow from there.
+* Go to the AWS IAM Identity Center sign-in URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS Single Sign-On for which you set up the SSO.
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the AWS IAM Identity Center instance for which you set up SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the AWS Single Sign-On tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AWS Single Sign-On for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the AWS IAM Identity Center tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the AWS IAM Identity Center instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure AWS Single Sign-On you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS IAM Identity Center, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md). ## Step 2. Import ObjectGUID attribute via Azure AD Connect (Optional)
-If you have previously provisioned user identities from on-premise AD to Cisco Umbrella and would now like to provision the same users from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella reporting. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
+If your endpoints are running AnyConnect or the Cisco Secure Client version 4.10 MR5 or earlier, you will need to synchronize the ObjectGUID attribute for user identity attribution. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
> [!NOTE] > The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
![Screenshot that shows the "Directory extensions" selection page](./media/cisco-umbrella-user-management-provisioning-tutorial/active-directory-connect-directory-extensions.png)
+> [!NOTE]
+> This step is not required if all your endpoints are running Cisco Secure Client or AnyConnect version 4.10 MR6 or higher.
## Step 3. Configure Cisco Umbrella User Management to support provisioning with Azure AD 1. Log in to [Cisco Umbrella dashboard](https://login.umbrella.com). Navigate to **Deployments** > **Core Identities** > **Users and Groups**.
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
+
+ Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS)
++ Last updated : 08/01/2022+++
+# Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster (Preview)
+
+You can use the generally available [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect data-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
+
+Adding a node pool with CVM to your AKS cluster is currently in preview.
++
+## Before you begin
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+- An existing AKS cluster in the *westus*, *eastus*, *westeurope*, or *northeurope* region.
+- The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs available for your subscription.
+
+## Limitations
+
+The following limitations apply when adding a node pool with CVM to AKS:
+
+- You can't use `--enable-fips-image`, ARM64, or Mariner.
+- You can't upgrade an existing node pool to use CVM.
+- The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs must be available for your subscription in the region where the cluster is created.
+
+## Add a node pool with the CVM to AKS
+
+To add a node pool with CVM to AKS, use `az aks nodepool add` and set `node-vm-size` to `Standard_DC4as_v5`. For example:
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --node-count 3 \
+ --node-vm-size Standard_DC4as_v5
+```
+
+## Verify the node pool uses CVM
+
+To verify a node pool uses CVM, use `az aks nodepool show` and verify that `vmSize` is `Standard_DC4as_v5`. For example:
+
+```azurecli-interactive
+az aks nodepool show \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --query 'vmSize'
+```
+
+The following example command and output show that the node pool uses CVM:
+
+```output
+az aks nodepool show \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --query 'vmSize'
+
+"Standard_DC4as_v5"
+```
+
+## Remove a node pool with CVM from an AKS cluster
+
+To remove a node pool with CVM from an AKS cluster, use `az aks nodepool delete`. For example:
+
+```azurecli-interactive
+az aks nodepool delete \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool
+```
+
+## Next steps
+
+In this article, you learned how to add a node pool with CVM to an AKS cluster. For more information about CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
+
+<!-- LINKS - Internal -->
+[cvm]: ../confidential-computing/confidential-node-pool-aks.md
+[cvm-announce]: https://techcommunity.microsoft.com/t5/azure-confidential-computing/azure-confidential-vms-using-sev-snp-dcasv5-ecasv5-are-now/ba-p/3573747
+[cvm-subs-dc]: ../virtual-machines/dcasv5-dcadsv5-series.md
+[cvm-subs-ec]: ../virtual-machines/ecasv5-ecadsv5-series.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
A workload may require splitting a cluster's nodes into separate pools for logic
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original cidr. AKS will error out on the agent pool add now though we originally allowed it. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command will perform an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* If you expand your VNET after creating the cluster, you must update your cluster (by performing any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR. AKS will now error out on the agent pool add, though it was originally allowed. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments, as sketched after this list. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
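For illustration, the reconcile-only update mentioned above takes no optional arguments (resource names here are placeholders):

```azurecli-interactive
# No-op update that reconciles the managed cluster without changing it.
az aks update -g myResourceGroup -n myAKSCluster
```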
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 5/23/2022 Last updated : 7/29/2022
At this time, App Service Environment migrations to v3 using the migration featu
- East US 2 - France Central - Germany West Central
+- Japan East
- Korea Central - North Central US - North Europe
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 07/28/2022 Last updated : 07/29/2022
App Service Environment v3 is available in the following regions:
| Region | Normal and dedicated host | Availability zone support | | -- | :-: | :-: |
-| Australia East | x | x |
-| Australia Southeast | x | |
-| Brazil South | x | x |
-| Canada Central | x | x |
-| Canada East | x | |
-| Central India | x | x |
-| Central US | x | x |
-| East Asia | x | x |
-| East US | x | x |
-| East US 2 | x | x |
-| France Central | x | x |
-| Germany West Central | x | x |
-| Japan East | x | x |
-| Korea Central | x | x |
-| North Central US | x | |
-| North Europe | x | x |
-| Norway East | x | x |
-| South Africa North | x | x |
-| South Central US | x | x |
-| Southeast Asia | x | x |
-| Switzerland North | x | |
-| UAE North | x | |
-| UK South | x | x |
-| UK West | x | |
-| West Central US | x | |
-| West Europe | x | x |
-| West US | x | |
-| West US 2 | x | x |
-| West US 3 | x | x |
+| Australia East | x | x |
+| Australia Southeast | x | |
+| Brazil South | x | x |
+| Canada Central | x | x |
+| Canada East | x | |
+| Central India | x | x |
+| Central US | x | x |
+| East Asia | x | x |
+| East US | x | x |
+| East US 2 | x | x |
+| France Central | x | x |
+| Germany West Central | x | x |
+| Japan East | x | x |
+| Korea Central | x | x |
+| North Central US | x | |
+| North Europe | x | x |
+| Norway East | x | x |
+| South Africa North | x | x |
+| South Central US | x | x |
+| Southeast Asia | x | x |
+| Sweden Central | x | x |
+| Switzerland North | x | x |
+| UAE North | x | |
+| UK South | x | x |
+| UK West | x | |
+| West Central US | x | |
+| West Europe | x | x |
+| West US | x | |
+| West US 2 | x | x |
+| West US 3 | x | x |
### Azure Government:
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
# Azure App Service diagnostics overview
-When you're running a web application, you want to be prepared for any issues that may arise, from 500 errors to your users telling you that your site is down. App Service diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. When you do run into issues with your app, App Service diagnostics points out what's wrong to guide you to the right information to more easily and quickly troubleshoot and resolve the issue.
+When you're running a web application, you want to be prepared for any issues that may arise, from 500 errors to your users telling you that your site is down. App Service diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. If you do run into issues with your app, App Service diagnostics points out what's wrong to guide you to the right information to more easily and quickly troubleshoot and resolve the issue.
Although this experience is most helpful when you're having issues with your app within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
To access App Service diagnostics, navigate to your App Service web app or App S
For Azure Functions, navigate to your function app, and in the top navigation, click on **Platform features**, and select **Diagnose and solve problems** from the **Resource management** section.
-In the App Service diagnostics homepage, you can choose the category that best describes the issue with your app by using the keywords in each homepage tile. Also, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
+In the App Service diagnostics homepage, you can perform a search for a symptom you're seeing with your app, or choose a diagnostic category that best describes the issue with your app. Next, there is a new feature called Risk Alerts that provides an actionable report to improve your app. Finally, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
-![Homepage](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png)
+![App Service Diagnose and solve problems homepage with diagnostic search box, Risk Alerts assessments, and Troubleshooting categories for discovering diagnostics for the selected Azure Resource.](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png)
> [!NOTE] > If your app is down or performing slowly, you can [collect a profiling trace](https://azure.github.io/AppService/2018/06/06/App-Service-Diagnostics-Profiling-an-ASP.NET-Web-App-on-Azure-App-Service.html) to identify the root cause of the issue. Profiling is lightweight and is designed for production scenarios. >
-## Interactive interface
+## Diagnostic Interface
-Once you select a homepage category that best aligns with your app's problem, App Service diagnostics' interactive interface, Genie, can guide you through diagnosing and solving problem with your app. You can use the tile shortcuts provided by Genie to view the full diagnostic report of the problem category that you are interested. The tile shortcuts provide you a direct way of accessing your diagnostic metrics.
+The homepage for App Service diagnostics offers streamlined diagnostics access using four sections:
-![Tile shortcuts](./media/app-service-diagnostics/tile-shortcuts-2.png)
+- **Ask Genie search box**
+- **Risk Alerts**
+- **Troubleshooting categories**
+- **Popular troubleshooting tools**
-After clicking on these tiles, you can see a list of topics related to the issue described in the tile. These topics provide snippets of notable information from the full report. You can click on any of these topics to investigate the issues further. Also, you can click on **View Full Report** to explore all the topics on a single page.
+## Ask Genie search box
-![Topics](./media/app-service-diagnostics/application-logs-insights-3.png)
+The Genie search box is a quick way to find a diagnostic. The same diagnostic can be found through Troubleshooting categories.
-![View Full Report](./media/app-service-diagnostics/view-full-report-4.png)
+![App Service Diagnose and solve problems Genie search box with a search for availability app issues and a dropdown of diagnostics that match the availability search term, such as Best Practices for Availability and Performance, Web App Down, Web App Slow, High CPU Analysis, Web App Restarted.](./media/app-service-diagnostics/app-service-diagnostics-genie-alerts-search-1.png)
-## Diagnostic report
-After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic often supplemented with graphs and markdowns. Diagnostic report can be a powerful tool for pinpointing the problem with your app.
+## Risk Alerts
+
+The App Service diagnostics homepage performs a series of configuration checks and offers recommendations based on your unique application's configuration.
+
+![App Service Diagnose and solve problems Risk Alerts displays proactive App checks in a tile with a count of problems found and a link to view more details.](./media/app-service-diagnostics/app-service-diagnostics-risk-alerts-1.png)
+
+You can review the recommendations and the checks performed by selecting the **View more details** link.
+
+![App Service Diagnose and solve problems Risk Alerts right hand panel, with actionable insights tailored for the current Azure Resource App, after clicking View more details hyperlink on the homepage.](./media/app-service-diagnostics/app-service-diagnostics-risk-alerts-details-1.png)
+
+## Troubleshooting categories
+
+Troubleshooting categories group diagnostics for ease of discovery. The following are available:
+
+- **Availability and Performance**
+- **Configuration and Management**
+- **SSL and Domains**
+- **Risk Assessments**
+- **Navigator (Preview)**
+- **Diagnostic Tools**
-![Diagnostic report](./media/app-service-diagnostics/full-diagnostic-report-5.png)
-## Health checkup
+![App Service Diagnose and solve problems Troubleshooting categories list displaying Availability and Performance, Configuration and Management, SSL and Domains, Risk Assessments, Navigator (Preview) and Diagnostic Tools.](./media/app-service-diagnostics/app-service-diagnostics-troubleshooting-categories-1.png)
++
+The tiles or the Troubleshoot link show the available diagnostics for the category. For example, if you're interested in investigating Availability and Performance, the following diagnostics are offered:
+
+- **Overview**
+- **Web App Down**
+- **Web App Slow**
+- **High CPU Analysis**
+- **Memory Analysis**
+- **Web App Restarted**
+- **Application Change (Preview)**
+- **Application Crashes**
+- **HTTP 4xx Errors**
+- **SNAT Failed Connection Endpoints**
+- **SWAP Effects on Availability**
+- **TCP Connections**
+- **Testing in Production**
+- **WebJob Details**
++
+![App Service Diagnose and solve problems Availability and Performance category homepage, with left hand navigation containing Overview, Web App Down, Web App Slow, High CPU Analysis, Memory Analysis, Web App Restarted, Application Change (Preview), Application Crashes, HTTP 4xx Errors, SNAT Failed connection Endpoint, SNAT Port Exhaustion, Swap Effects on Availability, TCP Connections, Testing in Production, WebJob Details and the default availability dashboard for the last 24 hours of App usage, with a date and time selection interface.](./media/app-service-diagnostics/app-service-diagnostics-availability-and-performance-1.png)
+
+## Diagnostic report
-If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the health checkup is a good place to start. The health checkup analyzes your applications to give you a quick, interactive overview that points out what's healthy and what's wrong, telling you where to look to investigate the issue. Its intelligent and interactive interface provides you with guidance through the troubleshooting process. Health checkup is integrated with the Genie experience for Windows apps and web app down diagnostic report for Linux apps.
+After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic, often supplemented with graphs and markdown. The diagnostic report can be a powerful tool for pinpointing the problem with your app. The following is the Overview for Availability and Performance:
-### Health checkup graphs
+![App Service Diagnose and solve problems Availability and Performance category homepage with Web App Down diagnostic selected, which displays an availability chart, Organic SLA percentage and Observations and Solutions for problems that were detected.](./media/app-service-diagnostics/full-diagnostic-report-5.png)
-There are four different graphs in the health checkup.
+## Resiliency Score
-- **requests and errors:** A graph that shows the number of requests made over the last 24 hours along with HTTP server errors.-- **app performance:** A graph that shows response time over the last 24 hours for various percentile groups.-- **CPU usage:** A graph that shows the overall percent CPU usage per instance over the last 24 hours. -- **memory usage:** A graph that shows the overall percent physical memory usage per instance over the last 24 hours.
+If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the Get Resiliency Score report is a good place to start. Once a Troubleshooting category has been selected, the Get Resiliency Score report link is available, and clicking it produces a PDF document with actionable insights.
-![Health checkup](./media/app-service-diagnostics/health-checkup-6.png)
+![App Service Diagnose and solve problems Resiliency Score report, with a gauge indicating the app's resiliency score and what the app developer can do to improve the resilience of the app.](./media/app-service-diagnostics/app-service-diagnostics-resiliency-report-1.png)
### Investigate application code issues (only for Windows app)
Because many app issues are related to issues in your application code, App Serv
To view Application Insights exceptions and dependencies, select the **web app down** or **web app slow** tile shortcuts.
-### Troubleshooting steps (only for Windows app)
+### Troubleshooting steps
If an issue is detected with a specific problem category within the last 24 hours, you can view the full diagnostic report, and App Service diagnostics may prompt you to view more troubleshooting advice and next steps for a more guided experience.
Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365co
## More resources
-[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Many of the services such as self-service provisioning, automated backups/restor
## Supported regions
-The following table describes the scenarios that are currently supported for Azure Arc-enabled data services.
-
-|Azure Regions |Direct connected mode |Indirect connected mode |
-||||
-|East US | Available | Available
-|East US 2|Available|Available
-|West US|Available|Available
-|West US 2|Available|Available
-|West US 3|Available|Available
-|North Central US | Available | Available
-|Central US|Available|Available
-|South Central US|Available|Available
-|UK South|Available|Available
-|France Central|Available|Available
-|West Europe |Available |Available
-|North Europe|Available|Available
-|Japan East|Available|Available
-|Korea Central|Available|Available
-|Southeast Asia|Available|Available
-|Australia East|Available|Available
-|Canada Central|Available|Available
+To see the regions that currently support Azure Arc-enabled data services, go to [Azure Products by Region - Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?cdn=disable&products=azure-arc).
## Next steps
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Kublr |1.22.0 / 1.20.12 |v1.1.0_2021-11-02 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Kublr |1.22.3 / 1.22.10 | v1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
### Lenovo
To see how all Azure Arc-enabled components are validated, see [Validation progr
|--|--|--|--|--| | TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 |15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
-### WindRiver
+### Wind River
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|WindRiver| v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
+|Wind River Cloud Platform 22.06 | v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 07/28/2022 Last updated : 08/01/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
+ * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
+
+ ```azurecli-interactive
+ oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace>
+ ```
>[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported. * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster.
+* Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is &lt; 3.7.0.
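You can confirm this prerequisite from your shell; `helm version --short` is a standard Helm command (the output format varies by build):

```bash
# Print the installed Helm client version; it should be 3.x and below 3.7.0.
helm version --short
```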
+ ### [Azure PowerShell](#tab/azure-powershell) * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
- * If you want to connect a OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
+ * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
```bash oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-cache-for-redis Cache Redis Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md
+
+ Title: Using Redis modules with Azure Cache for Redis
+description: You can use Redis modules with your Azure Cache for Redis instances.
+++++ Last updated : 07/26/2022+++
+# Use Redis modules with Azure Cache for Redis
+
+With Azure Cache for Redis, you can use Redis modules as libraries to add more data structures and functionality to the core Redis software. You add the modules at the time you're creating your Enterprise tier cache.
+
+For more information on creating an Enterprise cache, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md).
+
+Modules were introduced in open-source Redis 4.0. The modules extend the use-cases of Redis by adding functionality like search capabilities and data structures like **bloom and cuckoo filters**.
+
+## Scope of Redis modules
+
+Some popular modules are available for use in the Enterprise tier of Azure Cache for Redis:
+
+| Module |Basic, Standard, and Premium |Enterprise |Enterprise Flash |
+|||||
+|RediSearch | No | Yes | Yes (preview) |
+|RedisBloom | No | Yes | No |
+|RedisTimeSeries | No | Yes | No |
+|RedisJSON | No | Yes (preview) | Yes (preview) |
+
+Currently, `RediSearch` is the only module that can be used concurrently with active geo-replication.
+
+> [!NOTE]
+> Currently, you can't manually load any modules into Azure Cache for Redis. Manually updating the module version is also not possible.
+>
+
+## Client library support
+
+The standard Redis client libraries have varying levels of support for each module. Some modules have specific libraries that add client support. Check the Redis [documentation pages](#modules) for each module to see more detail on which client libraries support them.
+
+## Adding modules to your cache
+
+You must add modules when you create your Enterprise tier cache. To add one or more modules, use the settings in the **Advanced** tab when creating the new cache.
+
+You can add all the available modules, or select only specific modules to install.
++
+> [!IMPORTANT]
+> Modules must be enabled at the time you create an Azure Cache for Redis instance.
+
+For more information, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md).
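As a sketch of how this might be scripted, the following uses the `az redisenterprise` CLI extension to enable modules at creation time. The cache name, resource group, location, SKU, and module arguments are placeholders, and the exact parameter format should be verified against the current CLI reference:

```bash
# Sketch: create an Enterprise tier cache with two modules enabled at creation.
# Modules can't be added after the cache exists; all names here are placeholders.
az extension add --name redisenterprise
az redisenterprise create \
  --cluster-name myRedisEnterprise \
  --resource-group myResourceGroup \
  --location eastus \
  --sku Enterprise_E10 \
  --modules name="RediSearch" \
  --modules name="RedisJSON"
```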
+
+## Modules
+
+The following modules are available when creating a new Enterprise cache.
+
+- [RediSearch](#redisearch)
+- [RedisBloom](#redisbloom)
+- [RedisTimeSeries](#redistimeseries)
+- [RedisJSON](#redisjson)
+
+### RediSearch
+
+The **RediSearch** module adds a real-time search engine to your cache, combining low-latency performance with powerful search features.
+
+Features include:
+
+- Multi-field queries
+- Aggregation
+- Prefix, fuzzy, and phonetic-based searches
+- Auto-complete suggestions
+- Geo-filtering
+- Boolean queries
+
+Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries.
+
+You can use **RediSearch** in a wide variety of use cases, including real-time inventory, enterprise search, and indexing external databases. For more information, see the [RediSearch documentation page](https://redis.io/docs/stack/search/).
+
+>[!IMPORTANT]
+> The RediSearch module can only be used with the `Enterprise` clustering policy. For more information, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
+
+>[!NOTE]
+> The RediSearch module is the only module that can be used with active geo-replication.
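As a minimal illustration of what these features look like at the command level, the following `redis-cli` sketch creates an index and queries it. It assumes a connection already authenticated to a cache with RediSearch enabled; the index name, prefix, and fields are illustrative:

```bash
# Create an index over hashes whose keys start with "product:".
redis-cli FT.CREATE idx:products ON HASH PREFIX 1 "product:" \
  SCHEMA name TEXT price NUMERIC SORTABLE

# Add a document, then run a full-text query against the index.
redis-cli HSET product:1 name "running shoes" price 59.99
redis-cli FT.SEARCH idx:products "shoes" RETURN 2 name price
```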
+
+### RedisBloom
+
+RedisBloom adds four probabilistic data structures to a Redis server: **bloom filter**, **cuckoo filter**, **count-min sketch**, and **top-k**. Each of these data structures offers a way to sacrifice perfect accuracy in return for higher speed and better memory efficiency.
+
+| **Data structure** | **Description** | **Example application**|
+| ||-|
+| **Bloom and Cuckoo filters** | Tells you if an item is either (a) certainly not in a set or (b) potentially in a set. | Checking if an email has already been sent to a user|
+|**Count-min sketch** | Determines the frequency of events in a stream | Counting how many times an IoT device reported a temperature under 0 degrees Celsius. |
+|**Top-k** | Finds the `k` most frequently seen items | Determining the most frequent words used in *War and Peace* (for example, setting `k` = 50 returns the 50 most common words in the book). |
+
+**Bloom and Cuckoo** filters are similar to each other, but each has a unique set of advantages and disadvantages that are beyond the scope of
+this documentation.
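For instance, the email scenario from the table might look like the following `redis-cli` sketch. It assumes an authenticated connection; the key and items are illustrative:

```bash
# Record that a newsletter was sent, then test membership.
# BF.EXISTS can return false positives but never false negatives.
redis-cli BF.ADD newsletter:sent "user@example.com"
redis-cli BF.EXISTS newsletter:sent "user@example.com"    # 1: probably in the set
redis-cli BF.EXISTS newsletter:sent "other@example.com"   # 0: certainly not in the set
```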
+
+For more information, see [RedisBloom](https://redis.io/docs/stack/bloom/).
+
+### RedisTimeSeries
+
+The **RedisTimeSeries** module adds high-throughput time series capabilities to your cache. This data structure is optimized for high volumes of incoming data and contains features to work with time series data, including:
+
+- Aggregated queries (for example, average, maximum, standard deviation, etc.)
+- Time-based queries (for example, start-time and end-time)
+- Downsampling/decimation
+- Data labeling for secondary indexing
+- Configurable retention period
+
+This module is useful for many applications that involve monitoring streaming data, such as IoT telemetry, application monitoring, and anomaly detection.
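A minimal `redis-cli` sketch of these capabilities, assuming an authenticated connection (the key, retention, and labels are illustrative):

```bash
# Create a series with a 24-hour retention period and a label for indexing.
redis-cli TS.CREATE temp:device42 RETENTION 86400000 LABELS sensor temperature

# Append a reading with a server-assigned timestamp, then query an
# averaged range over one-minute buckets across the whole series.
redis-cli TS.ADD temp:device42 "*" 21.5
redis-cli TS.RANGE temp:device42 - + AGGREGATION avg 60000
```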
+
+For more information, see [RedisTimeSeries](https://redis.io/docs/stack/timeseries/).
+
+### RedisJSON
+
+The **RedisJSON** module adds the capability to store, query, and search JSON-formatted data. This functionality is useful for storing document-like data within your cache.
+
+Features include:
+
+- Full support for the JSON standard
+- Wide range of operations for all JSON data types, including objects, numbers, arrays, and strings
+- Dedicated syntax and fast access to select and update elements inside documents
+
+The **RedisJSON** module is also designed for use with the **RediSearch** module to provide integrated indexing and querying of data within a Redis server. Using both modules together can be a powerful tool to quickly retrieve specific data points within JSON objects.
+
+Some common use-cases for **RedisJSON** include applications such as searching product catalogs, managing user profiles, and caching JSON-structured data.
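A short `redis-cli` sketch of document-style access, assuming an authenticated connection (the key and fields are illustrative):

```bash
# Store a JSON document, read a single field back with a JSONPath,
# and update one array in place.
redis-cli JSON.SET user:1001 '$' '{"name":"Ada","tags":["admin","dev"]}'
redis-cli JSON.GET user:1001 '$.name'
redis-cli JSON.ARRAPPEND user:1001 '$.tags' '"ops"'
```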
+
+For more information, see [RedisJSON](https://redis.io/docs/stack/json/).
+
+## Next steps
+
+- [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
+- [Client libraries](cache-best-practices-client-libraries.md)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 07/27/2022 Last updated : 08/01/2022 # What's New in Azure Cache for Redis
+## August 2022
+
+### RedisJSON module available in Azure Cache for Redis Enterprise
+
+The Enterprise and Enterprise Flash tiers of Azure Cache for Redis now support the **RedisJSON** module. This module adds native functionality to store, query, and search JSON-formatted data, which allows you to store data more easily in a document-style format in Redis. By using this module, you simplify common use cases like storing product catalog or user profile data.
+
+The **RedisJSON** module implements the community version of the module so you can use your existing knowledge and workstreams. **RedisJSON** is designed for use with the search functionality of **RediSearch**. Using both modules provides integrated indexing and querying of data. For more information, see [RedisJSON](https://aka.ms/redisJSON).
+
+The **RediSearch** module is also now available for Azure Cache for Redis. For more information on using Redis modules in Azure Cache for Redis, see [Use Redis modules with Azure Cache for Redis](cache-redis-modules.md).
+ ## July 2022 ### Redis 6 becomes default for new cache instances
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
To avoid excessive module upgrades on frequent Worker restarts, checking for mod
To learn more, see [Dependency management](functions-reference-powershell.md#dependency-management).
+## PIP\_INDEX\_URL
+
+This setting lets you override the base URL of the Python Package Index, which by default is `https://pypi.org/simple`. Use this setting when you need to run a remote build using custom dependencies that are found in a package index repository compliant with PEP 503 (the simple repository API) or in a local directory that follows the same format.
+
+|Key|Sample value|
+|||
+|PIP\_INDEX\_URL|`http://my.custom.package.repo/simple` |
+
+To learn more, see [`pip` documentation for `--index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-i) and using [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+ ## PIP\_EXTRA\_INDEX\_URL
-The value for this setting indicates a custom package index URL for Python apps. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index.
+The value for this setting indicates an extra index URL for custom packages for Python apps, used in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. This URL should follow the same rules as `--index-url`.
|Key|Sample value| ||| |PIP\_EXTRA\_INDEX\_URL|`http://my.custom.package.repo/simple` |
-To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+To learn more, see [`pip` documentation for `--extra-index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-extra-index-url) and [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
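As one way to apply these settings, the following sketch uses the Azure CLI to set both values on a function app. The app and resource group names are placeholders:

```bash
# Point remote builds at a custom PEP 503 index, keeping the public
# index available as an extra fallback. Names are placeholders.
az functionapp config appsettings set \
  --name myFunctionApp \
  --resource-group myResourceGroup \
  --settings "PIP_INDEX_URL=http://my.custom.package.repo/simple" \
             "PIP_EXTRA_INDEX_URL=https://pypi.org/simple"
```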
## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES (Preview)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
When your packages are available from an accessible custom package index, use a
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
+> [!NOTE]
+> If you need to change the base URL of the Python Package Index from the default of `https://pypi.org/simple`, you can do this by [creating an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) that points to a different package index URL. Like [`PIP_EXTRA_INDEX_URL`](functions-app-settings.md#pip_extra_index_url), [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) is a pip-specific application setting that changes the source for pip to use.
++ #### Installing local packages If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
Title: Create and manage classic metric alerts using Azure Monitor
-description: Learn how to use Azure portal, CLI or PowerShell to create, view and manage classic metric alert rules.
+description: Learn how to use Azure portal or PowerShell to create, view and manage classic metric alert rules.
> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**. >
-Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics cross a threshold. Classic metric alerts is an older functionality that allows for alerting only on non-dimensional metrics. There is an existing newer functionality called Metric alerts which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we will describe how to create, view and manage classic metric alert rules through Azure portal, Azure CLI and PowerShell.
+Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts is an older functionality that allows for alerting only on non-dimensional metrics. There is an existing newer functionality called Metric alerts, which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we will describe how to create, view and manage classic metric alert rules through Azure portal and PowerShell.
## With Azure portal
After you create an alert, you can select it and do one of the following tasks:
* Edit or delete it. * **Disable** or **Enable** it if you want to temporarily stop or resume receiving notifications for that alert.
-## With Azure CLI
-
-The previous sections described how to create, view and manage metric alert rules using Azure portal. This section will describe how to do the same using cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). Quickest way to start using Azure CLI is through [Azure Cloud Shell](../../cloud-shell/overview.md).
-
-### Get all classic metric alert rules in a resource group
-
-```azurecli
-az monitor alert list --resource-group <group name>
-```
-
-### See details of a particular classic metric alert rule
-
-```azurecli
-az monitor alert show --resource-group <group name> --name <alert name>
-```
-
-### Create a classic metric alert rule
-
-```azurecli
-az monitor alert create --name <alert name> --resource-group <group name> \
- --action email <email1 email2 ...> \
- --action webhook <URI> \
- --target <target object ID> \
- --condition "<METRIC> {>,>=,<,<=} <THRESHOLD> {avg,min,max,total,last} ##h##m##s"
-```
-
-### Delete a classic metric alert rule
-
-```azurecli
-az monitor alert delete --name <alert name> --resource-group <group name>
-```
- ## With PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
The following table is a reference to the programmatic interfaces for both class
| Deployment script type | Classic alerts | New metric alerts | | - | -- | -- | |REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | [az monitor alert](/cli/azure/monitor/metrics/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
+|Azure CLI | `az monitor alert` | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) | | Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md)|[For new metric alerts](./alerts-metric-create-templates.md)|
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
Title: Autoscale common metrics
-description: Learn which metrics are commonly used for autoscaling your Cloud Services, Virtual Machines and Web Apps.
+description: Learn which metrics are commonly used for autoscaling your cloud services, virtual machines, and web apps.
Last updated 04/22/2022
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-Azure Monitor autoscaling allows you to scale the number of running instances up or down, based on telemetry data (metrics). This document describes common metrics that you might want to use. In the Azure portal, you can choose the metric of the resource to scale by. However, you can also choose any metric from a different resource to scale by.
+Azure Monitor autoscaling allows you to scale the number of running instances up or down, based on telemetry data, also known as metrics. This article describes common metrics that you might want to use. In the Azure portal, you can choose the metric of the resource to scale by. You can also choose any metric from a different resource to scale by.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md). Other Azure services use different scaling methods.
+Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md). Other Azure services use different scaling methods.
## Compute metrics for Resource Manager-based VMs
-By default, Resource Manager-based Virtual Machines and Virtual Machine Scale Sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and VMSS, the Azure diagnostic extension also emits guest-OS performance counters (commonly known as "guest-OS metrics"). You use all these metrics in autoscale rules.
+By default, Azure Resource Manager-based virtual machines and virtual machine scale sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and virtual machine scale sets, the Azure Diagnostics extension also emits guest-OS performance counters. These counters are commonly known as "guest-OS metrics." You use all these metrics in autoscale rules.
-You can use the `Get MetricDefinitions` API/PoSH/CLI to view the metrics available for your VMSS resource.
+You can use the `Get MetricDefinitions` API, PowerShell, or the Azure CLI to view the metrics available for your Virtual Machine Scale Sets resource.
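For example, with the Azure CLI this might look like the following sketch (the resource ID is a placeholder):

```bash
# List the metric definitions available for a virtual machine scale set.
az monitor metrics list-definitions \
  --resource "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RES_GROUP_NAME/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS_NAME"
```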
-If you're using VM scale sets and you don't see a particular metric listed, then it is likely *disabled* in your diagnostics extension.
+If you're using virtual machine scale sets and you don't see a particular metric listed, it's likely *disabled* in your Diagnostics extension.
-If a particular metric is not being sampled or transferred at the frequency you want, you can update the diagnostics configuration.
+If a particular metric isn't being sampled or transferred at the frequency you want, you can update the diagnostics configuration.
-If either preceding case is true, then review [Use PowerShell to enable Azure Diagnostics in a virtual machine running Windows](../../virtual-machines/extensions/diagnostics-windows.md) about PowerShell to configure and update your Azure VM Diagnostics extension to enable the metric. That article also includes a sample diagnostics configuration file.
+If either preceding case is true, see [Use PowerShell to enable Azure Diagnostics in a virtual machine running Windows](../../virtual-machines/extensions/diagnostics-windows.md) to configure and update your Azure VM Diagnostics extension to enable the metric. The article also includes a sample diagnostics configuration file.
### Host metrics for Resource Manager-based Windows and Linux VMs
-The following host-level metrics are emitted by default for Azure VM and VMSS in both Windows and Linux instances. These metrics describe your Azure VM, but are collected from the Azure VM host rather than via agent installed on the guest VM. You may use these metrics in autoscaling rules.
+The following host-level metrics are emitted by default for Azure VM and virtual machine scale sets in both Windows and Linux instances. These metrics describe your Azure VM but are collected from the Azure VM host rather than via agent installed on the guest VM. You can use these metrics in autoscaling rules.
- [Host metrics for Resource Manager-based Windows and Linux VMs](../essentials/metrics-supported.md#microsoftcomputevirtualmachines)-- [Host metrics for Resource Manager-based Windows and Linux VM Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
+- [Host metrics for Resource Manager-based Windows and Linux virtual machine scale sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
### Guest OS metrics for Resource Manager-based Windows VMs
-When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale off of metrics that are not emitted by default.
+When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The Diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale off of metrics that aren't emitted by default.
You can generate a list of the metrics by using the following command in PowerShell.
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can create an alert for the following metrics:
-| Metric Name | Unit |
+| Metric name | Unit |
| | | | \Processor(_Total)\% Processor Time |Percent | | \Processor(_Total)\% Privileged Time |Percent |
You can create an alert for the following metrics:
### Guest OS metrics Linux VMs
-When you create a VM in Azure, diagnostics is enabled by default by using Diagnostics extension.
+When you create a VM in Azure, diagnostics is enabled by default by using the Diagnostics extension.
You can generate a list of the metrics by using the following command in PowerShell.
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can create an alert for the following metrics:
-| Metric Name | Unit |
+| Metric name | Unit |
| | | | \Memory\AvailableMemory |Bytes | | \Memory\PercentAvailableMemory |Percent |
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
| \NetworkInterface\TotalTxErrors |Count | | \NetworkInterface\TotalCollisions |Count |
-## Commonly used App Service (Server Farm) metrics
+## Commonly used App Service (server farm) metrics
-You can also perform autoscale based on common web server metrics such as the Http queue length. Its metric name is **HttpQueueLength**. The following section lists available server farm (App Service) metrics.
+You can also perform autoscale based on common web server metrics such as the HTTP queue length. Its metric name is **HttpQueueLength**. The following section lists available server farm (App Service) metrics.
### Web Apps metrics
-You can generate a list of the Web Apps metrics by using the following command in PowerShell.
+You can generate a list of the Web Apps metrics by using the following command in PowerShell:
``` Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can alert on or scale by these metrics.
-| Metric Name | Unit |
+| Metric name | Unit |
| | | | CpuPercentage |Percent | | MemoryPercentage |Percent |
You can alert on or scale by these metrics.
## Commonly used Storage metrics
-You can scale by Storage queue length, which is the number of messages in the storage queue. Storage queue length is a special metric and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That can be 100 messages per instance, 120 and 80, or any other combination that adds up to 200 or more.
+You can scale by Azure Storage queue length, which is the number of messages in the Storage queue. Storage queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-Configure this setting in the Azure portal in the **Settings** blade. For VM scale sets, you can update the Autoscale setting in the Resource Manager template to use *metricName* as *ApproximateMessageCount* and pass the ID of the storage queue as *metricResourceUri*.
+Configure this setting in the Azure portal in the **Settings** pane. For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
-For example, with a Classic Storage Account the autoscale setting metricTrigger would include:
+For example, with a Classic Storage account, the autoscale setting `metricTrigger` would include:
``` "metricName": "ApproximateMessageCount",
For example, with a Classic Storage Account the autoscale setting metricTrigger
"metricResourceUri": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RES_GROUP_NAME/providers/Microsoft.ClassicStorage/storageAccounts/STORAGE_ACCOUNT_NAME/services/queue/queues/QUEUE_NAME" ```
-For a (non-classic) storage account, the metricTrigger would include:
+For a (non-classic) Storage account, the `metricTrigger` setting would include:
``` "metricName": "ApproximateMessageCount",
For a (non-classic) storage account, the metricTrigger would include:
## Commonly used Service Bus metrics
-You can scale by Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That can be 100 messages per instance, 120 and 80, or any other combination that adds up to 200 or more.
+You can scale by Azure Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-For VM scale sets, you can update the Autoscale setting in the Resource Manager template to use *metricName* as *ApproximateMessageCount* and pass the ID of the storage queue as *metricResourceUri*.
+For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
``` "metricName": "ApproximateMessageCount",
For VM scale sets, you can update the Autoscale setting in the Resource Manager
``` > [!NOTE]
-> For Service Bus, the resource group concept does not exist but Azure Resource Manager creates a default resource group per region. The resource group is usually in the 'Default-ServiceBus-[region]' format. For example, 'Default-ServiceBus-EastUS', 'Default-ServiceBus-WestUS', 'Default-ServiceBus-AustraliaEast' etc.
-
+> For Service Bus, the resource group concept doesn't exist, but Azure Resource Manager creates a default resource group per region. The resource group is usually in the Default-ServiceBus-[region] format. Examples are Default-ServiceBus-EastUS, Default-ServiceBus-WestUS, and Default-ServiceBus-AustraliaEast.
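As a sketch of how a queue-length rule might be created outside a Resource Manager template, the following uses `az monitor autoscale rule create` with a cross-resource metric source. The setting name, threshold, and resource IDs are placeholders, and the `--resource` usage for cross-resource metrics should be verified against the current CLI reference:

```bash
# Sketch: scale out by 1 instance when the Service Bus queue backs up.
# The queue resource ID supplies the metric source; all names are placeholders.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --scale out 1 \
  --condition "ApproximateMessageCount > 200 avg 5m" \
  --resource "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RES_GROUP_NAME/providers/Microsoft.ServiceBus/namespaces/NAMESPACE_NAME/queues/QUEUE_NAME"
```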
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
Title: How to autoscale in Azure using a custom metric
-description: Learn how to scale your web app is custom metric in the Azure portal
+ Title: Autoscale in Azure using a custom metric
+description: Learn how to scale your web app by using custom metrics in the Azure portal.
Last updated 06/22/2022
-# Customer intent: As a user or dev ops administrator I want to use the portal to set up autoscale so I can scale my resources.
+# Customer intent: As a user or dev ops administrator, I want to use the portal to set up autoscale so I can scale my resources.
-# How to autoscale a web app using custom metrics.
+# Autoscale a web app by using custom metrics
-This article describes how to set up autoscale for a web app using a custom metric in the Azure portal.
+This article describes how to set up autoscale for a web app by using a custom metric in the Azure portal.
-Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article we'll show you how to set up autoscale for a web app, using one of the Application Insights metrics to scale the web app in and out.
+Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article, we'll show you how to set up autoscale for a web app by using one of the Application Insights metrics to scale the web app in and out.
Azure Monitor autoscale applies to:
-+ [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
-+ [Cloud Services](https://azure.microsoft.com/services/cloud-services/)
-+ [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/)
-+ [Azure Data Explorer Cluster](https://azure.microsoft.com/services/data-explorer/)
-+ Integration Service Environment and [API Management services](../../api-management/api-management-key-concepts.md).
-## Prerequisites
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
++ [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)++ [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/)++ [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/)++ [Azure Data Explorer cluster](https://azure.microsoft.com/services/data-explorer/) ++ Integration service environment and [Azure API Management](../../api-management/api-management-key-concepts.md)+
+## Prerequisite
+
+You need an Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free).
## Overview
-To create an autoscaled web app, follow the steps below.
-1. If you do not already have one, [Create an App Service Plan](#create-an-app-service-plan). Note that you can't set up autoscale for free or basic tiers.
-1. If you do not already have one, [Create a web app](#create-a-web-app) using your service plan.
+
+To create an autoscaled web app:
+
+1. If you don't already have one, [create an App Service plan](#create-an-app-service-plan). You can't set up autoscale for free or basic tiers.
+1. If you don't already have one, [create a web app](#create-a-web-app) by using your service plan.
1. [Configure autoscaling](#configure-autoscale) for your service plan.
-
-## Create an App Service Plan
+## Create an App Service plan
-An App Service plan defines a set of compute resources for a web app to run on.
+An App Service plan defines a set of compute resources for a web app to run on.
1. Open the [Azure portal](https://portal.azure.com). 1. Search for and select **App Service plans**.
- :::image type="content" source="media\autoscale-custom-metric\search-app-service-plan.png" alt-text="Screenshot of the search bar, searching for app service plans.":::
+ :::image type="content" source="media\autoscale-custom-metric\search-app-service-plan.png" alt-text="Screenshot that shows searching for App Service plans.":::
-1. Select **Create** from the **App Service plan** page.
+1. On the **App Service plan** page, select **Create**.
1. Select a **Resource group** or create a new one. 1. Enter a **Name** for your plan. 1. Select an **Operating system** and **Region**.
-1. Select an **Sku and size**.
+1. Select an **SKU** and **size**.
+ > [!NOTE]
- > You cannot use autoscale with free or basic tiers.
+ > You can't use autoscale with free or basic tiers.
-1. Select **Review + create**, then **Create**.
+1. Select **Review + create** > **Create**.
- :::image type="content" source="media\autoscale-custom-metric\create-app-service-plan.png" alt-text="Screenshot of the Basics tab of the Create App Service Plan screen that you configure the App Service plan on.":::
+ :::image type="content" source="media\autoscale-custom-metric\create-app-service-plan.png" alt-text="Screenshot that shows the Basics tab of the Create App Service Plan screen on which you configure the App Service plan.":::
## Create a web app
-1. Search for and select *App services*.
+1. Search for and select **App services**.
- :::image type="content" source="media\autoscale-custom-metric\search-app-services.png" alt-text="Screenshot of the search bar, searching for app service.":::
+ :::image type="content" source="media\autoscale-custom-metric\search-app-services.png" alt-text="Screenshot that shows searching for App Services.":::
-1. Select **Create** from the **App Services** page.
+1. On the **App Services** page, select **Create**.
1. On the **Basics** tab, enter a **Name** and select a **Runtime stack**.
-1. Select the **Operating System** and **Region** that you chose when defining your App Service plan.
+1. Select the **Operating System** and **Region** that you chose when you defined your App Service plan.
1. Select the **App Service plan** that you created earlier.
-1. Select the **Monitoring** tab from the menu bar.
+1. Select the **Monitoring** tab.
- :::image type="content" source="media\autoscale-custom-metric\create-web-app.png" alt-text="Screenshot of the Basics tab of the Create web app page where you set up a web app.":::
+ :::image type="content" source="media\autoscale-custom-metric\create-web-app.png" alt-text="Screenshot that shows the Basics tab of the Create Web App page where you set up a web app.":::
1. On the **Monitoring** tab, select **Yes** to enable Application Insights.
-1. Select **Review + create**, then **Create**.
-
- :::image type="content" source="media\autoscale-custom-metric\enable-application-insights.png"alt-text="Screenshot of the Monitoring tab of the Create web app page where you enable Application Insights.":::
+1. Select **Review + create** > **Create**.
+ :::image type="content" source="media\autoscale-custom-metric\enable-application-insights.png"alt-text="Screenshot that shows the Monitoring tab of the Create Web App page where you enable Application Insights.":::
## Configure autoscale+ Configure the autoscale settings for your App Service plan.
-1. Search and select *autoscale* in the search bar or select **Autoscale** under **Monitor** in the side menu bar.
+1. Search and select **autoscale** in the search bar or select **Autoscale** under **Monitor** in the menu bar on the left.
1. Select your App Service plan. You can only configure production plans.
- :::image type="content" source="media\autoscale-custom-metric\autoscale-overview-page.png" alt-text="A screenshot of the autoscale landing page where you select the resource to set up autoscale for.":::
+ :::image type="content" source="media\autoscale-custom-metric\autoscale-overview-page.png" alt-text="Screenshot that shows the Autoscale page where you select the resource to set up autoscale.":::
+
+### Set up a scale-out rule
-### Set up a scale out rule
-Set up a scale out rule so that Azure spins up an additional instance of the web app, when your web app is handling more than 70 sessions per instance.
+Set up a scale-out rule so that Azure spins up another instance of the web app when your web app is handling more than 70 sessions per instance.
1. Select **Custom autoscale**.
-1. In the **Rules** section of the default scale condition, select **Add a rule**.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
- :::image type="content" source="media/autoscale-custom-metric/autoscale-settings.png" alt-text="A screenshot of the autoscale settings page where you set up the basic autoscale settings.":::
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-settings.png" alt-text="Screenshot that shows the Autoscale setting page where you set up the basic autoscale settings.":::
1. From the **Metric source** dropdown, select **Other resource**.
-1. From **Resource Type**, select **Application Insights**.
+1. From **Resource type**, select **Application Insights**.
1. From the **Resource** dropdown, select your web app.
-1. Select a **Metric name** to base your scaling on, for example *Sessions*.
-1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
-1. 1. From the **Operator** dropdown, select **Greater than**.
-1. Enter the **Metric threshold to trigger the scale action**, for example, *70*.
-1. Under **Actions**, set the **Operation** to *Increase count* and set the **Instance count** to *1*.
+1. Select a **Metric name** to base your scaling on. For example, use **Sessions**.
+1. Select the **Enable metric divide by instance count** checkbox so that the number of sessions per instance is measured.
+1. From the **Operator** dropdown, select **Greater than**.
+1. Enter the **Metric threshold to trigger the scale action**. For example, use **70**.
+1. Under **Action**, set **Operation** to **Increase count by**. Set **Instance count** to **1**.
1. Select **Add**.
- :::image type="content" source="media/autoscale-custom-metric/scale-out-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale out rule.":::
+ :::image type="content" source="media/autoscale-custom-metric/scale-out-rule.png" alt-text="Screenshot that shows the Scale rule page where you configure the scale-out rule.":::
+
+### Set up a scale-in rule
+Set up a scale-in rule so that Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached.
-### Set up a scale in rule
-Set up a scale in rule so Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached.
-1. In the **Rules** section of the default scale condition, select **Add a rule**.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
1. From the **Metric source** dropdown, select **Other resource**.
-1. From **Resource Type**, select **Application Insights**.
+1. From **Resource type**, select **Application Insights**.
1. From the **Resource** dropdown, select your web app.
-1. Select a **Metric name** to base your scaling on, for example *Sessions*.
-1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
+1. Select a **Metric name** to base your scaling on. For example, use **Sessions**.
+1. Select the **Enable metric divide by instance count** checkbox so that the number of sessions per instance is measured.
1. From the **Operator** dropdown, select **Less than**.
-1. Enter the **Metric threshold to trigger the scale action**, for example, *60*.
-1. Under **Actions**, set the **Operation** to **Decrease count** and set the **Instance count** to *1*.
+1. Enter the **Metric threshold to trigger the scale action**. For example, use **60**.
+1. Under **Action**, set **Operation** to **Decrease count by** and set **Instance count** to **1**.
1. Select **Add**.
- :::image type="content" source="media/autoscale-custom-metric/scale-in-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale in rule.":::
+ :::image type="content" source="media/autoscale-custom-metric/scale-in-rule.png" alt-text="Screenshot that shows the Scale rule page where you configure the scale-in rule.":::
### Limit the number of instances
-1. Set the maximum number of instances that can be spun up in the **Maximum** field of the **Instance limits** section, for example, *4*.
+1. Set the maximum number of instances that can be spun up in the **Maximum** field of the **Instance limits** section. For example, use **4**.
1. Select **Save**.
- :::image type="content" source="media/autoscale-custom-metric/autoscale-instance-limits.png" alt-text="A screenshot of the autoscale settings page where you set up instance limits.":::
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-instance-limits.png" alt-text="Screenshot that shows the Autoscale setting page where you set up instance limits.":::
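The same instance limits can also be set from the Azure CLI; the following is a sketch with placeholder names:

```bash
# Sketch: enable autoscale on an App Service plan with a 1-4 instance range.
az monitor autoscale create \
  --resource-group myResourceGroup \
  --name myAutoscaleSetting \
  --resource myAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --min-count 1 --max-count 4 --count 1
```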
## Clean up resources
-If you're not going to continue to use this application, delete
-resources with the following steps:
-1. From the App service overview page, select **Delete**.
+If you're not going to continue to use this application, delete resources.
- :::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="A screenshot of the App Service page where you can Delete the web app.":::
+1. On the App Service overview page, select **Delete**.
-1. From The App Service Plan page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+ :::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="Screenshot that shows the App Service page where you can delete the web app.":::
- :::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="A screenshot of the App Service plan page where you can Delete the app service plan.":::
+1. On the **App Service plans** page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+
+ :::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="Screenshot that shows the App Service plans page where you can delete the App Service plan.":::
## Next steps
-Learn more about autoscale by referring to the following articles:
+
+To learn more about autoscale, see the following articles:
+ - [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md) - [Overview of autoscale](./autoscale-overview.md) - [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Last updated 04/05/2022
-# Get started with Autoscale in Azure
-This article describes how to set up your Autoscale settings for your resource in the Microsoft Azure portal.
+# Get started with autoscale in Azure
-Azure Monitor autoscale applies only to [Virtual Machine scale sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+This article describes how to set up your autoscale settings for your resource in the Azure portal.
-## Discover the Autoscale settings in your subscription
+Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md).
+
+## Discover the autoscale settings in your subscription
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4u7ts]
-You can discover all the resources for which Autoscale is applicable in Azure Monitor. Use the following steps for a step-by-step walkthrough:
+To discover all the resources for which autoscale is applicable in Azure Monitor, follow these steps.
1. Open the [Azure portal.][1]
-1. Click the Azure Monitor icon on the top of the page.
- [![Screenshot on how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
-1. Click **Autoscale** to view all the resources for which Autoscale is applicable, along with their current Autoscale status.
- [![Screenshot of Autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
-
+1. Select the Azure Monitor icon at the top of the page.
+
+ [![Screenshot that shows how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
+
+1. Select **Autoscale** to view all the resources for which autoscale is applicable, along with their current autoscale status.
+
+ [![Screenshot that shows autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
-You can use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
+1. Use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
-[![Screenshot of View resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
+ [![Screenshot that shows viewing resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
-For each resource, you will find the current instance count and the Autoscale status. The Autoscale status can be:
+ For each resource, you'll find the current instance count and the autoscale status. The autoscale status can be:
-- **Not configured**: You have not enabled Autoscale yet for this resource.-- **Enabled**: You have enabled Autoscale for this resource.-- **Disabled**: You have disabled Autoscale for this resource.
+ - **Not configured**: You haven't enabled autoscale yet for this resource.
+ - **Enabled**: You've enabled autoscale for this resource.
+ - **Disabled**: You've disabled autoscale for this resource.
+ You can also reach the scaling page by selecting **All Resources** on the home page and filtering to the resource you're interested in scaling.
-Additionally, you can reach the scaling page by clicking on **All Resources** on the home page and filter to the resource you're interested in scaling.
+ [![Screenshot that shows all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
-[![Screenshot of all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
+1. After you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+ [![Screenshot that shows the scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
-Once you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+## Create your first autoscale setting
-[![Screenshot of scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
+Let's now go through a step-by-step walkthrough to create your first autoscale setting.
-## Create your first Autoscale setting
+1. Open the **Autoscale** pane in Azure Monitor and select a resource that you want to scale. The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5]
+1. The current instance count is 1. Select **Custom autoscale**.
-Let's now go through a simple step-by-step walkthrough to create your first Autoscale setting.
+ [![Screenshot that shows scale setting for a new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
-1. Open the **Autoscale** blade in Azure Monitor and select a resource that you want to scale. (The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5])
-1. Note that the current instance count is 1. Click **Custom autoscale**.
- [![Scale setting for new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
-1. Provide a name for the scale setting, and then click **Add a rule**. This opens as a context pane on the right side. By default, this sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and click **Add**.
- [![Create scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
-1. You've now created your first scale rule. Note that the UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so:
+1. Provide a name for the scale setting. Select **Add a rule** to open a context pane on the right side. By default, this action sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and select **Add**.
- a. Click **Add a rule**.
+ [![Screenshot that shows creating a scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
- b. Set **Operator** to **Less than**.
+1. You've now created your first scale rule. The UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so:
- c. Set **Threshold** to **20**.
+ 1. Select **Add a rule**.
+ 1. Set **Operator** to **Less than**.
+ 1. Set **Threshold** to **20**.
+ 1. Set **Operation** to **Decrease count by**.
- d. Set **Operation** to **Decrease count by**.
+ You should now have a scale setting that scales out and scales in based on CPU usage.
- You should now have a scale setting that scales out/scales in based on CPU usage.
- [![Scale based on CPU](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
-1. Click **Save**.
+ [![Screenshot that shows scale based on CPU.](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
+
+1. Select **Save**.
Congratulations! You've now successfully created your first scale setting to autoscale your web app based on CPU usage. > [!NOTE]
-> The same steps are applicable to get started with a Virtual Machine Scale Set or cloud service role.
+> The same steps apply to get started with a virtual machine scale set or cloud service role.
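For reference, the walkthrough above produces an autoscale profile roughly like the following sketch. The resource IDs are placeholders, the capacity values are illustrative, and the metric name assumes an App Service plan, where CPU is exposed as `CpuPercentage`:

```json
{
  "name": "Auto created scale condition",
  "capacity": { "minimum": "1", "maximum": "2", "default": "1" },
  "rules": [
    {
      "metricTrigger": {
        "metricName": "CpuPercentage",
        "metricResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/serverfarms/{app-service-plan}",
        "timeGrain": "PT1M",
        "statistic": "Average",
        "timeWindow": "PT10M",
        "timeAggregation": "Average",
        "operator": "GreaterThan",
        "threshold": 70
      },
      "scaleAction": { "direction": "Increase", "type": "ChangeCount", "value": "1", "cooldown": "PT5M" }
    },
    {
      "metricTrigger": {
        "metricName": "CpuPercentage",
        "metricResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/serverfarms/{app-service-plan}",
        "timeGrain": "PT1M",
        "statistic": "Average",
        "timeWindow": "PT10M",
        "timeAggregation": "Average",
        "operator": "LessThan",
        "threshold": 20
      },
      "scaleAction": { "direction": "Decrease", "type": "ChangeCount", "value": "1", "cooldown": "PT5M" }
    }
  ]
}
```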
## Other considerations+
+The following sections introduce other considerations for autoscaling.
+ ### Scale based on a schedule
-In addition to scale based on CPU, you can set your scale differently for specific days of the week.
-1. Click **Add a scale condition**.
+You can set your scale differently for specific days of the week.
+
+1. Select **Add a scale condition**.
1. Setting the scale mode and the rules is the same as the default condition. 1. Select **Repeat specific days** for the schedule. 1. Select the days and the start/end time for when the scale condition should be applied.
-[![Scale condition based on schedule](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
+[![Screenshot that shows the scale condition based on schedule.](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
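In the autoscale setting's JSON, a scheduled condition is an additional profile that carries a `recurrence` block. A minimal sketch; the time zone, days, and times are illustrative:

```json
"recurrence": {
  "frequency": "Week",
  "schedule": {
    "timeZone": "Pacific Standard Time",
    "days": [ "Saturday", "Sunday" ],
    "hours": [ 6 ],
    "minutes": [ 0 ]
  }
}
```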
+ ### Scale differently on specific dates
-In addition to scale based on CPU, you can set your scale differently for specific dates.
-1. Click **Add a scale condition**.
+You can set your scale differently for specific dates.
+
+1. Select **Add a scale condition**.
1. Setting the scale mode and the rules is the same as the default condition. 1. Select **Specify start/end dates** for the schedule. 1. Select the start/end dates and the start/end time for when the scale condition should be applied.
-[![Scale condition based on dates](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
+[![Screenshot that shows the scale condition based on dates.](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
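In JSON, a date-bound condition is a profile that carries a `fixedDate` block instead of `recurrence`. The dates below are illustrative:

```json
"fixedDate": {
  "timeZone": "UTC",
  "start": "2022-12-24T00:00:00",
  "end": "2022-12-26T00:00:00"
}
```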
### View the scale history of your resource+ Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the scale history of your resource for the past 24 hours by switching to the **Run history** tab.
-![Run history][12]
+![Screenshot that shows a Run history screen.][12]
-If you want to view the complete scale history (for up to 90 days), select **Click here to see more details**. The activity log opens, with Autoscale pre-selected for your resource and category.
+To view the complete scale history for up to 90 days, select **Click here to see more details**. The activity log opens, with autoscale preselected for your resource and category.
### View the scale definition of your resource
-Autoscale is an Azure Resource Manager resource. You can view the scale definition in JSON by switching to the **JSON** tab.
-[![Scale definition](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
+Autoscale is an Azure Resource Manager resource. To view the scale definition in JSON, switch to the **JSON** tab.
+
+[![Screenshot that shows scale definition.](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
-You can make changes in JSON directly, if required. These changes will be reflected after you save them.
+You can make changes in JSON directly, if necessary. These changes will be reflected after you save them.
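The definition shown on the **JSON** tab has roughly the following shape. This is a trimmed sketch with placeholder IDs; the `profiles` array holds the default and scheduled conditions described earlier:

```json
{
  "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/microsoft.insights/autoscalesettings/{setting-name}",
  "name": "{setting-name}",
  "type": "Microsoft.Insights/autoscaleSettings",
  "location": "eastus",
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/serverfarms/{app-service-plan}",
    "profiles": [],
    "notifications": []
  }
}
```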
### Cool-down period effects
-Autoscale uses a cool-down period to prevent "flapping", which is the rapid, repetitive up and down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). Other valuable information on flapping and understanding how to monitor the autoscale engine can be found in [Autoscale Best Practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md) respectively.
+Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Autoscale best practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
## Route traffic to healthy instances (App Service) <a id="health-check-path"></a>
-When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [this article on App Service Health check](../../app-service/monitor-instances-health-check.md).
+When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [Monitor App Service instances using Health check](../../app-service/monitor-instances-health-check.md).
+
+## Move autoscale to a different region
+
+This section describes how to move Azure autoscale to another region under the same subscription and resource group. You can use REST API to move autoscale settings.
-## Moving Autoscale to a different region
-This section describes how to move Azure autoscale to another region under the same Subscription, and Resource Group. You can use REST API to move autoscale settings.
-### Prerequisite
-1. Ensure that the subscription and Resource Group are available and the details in both the source and destination regions are identical.
-1. Ensure that Azure autoscale is available in the [Azure region you want to move to](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all).
+### Prerequisites
+
+- Ensure that the subscription and resource group are available and the details in both the source and destination regions are identical.
+- Ensure that Azure autoscale is available in the [Azure region you want to move to](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all).
### Move+ Use [REST API](/rest/api/monitor/autoscalesettings/createorupdate) to create an autoscale setting in the new environment. The autoscale setting created in the destination region will be a copy of the autoscale setting in the source region.
-[Diagnostic settings](../essentials/diagnostic-settings.md) that were created in association with the autoscale setting in the source region cannot be moved. You will need to recreate diagnostic settings in the destination region, after the creation of autosale settings is completed.
+[Diagnostic settings](../essentials/diagnostic-settings.md) that were created in association with the autoscale setting in the source region can't be moved. You'll need to re-create diagnostic settings in the destination region, after the creation of autoscale settings is completed.
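As a hedged sketch, the move amounts to a `PUT` against `https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Insights/autoscaleSettings/{setting-name}?api-version=2015-04-01` (the api-version is an assumption; use the latest one supported in your environment), with a body that copies the source setting and points at the target region and resource:

```json
{
  "location": "{target-region}",
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Compute/virtualMachineScaleSets/{scale-set-in-target-region}",
    "profiles": []
  }
}
```

Populate `profiles` with the array returned by a `GET` on the source-region setting.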
### Learn more about moving resources across Azure regions
-To learn more about moving resources between regions and disaster recovery in Azure, refer to [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+
+To learn more about moving resources between regions and disaster recovery in Azure, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
## Next steps-- [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)-- [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)+
+- [Create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
+- [Create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
<!--Reference-->
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)
-description: Details on the new predictive autoscale feature in Azure Monitor.
+ Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)
+description: This article provides information on the new predictive autoscale feature in Azure Monitor.
Last updated 07/18/2022
-# Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)
+# Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)
-**Predictive autoscale** uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. By observing and learning from historical usage, it predicts the overall CPU load ensuring scale-out occurs in time to meet the demand.
+*Predictive autoscale* uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand.
-Predictive autoscale needs a minimum of 7 days of history to provide predictions, though 15 days of historical data provides the most accurate results. It adheres to the scaling boundaries you have set for your virtual machine scale set. When the system predicts that the percentage CPU load of your virtual machine scale set will cross your scale-out boundary, new instances are added according to your specifications. You can also configure how far in advance you would like new instances to be provisioned, up to 1 hour before the predicted workload spike will occur.
+Predictive autoscale needs a minimum of 7 days of history to provide predictions. The most accurate results come from 15 days of historical data.
-**Forecast only** allows you to view your predicted CPU forecast without actually triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before enabling the predictive autoscale feature.
+Predictive autoscale adheres to the scaling boundaries you've set for your virtual machine scale set. When the system predicts that the percentage CPU load of your virtual machine scale set will cross your scale-out boundary, new instances are added according to your specifications. You can also configure how far in advance you want new instances to be provisioned, up to 1 hour before the predicted workload spike will occur.
-## Public preview support, availability and limitations
+*Forecast only* allows you to view your predicted CPU forecast without triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before you enable the predictive autoscale feature.
+
+## Public preview support, availability, and limitations
>[!NOTE]
-> This is a public preview release. We are testing and gathering feedback for future releases. As such, we do not provide production level support for this feature. Support is best effort. Send feature suggestions or feedback on predicative autoscale to predautoscalesupport@microsoft.com.
+> This release is a public preview. We're testing and gathering feedback for future releases. As such, we do not provide production-level support for this feature. Support is best effort. Send feature suggestions or feedback on predictive autoscale to predautoscalesupport@microsoft.com.
During public preview, predictive autoscale is only available in the following regions:
During public preview, predictive autoscale is only available in the following r
The following limitations apply during public preview. Predictive autoscale: - Only works for workloads exhibiting cyclical CPU usage patterns.-- Only can be enabled for Virtual Machine Scale Sets.
+- Can only be enabled for virtual machine scale sets.
- Only supports using the metric *Percentage CPU* with the aggregation type *Average*.-- Only supports scale-out. You canΓÇÖt use predictive autoscale to scale-in.
+- Only supports scale-out. You can't use predictive autoscale to scale in.
+
+You must enable standard (or reactive) autoscale to manage scale-in.
-You have to enable standard (or reactive) autoscale to manage scale-in.
-Enabling predictive autoscale or forecast only with Azure portal
+## Enable predictive autoscale or forecast-only mode with the Azure portal
-1. Go to the virtual machine scale set screen and select on **Scaling**.
+1. Go to the **Virtual machine scale set** screen and select **Scaling**.
- :::image type="content" source="media/autoscale-predictive/main-scaling-screen-1.png" alt-text="Screenshot showing selecting the scaling screen from the left hand menu in Azure portal":::
+ :::image type="content" source="media/autoscale-predictive/main-scaling-screen-1.png" alt-text="Screenshot that shows selecting Scaling on the left menu in the Azure portal.":::
-2. Under **Custom autoscale** section, there's a new field called **Predictive autoscale**.
+1. Under the **Custom autoscale** section, **Predictive autoscale** appears.
- :::image type="content" source="media/autoscale-predictive/custom-autoscale-2.png" alt-text="Screenshot sowing selecting custom autoscale and then predictive autoscale option from Azure portal":::
+ :::image type="content" source="media/autoscale-predictive/custom-autoscale-2.png" alt-text="Screenshot that shows selecting Custom autoscale and the Predictive autoscale option in the Azure portal.":::
- Using the drop-down selection, you can:
- - Disable predictive autoscale - Disable is the default selection when you first land on the page for predictive autoscale.
- - Enable forecast only mode
- - Enable predictive autoscale
+ By using the dropdown selection, you can:
+ - Disable predictive autoscale. Disable is the default selection when you first land on the page for predictive autoscale.
+ - Enable forecast-only mode.
+ - Enable predictive autoscale.
- > [!NOTE]
- > Before you can enable predictive autoscale or forecast only mode, you must set up the standard reactive autoscale conditions.
+ > [!NOTE]
+ > Before you can enable predictive autoscale or forecast-only mode, you must set up the standard reactive autoscale conditions.
-3. To enable forecast only, select it from the dropdown. Define a scale up trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast only mode, choose **Disable** from the drop-down.
+1. To enable forecast-only mode, select it from the dropdown. Define a scale-up trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast-only mode, select **Disable** from the dropdown.
- :::image type="content" source="media/autoscale-predictive/enable-forecast-only-mode-3.png" alt-text="Screenshot of enable forecast only mode":::
+ :::image type="content" source="media/autoscale-predictive/enable-forecast-only-mode-3.png" alt-text="Screenshot that shows enabling forecast-only mode.":::
-4. If desired, specify a pre-launch time so the instances are full running before they're needed. You can pre-launch instances between 5 and 60 minutes before the needed prediction time.
+1. If desired, specify a pre-launch time so the instances are fully running before they're needed. You can pre-launch instances between 5 and 60 minutes before the needed prediction time.
- :::image type="content" source="media/autoscale-predictive/pre-launch-4.png" alt-text="Screenshot of predictive autoscale pre-launch setup":::
+ :::image type="content" source="media/autoscale-predictive/pre-launch-4.png" alt-text="Screenshot that shows predictive autoscale pre-launch setup.":::
-5. Once you have enabled predictive autoscale or forecast only and saved it, select *Predictive charts*.
+1. After you've enabled predictive autoscale or forecast-only mode and saved it, select **Predictive charts**.
- :::image type="content" source="media/autoscale-predictive/predictve-charts-option-5.png" alt-text="Screenshot of selecting predictive charts menu option":::
+ :::image type="content" source="media/autoscale-predictive/predictve-charts-option-5.png" alt-text="Screenshot that shows selecting the Predictive charts menu option.":::
-6. You see three charts:
+1. You see three charts:
- :::image type="content" source="media/autoscale-predictive/predictive-charts-6.png" alt-text="Screenshot of three charts for predictive autoscale" lightbox="media/autoscale-predictive/predictive-charts-6.png":::
+ :::image type="content" source="media/autoscale-predictive/predictive-charts-6.png" alt-text="Screenshot that shows three charts for predictive autoscale." lightbox="media/autoscale-predictive/predictive-charts-6.png":::
-- The top chart shows an overlaid comparison of actual vs predicted total CPU percentage. The timespan of the graph shown is from the last 24 hours to the next 24 hours.-- The second chart shows the number of instances running at specific times over the last 24 hours.-- The third chart shows the current Average CPU utilization over the last 24 hours.
+ - The top chart shows an overlaid comparison of actual versus predicted total CPU percentage. The time span of the graph shown is from the last 24 hours to the next 24 hours.
+ - The middle chart shows the number of instances running at specific times over the last 24 hours.
+ - The bottom chart shows the current Average CPU utilization over the last 24 hours.
## Enable using an Azure Resource Manager template
-1. Retrieve the virtual machine scale set resource ID and resource group of your virtual machine scale set. For example: /subscriptions/e954e48d-abcd-abcd-abcd-3e0353cb45ae/resourceGroups/patest2/providers/Microsoft.Compute/virtualMachineScaleSets/patest2
+1. Retrieve the virtual machine scale set resource ID and resource group of your virtual machine scale set. For example: /subscriptions/e954e48d-abcd-abcd-abcd-3e0353cb45ae/resourceGroups/patest2/providers/Microsoft.Compute/virtualMachineScaleSets/patest2
-2. Update *autoscale_only_parameters* file with the virtual machine scale set resource ID and any autoscale setting parameters.
+1. Update the *autoscale_only_parameters* file with the virtual machine scale set resource ID and any autoscale setting parameters.
-3. Use a PowerShell command to deploy the template containing the autoscale settings. For example,
+1. Use a PowerShell command to deploy the template that contains the autoscale settings. For example:
```cmd PS G:\works\kusto_onboard\test_arm_template> new-azurermresourcegroupdeployment -name binzAutoScaleDeploy -resourcegroupname cpatest2 -templatefile autoscale_only.json -templateparameterfile autoscale_only_parameters.json ``` **autoscale_only.json** ```json
PS G:\works\kusto_onboard\test_arm_template> new-azurermresourcegroupdeployment
} } ```
-
-For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md)
+
+For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md).
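For orientation, the part of the template that controls this feature is the `predictiveAutoscalePolicy` block on the autoscale setting. This is a minimal sketch, assuming an API version that supports predictive autoscale (2021-05-01-preview or later); the look-ahead value is illustrative:

```json
"properties": {
  "predictiveAutoscalePolicy": {
    "scaleMode": "Enabled",
    "scaleLookAheadTime": "PT10M"
  }
}
```

`scaleMode` accepts `Disabled`, `ForecastOnly`, or `Enabled`, and `scaleLookAheadTime` expresses the pre-launch time as an ISO 8601 duration.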
## Common questions
+This section answers common questions.
+ ### What happens over time when you turn on predictive autoscale for a virtual machine scale set?
-Prediction autoscale uses the history of a running virtual machine scale set. If your scale set has been running less than 7 days, you'll receive a message that the model is being trained. See the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by achieving its maximum accuracy 15 days after the virtual machine scale set is created.
+Predictive autoscale uses the history of a running virtual machine scale set. If your scale set has been running less than 7 days, you'll receive a message that the model is being trained. For more information, see the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by and achieve maximum accuracy 15 days after the virtual machine scale set is created.
-If changes to the workload pattern occur (but remain periodic), the model recognizes the change and begins to adjust the forecast accordingly. The forecast improves as time goes by. Maximum accuracy is reached 15 days after the change in the traffic pattern happens. Remember that your standard autoscale rules still apply. If a new unpredicted increase in traffic occurs, your virtual machine scale set will still scale out to meet the demand.
+If changes to the workload pattern occur but remain periodic, the model recognizes the change and begins to adjust the forecast. The forecast improves as time goes by. Maximum accuracy is reached 15 days after the change in the traffic pattern happens. Remember that your standard autoscale rules still apply. If a new unpredicted increase in traffic occurs, your virtual machine scale set will still scale out to meet the demand.
### What if the model isn't working well for me?
-The modeling works best with workloads that exhibit periodicity. We recommended you first evaluate the predictions by enabling "forecast only" which will overlay the scale setΓÇÖs predicted CPU usage with the actual, observed usage. Once you compare and evaluate the results, you can then choose to enable scaling based on the predicted metrics if the model predictions are close enough for your scenario.
+The modeling works best with workloads that exhibit periodicity. We recommend that you first evaluate the predictions by enabling "forecast only," which will overlay the scale set's predicted CPU usage with the actual, observed usage. After you compare and evaluate the results, you can then choose to enable scaling based on the predicted metrics if the model predictions are close enough for your scenario.
+
+### Why do I need to enable standard autoscale before I enable predictive autoscale?
-### Why do I need to enable standard autoscale before enabling predictive autoscale?
+Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes, which aren't part of your typical CPU load pattern. It also provides a fallback if an error occurs in retrieving the predictive data.
-Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes which aren't part of your typical CPU load pattern. It also provides a fallback should there be any error retrieving the predictive data.
+## Errors and warnings
-## Errors and Warnings
+This section addresses common errors and warnings.
### Didn't enable standard autoscale
-
-You receive the error message as seen below:
- *Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*.
+You receive the following error message:
+
+ *Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*.
This message means you attempted to enable predictive autoscale before you enabled standard autoscale and set it up to use the *Percentage CPU* metric with the *Average* aggregation type. ### No predictive data
-You won't see data on the predictive charts under certain conditions. This isn't an error; it's the intended behavior.
+You won't see data on the predictive charts under certain conditions. This behavior isn't an error; it's intended.
-When predictive autoscale is disabled, you instead receive a message beginning with "No data to show..." and giving you instructions on what to enable so you can see a predictive chart.
+When predictive autoscale is disabled, you instead receive a message that begins with "No data to show..." You then see instructions on what to enable so that you can see a predictive chart.
- :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot of message No data to show":::
+ :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot that shows the message No data to show.":::
-When you first create a virtual machine scale set and enable forecast only mode, you receive a message telling you "Predictive data is being trained.." and a time to return to see the chart.
+When you first create a virtual machine scale set and enable forecast-only mode, you receive the message "Predictive data is being trained..." and a time to return to see the chart.
- :::image type="content" source="media/autoscale-predictive/message-being-trained-12.png" alt-text="Screenshot of message Predictive data is being trained":::
+ :::image type="content" source="media/autoscale-predictive/message-being-trained-12.png" alt-text="Screenshot that shows the message Predictive data is being trained.":::
## Next steps
-Learn more about Autoscale by referring to the following:
+Learn more about autoscale in the following articles:
- [Overview of autoscale](./autoscale-overview.md) - [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
This section contains a declaration of all the destinations where the data will
This section ties the other sections together. It defines the following for each stream declared in the `streamDeclarations` section: - `destination` from the `destinations` section where the data will be sent. 
+- `transformKql` which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.
+- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of `outputStream` has the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.
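As a hedged sketch of how these properties fit together, a single `dataFlows` entry might look like the following; the stream, destination, and table names are placeholders:

```json
"dataFlows": [
  {
    "streams": [ "Custom-MyTableRawData" ],
    "destinations": [ "myWorkspaceDestination" ],
    "transformKql": "source | where RawData !has \"DEBUG\"",
    "outputStream": "Custom-MyTable_CL"
  }
]
```

## Azure Monitor agent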
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
ms.reviwer: nikeist
# Structure of transformation in Azure Monitor (preview)
-[Transformations in Azure Monitor](/data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
+[Transformations in Azure Monitor](data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
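As an illustrative sketch (the column names are assumptions, not from this article), a transformation is a single KQL statement that reads from the virtual `source` table and is stored in the DCR's `transformKql` property:

```json
{
  "transformKql": "source | where severity == \"Critical\" | extend Computer = tolower(Computer)"
}
```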
## Transformation structure
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/child-resource-name-type.md
The following example shows the child resource outside of the parent resource. Y
] ```
-When defined outside of the parent resource, you format the type and with slashes to include the parent type and name.
+When defined outside of the parent resource, you format the type and name values with slashes to include the parent type and name.
```json "type": "{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}",
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md
The Call Summary Log contains data to help you identify key properties of all Ca
| operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. | | category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. | | correlationIdentifier | `correlationIdentifier` is the unique ID for a Call. The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
-| identifier | This is the unique ID for the user, matching the identity assigned by the Communications Authentication service. You can use this ID to correlate user events across different logs. This ID can also be used to identify Microsoft Teams "Interoperability" scenarios described later in this document. |
+| identifier | This is the unique ID for the user. The identity can be an Azure Communication Services user, an Azure AD user ID, a Teams anonymous user ID, or a Teams bot ID. You can use this ID to correlate user events across different logs. |
| callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. | | callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. | | callType | Will contain either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
Call Diagnostic Logs provide important information about the Endpoints and the m
| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. | | correlationIdentifier | The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationIdentifier` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. | | participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `ΓÇ£ServerΓÇ¥`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| identifier | This ID represents the user identity, as defined by the Authentication service. Use this ID to correlate different events across calls and services. |
+| identifier | This is the unique ID for the user. The identity can be an Azure Communication Services user, an Azure AD user ID, a Teams anonymous user ID, or a Teams bot ID. You can use this ID to correlate user events across different logs. |
| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationIdentifier`) for native clients but will be unique for every Call when the client is a web browser. | | endpointType | This value describes the properties of each `endpointId`. Can contain `ΓÇ£ServerΓÇ¥`, `ΓÇ£VOIPΓÇ¥`, `ΓÇ£PSTNΓÇ¥`, `ΓÇ£BOTΓÇ¥`, or `ΓÇ£UnknownΓÇ¥`. | | mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `ΓÇ£AudioΓÇ¥`, `ΓÇ£VideoΓÇ¥`, `ΓÇ£VBSSΓÇ¥` (Video-Based Screen Sharing), and `ΓÇ£AppSharingΓÇ¥`. |
Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
"jitterAvg": "1", "jitterMax": "4", "packetLossRateAvg": "0",
-```
+```
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers the following types of logs that you can enable:
| URI | The URI of the request. | | SdkType | The SDK type used in the request. | | PlatformType | The platform type used in the request. |
-| Identity | The Communication Services identity related to the operation. |
+| Identity | The identity of the Azure Communication Services or Teams user related to the operation. |
| Scopes | The Communication Services scopes present in the access token. | ### Network Traversal operational logs
Communication Services offers the following types of logs that you can enable:
| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. | | EngagementType | The type of user engagement being tracked. | | EngagementContext | The context represents what the user interacted with. |
-| UserAgent | The user agent string from the client. |
+| UserAgent | The user agent string from the client. |
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for det
**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $.29 -
-### Pricing example: A user of the Communication Services JavaScript SDK joins a scheduled Microsoft Teams meeting
-
-Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
--- The call lasts a total of 30 minutes.-- When Bob joins the meeting, he's placed in the Teams meeting lobby per Teams policy. After one minute, Alice admits him into the meeting.-- After Bob is admitted to the meeting, Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.-- Alice sends five messages, Bob replies with three messages.--
-**Cost calculations**
--- One Participant (Bob) connected to Teams lobby x 1 minute x $0.004 per participant per minute (lobby charged at regular rate of meetings) = $0.004-- One participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]-- One participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.-- One participant (Bob) x three chat messages x $0.0008 = $0.0024.-- One participant (Alice) x five chat messages x $0.000 = $0.0*.-
-*Alice's participation is covered by her Teams license. Your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services Users for your convenience, but those minutes and messages originating from the Teams client won't be charged.
-
-**Total cost for the visit**:
-- User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224-- User joining on Teams Desktop Application: $0 (covered by Teams license)-
-### Pricing example: Inbound PSTN call to the Communication Services JavaScript SDK with Teams identity elevated to group call with another Teams user on Teams desktop client
-
-Alice has ordered a product from Contoso and struggles to set it up. Alice calls from her phone (Android) 800-CONTOSO to ask for help with the received product. Bob is a customer support agent in Contoso and sees an incoming call from Alice on the customer support website (Windows, Chrome browser). Bob accepts the incoming call via Communication Services JavaScript SDK initialized with Teams identity. Teams calling plan enables Bob to receive PSTN calls. Bob sees on the website the product ordered by Alice. Bob decides to invite product expert Charlie to the call. Charlie sees an incoming group call from Bob in the Teams Desktop client and accepts the call.
--- The call lasts a total of 30 minutes.-- Bob accepts the call from Alice.-- After five minutes, Bob adds Charlie to the call. Charlie has his camera turned off for 10 minutes. Then turns his camera on for the rest of the call. -- After another 10 minutes, Alice leaves the call. -- After another five minutes, both Bob and Charlie leave the call-
-**Cost calculations**
--- One Participant (Alice) called the phone number associated with Teams user Bob using Teams Calling plan x 25 minutes deducted from Bob's tenant Teams minute pool-- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]-- One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.-
-*Charlie's participation is covered by his Teams license.
-
-**Total cost of the visit**:
-- Teams cost for a user joining using the Communication Services JavaScript SDK: 25 minutes from Teams minute pool-- Communication Services cost for a user joining using the Communication Services JavaScript SDK: $0.12-- User joining on Teams Desktop client: $0 (covered by Teams license)-- ## Call Recording Azure Communication Services allows customers to record PSTN, WebRTC, Conference, SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
communication-services Teams Interop Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing/teams-interop-pricing.md
+
+ Title: Pricing for Teams interop scenarios
+
+description: Learn about Communication Services' Pricing Model for Teams interoperability
+++++ Last updated : 08/01/2022+++
+# Teams interoperability pricing
+
+Azure Communication Services and Graph API allow developers to integrate chat and calling capabilities into any product. The pricing depends on the following factors:
+- Identity
+- Product used for real-time communication
+
+The following sections describe pricing for communication based on these criteria.
+
+## Communication as Teams guest
+
+A Teams guest is a user who doesn't belong to any Azure AD tenant. A Teams administrator regulates guest access via policies targeting `Teams anonymous users`.
+
+### Teams clients
+The Teams meeting organizer's license covers the usage generated by Teams guests who join a Teams meeting via the built-in experience in the Teams web, desktop, and mobile clients. It doesn't cover usage generated in third-party Teams extensions and Teams apps. The following table shows the price of using Teams clients as a Teams guest:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Teams web, mobile, desktop client | $0|
+| Receive message | Teams web, mobile, desktop client | $0 |
+| Teams guest participates in Teams meeting with audio, video, screen sharing, and TURN services | Teams web, mobile, desktop client | $0 per minute |
+
+### APIs
+External customers who join a Teams meeting's audio, video, screen sharing, or chat create usage on an Azure Communication Services resource. Teams extensions and Teams apps use existing APIs to integrate communication, which generates consumption on Azure Communication Services resources. The following table shows the price of using Azure Communication Services as a Teams guest:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Azure Communication Services | $0.0008|
+| Receive message | Azure Communication Services | $0 |
+| Teams guest participates in Teams meeting with audio, video, screen sharing, and TURN services | Azure Communication Services | $0.004 per minute |
+
+A Teams guest in the lobby or on hold generates consumption on the Azure Communication Services resource.
+
+## Communication as Teams user
+
+A Teams user is an Azure AD user with appropriate licenses. Teams users can be from the same or different organizations, depending on the Azure AD tenant. A Teams administrator regulates the communication of Teams users via policies targeting `people in my organization` and `people in trusted organization`.
+
+### Teams clients
+The Teams meeting organizer's license covers the usage generated by Teams users who join Teams meetings and participate in calls via the built-in experience in the Teams web, desktop, and mobile clients. The license doesn't cover usage generated in third-party Teams extensions and Teams apps. The following table shows the price of using Teams clients as a Teams user:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Teams web, mobile, desktop client | $0|
+| Receive message | Teams web, mobile, desktop client | $0 |
+| Teams user participates in Teams meeting with audio, video, screen sharing, and TURN services | Teams web, mobile, desktop client | $0 per minute |
+
+### APIs
+Teams users participating in Teams meetings and calls generate usage on Azure Communication Services resources and Graph API for audio, video, screen sharing, and chat. Teams extensions and Teams apps use existing APIs to integrate communication, which generates consumption on Azure Communication Services resources or Graph API. The following table shows the price of using Azure Communication Services as a Teams user:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Graph API | $0|
+| Receive message | Graph API | $0 |
+| Teams user participates in Teams meeting with audio, video, screen sharing, and TURN services | Azure Communication Services | $0.004 per minute |
+
+A Teams user in the lobby or on hold generates consumption on the Azure Communication Services resource.
+
+## Pricing scenarios
+
+### Teams guest joins a scheduled Microsoft Teams meeting via the Azure Communication Services SDK
+
+Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
+
+- The call lasts a total of 30 minutes.
+- When Bob joins the meeting, he's placed in the Teams meeting lobby per Teams policy. After one minute, Alice admits him into the meeting.
+- After Bob is admitted to the meeting, Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
+- Alice sends five messages, Bob replies with three messages.
++
+**Cost calculations**
+
+- One Participant (Bob) connected to Teams lobby x 1 minute x $0.004 per participant per minute (lobby charged at regular rate of meetings) = $0.004
+- One participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]
+- One participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.
+- One participant (Bob) x three chat messages x $0.0008 = $0.0024.
+- One participant (Alice) x five chat messages x $0.000 = $0.0*.
+
+*Alice's participation is covered by her Teams license. For your convenience, your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services users, but minutes and messages originating from the Teams client won't be charged.
+
+**Total cost for the visit**:
+- User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224
+- User joining on Teams Desktop Application: $0 (covered by Teams license)
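+
+To make the arithmetic above concrete, here's a minimal JavaScript sketch of the same calculation. The rate constants come from the pricing tables earlier in this article; the variable names are illustrative only and aren't part of any SDK.
+
+```javascript
+// Rates from the pricing tables above (USD).
+const meetingRatePerMinute = 0.004; // audio/video/screen sharing, per participant
+const messageRate = 0.0008;         // per chat message sent via Communication Services
+
+// Bob joins via the Communication Services SDK:
+// 1 lobby minute + 29 meeting minutes, plus 3 chat messages.
+const bobCost = (1 + 29) * meetingRatePerMinute + 3 * messageRate;
+
+// Alice joins from the Teams client, so her usage is covered by her Teams license.
+const aliceCost = 0;
+
+console.log(bobCost.toFixed(4)); // "0.1224"
+console.log(aliceCost);          // 0
+```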
+
+### Inbound phone call to the Teams user using Azure Communication Services SDK elevated to group call with another Teams user on Teams desktop client
+
+Alice has ordered a product from Contoso and struggles to set it up. Alice calls 800-CONTOSO from her phone (Android) to ask for help with the product she received. Bob is a customer support agent at Contoso and sees an incoming call from Alice on the customer support website (Windows, Chrome browser). Bob accepts the incoming call via the Communication Services JavaScript SDK initialized with a Teams identity. A Teams calling plan enables Bob to receive PSTN calls. Bob sees the product Alice ordered on the website and decides to invite the product expert Charlie to the call. Charlie sees an incoming group call from Bob in the Teams desktop client and accepts the call.
+
+- The call lasts a total of 30 minutes.
+- Bob accepts the call from Alice.
+- After five minutes, Bob adds Charlie to the call. Charlie has his camera turned off for the first 10 minutes, then turns it on for the rest of the call.
+- After another 10 minutes, Alice leaves the call.
+- After another five minutes, both Bob and Charlie leave the call.
+
+**Cost calculations**
+
+- One participant (Alice) called the phone number associated with Teams user Bob using a Teams calling plan x 25 minutes, deducted from the Teams minute pool of Bob's tenant
+- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]
+- One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
+
+*Charlie's participation is covered by his Teams license.
+
+**Total cost of the call**:
+- Teams cost for a user joining using the Communication Services JavaScript SDK: 25 minutes from Teams minute pool
+- Communication Services cost for a user joining using the Communication Services JavaScript SDK: $0.12
+- User joining on Teams Desktop client: $0 (covered by Teams license)
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
The Calling package supports UWP apps built with .NET Native or C++/WinRT on:
## REST APIs
-Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
+Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on the [service limits page](./service-limits.md).
### REST API Throttles
confidential-computing Confidential Node Pool Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-node-pool-aks.md
+
+ Title: Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs - Preview
+description: Learn about confidential node pool support on AKS with AMD SEV-SNP confidential VMs
+
+ Last updated : 8/1/2022
+
+# Confidential VM node pool support on AKS with AMD SEV-SNP confidential VMs - Preview
+
+[Azure Kubernetes Service (AKS)](../aks/index.yml) makes it simple to deploy a managed Kubernetes cluster in Azure. In AKS, nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications.
+
+AKS now supports confidential VM node pools with Azure confidential VMs. These confidential VMs are the [generally available DCasv5 and ECasv5 confidential VM-series](https://aka.ms/AMD-ACC-VMs-GA-Inspire-2022) utilizing 3rd Gen AMD EPYC<sup>TM</sup> processors with Secure Encrypted Virtualization-Secure Nested Paging ([SEV-SNP](https://www.amd.com/en/technologies/infinity-guard)) security features. To read more about this offering, head to our [announcement](https://aka.ms/ACC-AKS-AMD-SEV-SNP-Preview-Blog).
+
+## Benefits
+Confidential node pools leverage VMs with a hardware-based Trusted Execution Environment (TEE). AMD SEV-SNP confidential VMs deny the hypervisor and other host management code access to VM memory and state, and add defense-in-depth protections against operator access.
+
+In addition to the hardened security profile, confidential node pools on AKS also enable:
+
+- Lift and Shift with full AKS feature support - to enable a seamless lift-and-shift of Linux container workloads
+- Heterogeneous Node Pools - to store sensitive data in a VM-level TEE node pool with memory encryption keys generated from the chipset itself
++
+Get started and add confidential node pools to an existing AKS cluster with [this quickstart guide](../aks/use-cvm.md).
+
+## Questions?
+
+If you have questions about container offerings, please reach out to <acconaks@microsoft.com>.
+
+## Next steps
+
+- [Deploy a confidential node pool in your AKS cluster](../aks/use-cvm.md)
+- Learn more about sizes and specs for [general purpose](../virtual-machines/dcasv5-dcadsv5-series.md) and [memory-optimized](../virtual-machines/ecasv5-ecadsv5-series.md) confidential VMs.
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 05/28/2022 Last updated : 07/30/2022 tags: connectors
You can add network security to an Azure storage account by [restricting access
- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption, Standard, and ISE-based logic apps, review the following documentation:
- - [Access storage accounts in same region with managed identities](#access-blob-storage-in-same-region-with-managed-identities)
+ - [Access storage accounts in same region with system-managed identities](#access-blob-storage-in-same-region-with-system-managed-identities)
- [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
To add your outbound IP addresses to the storage account firewall, follow these
You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
-### Access Blob Storage in same region with managed identities
+### Access Blob Storage in same region with system-managed identities
To connect to Azure Blob Storage in any region, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md). You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
To use managed identities in your logic app to access Blob Storage, follow these
> [!NOTE] > Limitations for this solution: >
-> - You must set up a managed identity to authenticate your storage account connection.
+> - To authenticate your storage account connection, you have to set up a system-assigned managed identity.
+> A user-assigned managed identity won't work.
>
-> - For Standard logic apps in the single-tenant Azure Logic Apps environment, only the system-assigned
-> managed identity is available and supported, not the user-assigned managed identity.
#### Configure storage account access
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPP
az monitor log-analytics query \ --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" \
- --out table
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | sort by TimeGenerated | take 5" \
+ --out table
``` # [PowerShell](#tab/powershell)
$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`
az monitor log-analytics query ` --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" `
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | sort by TimeGenerated | take 5" `
--out table ```
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
ms.devlang: javascript Previously updated : 06/23/2022 Last updated : 07/29/2022
-# Use a query in Azure Cosmos DB MongoDB API using JavaScript
+# Query data in Azure Cosmos DB MongoDB API using JavaScript
[!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
-Use queries to find documents in a collection.
+Use [queries](#query-for-documents) and [aggregation pipelines](#aggregation-pipelines) to find and manipulate documents in a collection.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
The preceding code snippet displays the following example console output:
:::code language="console" source="~/samples-cosmosdb-mongodb-javascript/275-find/index.js" id="console_result_findone":::
+## Aggregation pipelines
+
+Aggregation pipelines are useful for isolating expensive query computation, transformations, and other processing on your Cosmos DB server, instead of performing those operations on the client.
+
+For specific **aggregation pipeline support**, refer to the following:
+
+* [Version 4.2](feature-support-42.md#aggregation-pipeline)
+* [Version 4.0](feature-support-40.md#aggregation-pipeline)
+* [Version 3.6](feature-support-36.md#aggregation-pipeline)
+* [Version 3.2](feature-support-32.md#aggregation-pipeline)
+
+### Aggregation pipeline syntax
+
+A pipeline is an array with a series of stages as JSON objects.
+
+```javascript
+const pipeline = [
+ stage1,
+ stage2
+]
+```
+
+### Pipeline stage syntax
+
+A _stage_ defines the operation and the data it's applied to, such as:
+
+* $match - find documents
+* $addFields - add field to cursor, usually from previous stage
+* $limit - limit the number of results returned in cursor
+* $project - pass along new or existing fields, can be computed fields
+* $group - group results by a field or fields in pipeline
+* $sort - sort results
+
+```javascript
+// reduce collection to relevant documents
+const matchStage = {
+ '$match': {
+ 'categoryName': { $regex: 'Bikes' },
+ }
+}
+
+// sort documents on field `name`
+const sortStage = {
+ '$sort': {
+ "name": 1
+ }
+}
+```
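+
+The stage objects are then combined, in order, into the pipeline array that's aggregated in the next step. A minimal sketch using the two stages above:
+
+```javascript
+// Match first to reduce the working set, then sort the remaining documents.
+const pipeline = [
+  matchStage,
+  sortStage
+]
+```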
+
+### Aggregate the pipeline to get iterable cursor
+
+The pipeline is aggregated to produce an iterable cursor.
+
+```javascript
+const databaseName = 'adventureworks';
+const collectionName = 'products';
+
+const aggCursor = client.db(databaseName).collection(collectionName).aggregate(pipeline);
+
+await aggCursor.forEach(product => {
+ console.log(JSON.stringify(product));
+});
+```
+
+## Use an aggregation pipeline in JavaScript
+
+Use a pipeline to keep data processing on the server, returning only the results to the client.
+
+### Example product data
+
+The aggregations below use the [sample products collection](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/252-insert-many/products.json) with data in the shape of:
+
+```json
+[
+ {
+ "_id": "08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2",
+ "categoryId": "75BF1ACB-168D-469C-9AA3-1FD26BB4EA4C",
+ "categoryName": "Bikes, Touring Bikes",
+ "sku": "BK-T79U-50",
+ "name": "Touring-1000 Blue, 50",
+ "description": "The product called \"Touring-1000 Blue, 50\"",
+ "price": 2384.0700000000002,
+ "tags": [
+ ]
+ },
+ {
+ "_id": "0F124781-C991-48A9-ACF2-249771D44029",
+ "categoryId": "56400CF3-446D-4C3F-B9B2-68286DA3BB99",
+ "categoryName": "Bikes, Mountain Bikes",
+ "sku": "BK-M68B-42",
+ "name": "Mountain-200 Black, 42",
+ "description": "The product called \"Mountain-200 Black, 42\"",
+ "price": 2294.9899999999998,
+ "tags": [
+ ]
+ },
+ {
+ "_id": "3FE1A99E-DE14-4D11-B635-F5D39258A0B9",
+ "categoryId": "26C74104-40BC-4541-8EF5-9892F7F03D72",
+ "categoryName": "Components, Saddles",
+ "sku": "SE-T924",
+ "name": "HL Touring Seat/Saddle",
+ "description": "The product called \"HL Touring Seat/Saddle\"",
+ "price": 52.640000000000001,
+ "tags": [
+ ]
+ }
+]
+```
+
+### Example 1: Product subcategories, count of products, and average price
+
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
+
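+As a rough, illustrative sketch only (assuming the sample product shape above; the linked sample may split the subcategory out of `categoryName` differently), such a pipeline could look like this:
+
+```javascript
+// Group products by category name, then count them and average their prices.
+const pipeline = [
+  {
+    '$group': {
+      '_id': '$categoryName',
+      'countOfProducts': { '$sum': 1 },
+      'averagePrice': { '$avg': '$price' }
+    }
+  },
+  { '$sort': { '_id': 1 } }
+]
+
+const aggCursor = client.db(databaseName).collection(collectionName).aggregate(pipeline);
+
+await aggCursor.forEach(result => {
+  console.log(JSON.stringify(result));
+});
+```
+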
+### Example 2: Bike types with price range
+
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
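+
+Again as an illustrative sketch only (not the linked sample), a pipeline that reports a price range per bike type might look like this:
+
+```javascript
+// Keep only bike products, then compute the price range for each bike type.
+const pipeline = [
+  { '$match': { 'categoryName': { '$regex': 'Bikes' } } },
+  {
+    '$group': {
+      '_id': '$categoryName',
+      'minPrice': { '$min': '$price' },
+      'maxPrice': { '$max': '$price' }
+    }
+  }
+]
+```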
+
## See also
- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java.md
>
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Java app using the SQL Java SDK, and add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
> [!IMPORTANT] > This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
> * [Spark v3](create-sql-api-spark.md) > * [Go](create-sql-api-go.md) >
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. Without a credit card or an Azure subscription, you can set up a free 30-day [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
## Walkthrough video
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
> * [Go](create-sql-api-go.md) >
-This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
+This tutorial is a quickstart guide that shows how to use the Cosmos DB Spark Connector to read from or write to Cosmos DB. The Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
Throughout this quick tutorial, we rely on [Azure Databricks Runtime 8.0 with Spark 3.1.1](/azure/databricks/release-notes/runtime/8.0) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector, but you can also use [Azure Databricks Runtime 10.3 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.3).
You can use any other Spark 3.1.1 or 3.2.1 spark offering as well, also you shou
## Prerequisites
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
+* An active Azure account. If you don't have one, you can sign up for a [free account](https://aka.ms/trycosmosdb). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
* [Azure Databricks](/azure/databricks/release-notes/runtime/8.0) runtime 8.0 with Spark 3.1.1 or [Azure Databricks](/azure/databricks/release-notes/runtime/10.3) runtime 10.3 with Spark 3.2.1.
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spring-data.md
> * [Go](create-sql-api-go.md) >
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
> [!IMPORTANT] > These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sql-api-sdk-java-spring-v2.md).
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription or credit card. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-template.md
# Quickstart: Create an Azure Cosmos DB and a container by using an ARM template [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos database and a container within that database. You can later store data in this container.
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos database and a container within that database. You can later store data in this container.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
> * [Go](create-sql-api-go.md) >
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
Get started with the Azure Cosmos DB client library for .NET to create databases
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).
* [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/samples-dotnet.md
The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-d
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md). * [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-application.md
> * [Python](./create-sql-api-python.md) >
-This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. In this article, you will learn:
+This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this article, you will learn:
* How to build a basic JavaServer Pages (JSP) application in Eclipse. * How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos).
This Java application tutorial shows you how to create a web-based task-manageme
Before you begin this application development tutorial, you must have the following:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Sql Api Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-get-started.md
> * [Node.js](sql-api-nodejs-get-started.md) >
-As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them.
+As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
In this tutorial, you will:
In this tutorial, you will:
Make sure you have the following resources:
-* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/).
+* An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 07/18/2022 Last updated : 08/01/2022
As you begin to plan your product transfer, consider the information needed to a
 - Previous Azure offer in CSP
 - New Azure offer in CSP, also referred to as Azure Plan with a Microsoft Partner Agreement (MPA)
 - Enterprise Agreement (EA)
- - Microsoft Customer Agreement (MCA) in the Enterprise motion when you buy Azure services through a Microsoft representative and individual MCA when you buy Azure services through Azure.com
+ - Microsoft Customer Agreement (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.
+ - Microsoft Customer Agreement (MCA) that you bought through the Azure website. Also called an MCA individual agreement.
 - Others like MSDN, BizSpark, EOPEN, Azure Pass, and Free Trial
- Do you have the required permissions on the product to accomplish a transfer? Specific permission needed for each transfer type is listed in the following product transfer support table.
  - Only the billing administrator of an account can transfer subscription ownership.
cost-management-billing Troubleshoot Reservation Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-utilization.md
As usage data arrives, the value changes toward the correct percentage. When all
If you find that your utilization values don't match your expectations, review the graph to get the most accurate view of your actual utilization. Any point value older than two days should be accurate. Longer-term averages over seven to 30 days should be accurate.

## Other common scenarios
+- If the reservation status is "No Benefit", a warning message appears. To solve this, follow the recommendations presented on the reservation's page.
- You may have stopped running resource A and started running resource B, which isn't covered by the reservation you purchased. To solve this, you may need to exchange the reservation to match it to the right resource.
- You may have moved a resource from one subscription or resource group to another, whereas the scope of the reservation is different from where the resource is being moved to. To resolve this case, you may need to change the scope of the reservation.
- You may have purchased another reservation that also applied a benefit to the same scope, and as a result, less of an existing reserved instance applied a benefit. To solve this, you may need to exchange or refund one of the reservations.
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Amazon Marketplace Web Service connector is supported for the following activities:
+This Amazon Marketplace Web Service connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|--|--|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Amazon Marketplace Web Service to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+ For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-factory Connector Asana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md
This article outlines how to use Data Flow to transform data in Asana (Preview).
## Supported capabilities
-This Asana connector is supported for the following activities:
+This Asana connector is supported for the following capabilities:
-- [Mapping data flow](concepts-data-flow-overview.md)
+| Supported capabilities|IR |
+|--|--|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
## Create an Asana linked service using UI
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Concur connector is supported for the following activities:
+This Concur connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|--|--|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Concur to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
> [!NOTE] > Partner account is currently not supported.
data-factory Connector Dataworld https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md
This article outlines how to use Data Flow to transform data in data.world (Prev
## Supported capabilities
-This data.world connector is supported for the following activities:
+This data.world connector is supported for the following capabilities:
-- [Mapping data flow](concepts-data-flow-overview.md)
+| Supported capabilities|IR |
+|--|--|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
## Create a data.world linked service using UI
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md
This article outlines how to use Copy Activity in Azure Data Factory and Synapse
## Supported capabilities
-This Dynamics AX connector is supported for the following activities:
+This Dynamics AX connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|--|--|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Dynamics AX to any supported sink data store. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources and sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
Specifically, this Dynamics AX connector supports copying data from Dynamics AX using **OData protocol** with **Service Principal authentication**.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
This article outlines how to use a copy activity in Azure Data Factory or Synaps
This connector is supported for the following activities:

-- [Copy activity](copy-activity-overview.md) with [supported source and sink matrix](copy-activity-overview.md)
-- [Mapping data flow](concepts-data-flow-overview.md)
-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|--|--|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM to any supported sink data store. You also can copy data from any supported source data store to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
++
+For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
>[!NOTE] >Effective November 2020, Common Data Service has been renamed to [Microsoft Dataverse](/powerapps/maker/data-platform/data-platform-intro). This article is updated to reflect the latest terminology.
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Google AdWords connector is supported for the following activities:
+This Google AdWords connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|--|--|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-You can copy data from Google AdWords to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 07/12/2022 Last updated : 08/01/2022 # Overview of Microsoft Defender for Containers
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
### View vulnerabilities for running images
-The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry, will appear under the Not applicable tab.
+The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender agent. Images that are deployed from a non-ACR registry will appear under the Not applicable tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png"::: ## Run-time protection for Kubernetes nodes and clusters
-Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 07/27/2022 Last updated : 08/01/2022
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
|--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds |
| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those that are configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
### [**AWS (EKS)**](#tab/aws-eks)

| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those that are configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
### [**GCP (GKE)**](#tab/gcp-gke)

| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
Ensure your Kubernetes node is running on one of the verified supported operatin
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those that are configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
### [**On-prem/IaaS (Arc)**](#tab/iaas-arc)

| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
Ensure your Kubernetes node is running on one of the verified supported operatin
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those that are configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the virtual networks access configurations to **No**.
+
+Allowing data ingestion to occur only through Private Link scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
## Next steps
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a Defender for IoT plan for OT networks to a
:::image type="content" source="media/how-to-manage-subscriptions/onboard-plan-2.png" alt-text="Screenshot of adding a plan for OT networks to your subscription.":::
-1. Select **I accept the terms** option, and then select **Save**.
+1. Select the **I accept the terms** option, and then select **Save**.
Your OT networks plan will be shown under the associated subscription in the **Plans** grid.
Continue with one of the following tutorials, depending on whether you're settin
For more information, see: - [Welcome to Microsoft Defender for IoT for organizations](overview.md)
-- [Microsoft Defender for IoT architecture](architecture.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
The following table lists available integrations for Microsoft Defender for IoT,
|**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
| **Splunk** | Send Defender for IoT alerts to Splunk. | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
|**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
+|**Skybox** | Import vulnerability occurrence data discovered by Defender for IoT in your Skybox platform. | [Skybox documentation](https://docs.skyboxsecurity.com) <br><br> [Skybox integration page](https://www.skyboxsecurity.com/products/integrations) |
## Next steps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates |
|||
-|**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
+|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
-### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
+### Enterprise IoT and Defender for Endpoint integration in GA
-Defender for IoT's new purchase experience and the Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
+The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
-- An updated **Plans and pricing** page with an enhanced onboarding process, as well as smooth onboarding directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see the [Enterprise IoT tutorial](tutorial-getting-started-eiot-sensor.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). You can continue to view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
The common steps to subscribe to events published by any partner, including Grap
### Enable Microsoft Graph API events to flow to your partner topic > [!IMPORTANT]
-> Microsoft Graph API's (MGA) ability to send events to Even Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my application ID">mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability.
+> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my application ID">mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to the allowlist for this new capability.
You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample:
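
For illustration, here is a sketch of such a request issued with the Azure CLI's `az rest` command. All values are placeholders, and the `EventGrid:` notification URL format shown is an assumption for this private preview; confirm the exact format with the Microsoft Graph API team before relying on it.

```azurecli
# Create a Microsoft Graph subscription that delivers change notifications
# for user objects to an Event Grid partner topic (all values are placeholders).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/subscriptions" \
  --body '{
    "changeType": "updated",
    "notificationUrl": "EventGrid:?azuresubscriptionid=<azure-subscription-id>&resourcegroup=<resource-group>&partnertopic=<partner-topic-name>&location=<region>",
    "resource": "users",
    "expirationDateTime": "2022-08-02T11:00:00.0000000Z",
    "clientState": "secretClientValue"
  }'
```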
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
external-attack-surface-management Deploying The Defender Easm Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md
+
+ Title: Deploying the Defender EASM Azure resource
+description: This article explains how to deploy the Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal.
+++ Last updated : 07/14/2022+++
+# Deploying the Defender EASM Azure resource
+
+This article explains how to deploy the Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal.
+
+Deploying the EASM Azure resource involves two steps:
+
+- Create a resource group
+- Deploy the EASM resource to the resource group
+
+## Prerequisites
+
+Before you create a Defender EASM resource group, we recommend that you become familiar with how to access and use the [Microsoft Azure portal](https://ms.portal.azure.com/) and read the [Defender EASM Overview article](index.md) for key context on the product. You will need:
+
+- A valid Azure subscription or free Defender EASM trial account. If you don't have an [Azure subscription](/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a free Azure account before you begin.
+
+- Your Azure account must have a contributor role assigned for you to create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](/azure/role-based-access-control/role-assignments-steps) documentation, or contact your administrator.
+
+## Create a resource group
+
+1. To create a new resource group, first select **Resource groups** in the Azure portal.
+
+ ![Screenshot of resource groups pane highlighted from Azure home page](media/QuickStart-1.png)
+
+2. Under Resource Groups, select **Create**:
+
+   ![Screenshot of "create resource" highlighted in resource group list view](media/QuickStart-2.png)
+
+3. Select or enter the following property values:
+
+ - **Subscription**: Select an Azure subscription.
+ - **Resource Group**: Give the resource group a name.
+ - **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template.
+
+ ![Screenshot of create resource group basics tab](media/QuickStart-3.png)
+
+4. Select **Review + Create**.
+
+5. Review the values, and then select **Create**.
+
+6. Select **Refresh** to view the new resource group in the list.
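+
+If you prefer to script this step, the resource group can also be created with the Azure CLI. This is a minimal sketch; the group name and region are placeholders.
+
+```azurecli
+# Create the resource group that will hold the Defender EASM resource.
+az group create --name myEasmResourceGroup --location eastus
+
+# Confirm that the group exists.
+az group show --name myEasmResourceGroup --output table
+```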
+
+## Deploy resources to a resource group
+
+After you create a resource group, you can deploy resources to the group from the Marketplace. The Marketplace provides all services and pre-defined solutions available in Azure.
+
+1. To start a deployment, select **Create a resource** in the Azure portal.
+
+   ![Screenshot of "create resource" option highlighted from Azure home page](media/QuickStart-4.png)
+
+2. In the search box, type **Microsoft Defender EASM**, and then press Enter.
+
+3. Select the **Create** button to create an EASM resource.
+
+   ![Screenshot of "create" button highlighted from Defender EASM list view](media/QuickStart-5.png)
+
+4. Select or enter the following property values:
+
+ - **Subscription**: Select an Azure subscription.
+ - **Resource Group**: Select the Resource Group created in the earlier step, or you can create a new one as part of the process of creating this resource.
+   - **Name**: Give the Defender EASM workspace a name.
+ - **Region**: Select an Azure location.
+
+ ![Screenshot of create EASM resource basics tab](media/QuickStart-6.png)
+
+5. Select **Review + Create**.
+
+6. Review the values, and then select **Create**.
+
+7. Select **Refresh** to see the status of the deployment. Once the deployment is finished, you can go to the resource to get started.
+
+## Next steps
+
+- [Using and managing discovery](using-and-managing-discovery.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management Discovering Your Attack Surface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/discovering-your-attack-surface.md
+
+ Title: Discovering your attack surface
+description: Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets.
+++ Last updated : 07/14/2022+++
+# Discovering your attack surface
+
+## Prerequisites
+
+Before completing this tutorial, see the [What is discovery?](what-is-discovery.md) and [Using and managing discovery](using-and-managing-discovery.md) articles to understand key concepts mentioned in this article.
+
+## Accessing your automated attack surface
+
+Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets. It is recommended that all users search for their organization's attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to their Attack Surface.
+
+1. When first accessing your Defender EASM instance, select **Getting Started** in the **General** section to search for your organization in the list of automated attack surfaces.
+
+2. Then select your organization from the list and click **Build my Attack Surface**.
+
+![Screenshot of pre-configured attack surface option](media/Tutorial-1.png)
+
+At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. Read the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+
+If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
+
+## Customizing discovery
+Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. When you submit a larger list of known assets to operate as discovery seeds, the discovery engine returns a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
+
+## Discovery groups
+Custom discoveries are organized into Discovery Groups. They are independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
+
+## Creating a discovery group
+
+1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
+
+ ![Screenshot of EASM instance from overview page with manage section highlighted](media/Tutorial-2.png)
+
+2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, click **Add Discovery Group**.
+
+   ![Screenshot of Discovery screen with "add disco group" highlighted](media/Tutorial-3.png)
+
+3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs.
+
+ Select **Next: Seeds >**
+
+ ![Screenshot of first page of disco group setup](media/Tutorial-4.png)
+
+4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
+
+ ![Screenshot of seed selection page of disco group setup](media/Tutorial-5.png)
+
+ The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
+
+ ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Tutorial-6.png)
+
+ ![Screenshot of pre-baked attack surface selection page,](media/Tutorial-7.png)
+
+   Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that are likely connected to their central infrastructure but do not belong to the organization.
+
+ Once your seeds have been selected, select **Review + Create**.
+
+5. Review your group information and seed list, then select **Create & Run**.
+
+ ![Screenshot of review + create screen](media/Tutorial-8.png)
+
+You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
+
+## Next steps
+- [Understanding asset details](understanding-asset-details.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
+
+ Title: Overview
+description: Microsoft Defender External Attack Surface Management (Defender EASM) continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure.
+++ Last updated : 07/14/2022+++
+# Defender EASM Overview
+
+*Microsoft Defender External Attack Surface Management (Defender EASM)* continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
+Defender EASM leverages Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by leveraging vulnerability and infrastructure data to showcase the key areas of concern for your organization.
+
+![Screenshot of Overview Dashboard](media/Overview-1.png)
+
+## Discovery and inventory
+
+Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets to make inferences about that infrastructure's relationship to the organization and uncover previously unknown and unmonitored properties. These known legitimate assets are called discovery "seeds"; Defender EASM first discovers strong connections to these selected entities, recursing to unveil more connections and ultimately compile your Attack Surface.
+
+Defender EASM includes the discovery of the following kinds of assets:
+
+- Domains
+- Hostnames
+- Web Pages
+- IP Blocks
+- IP Addresses
+- ASNs
+- SSL Certificates
+- WHOIS Contacts
+
+![Screenshot of Discovery View](media/Overview-2.png)
+
+Discovered assets are indexed and classified in your Defender EASM Inventory, providing a dynamic record of all web infrastructure under the organization's management. Assets are categorized as recent (currently active) or historic, and can include web applications, third party dependencies, and other asset connections.
+
+## Dashboards
+
+Defender EASM provides a series of dashboards that help users quickly understand their online infrastructure and any key risks to their organization. These dashboards are designed to provide insight on specific areas of risk, including vulnerabilities, compliance, and security hygiene. These insights help customers quickly address the components of their attack surface that pose the greatest risk to their organization.
+
+![Screenshot of Dashboard View](media/Overview-3.png)
+
+## Managing assets
+
+Customers can filter their inventory to surface the specific insights they care about most. Filtering offers a level of flexibility and customization that enables users to access a specific subset of assets. This allows you to leverage Defender EASM data according to your specific use case, whether searching for assets that connect to deprecating infrastructure or identifying new cloud resources.
+
+![Screenshot of Inventory View](media/Overview-4.png)
+
+## Data residency, availability and privacy
+
+Microsoft Defender External Attack Surface Management contains both global data and customer-specific data. The underlying internet data is global Microsoft data; labels applied by customers are considered customer data. All customer data is stored in the region of the customer's choosing.
+
+For security purposes, Microsoft collects users' IP addresses when they log in. This data is stored for up to 30 days but may be stored longer if needed to investigate potential fraudulent or malicious use of the product.
+
+In the case of a region-down scenario, customers should see no downtime, as Defender EASM uses technologies that replicate data to a backup region.
+
+Defender EASM processes customer data. By default, customer data is replicated to the paired region.
+
+## Next Steps
+
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Understanding inventory assets](understanding-inventory-assets.md)
+- [What is discovery?](what-is-discovery.md)
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
+
+ Title: Understanding asset details
+description: Understanding asset details- Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# Understanding asset details
+
+## Overview
+
+Defender EASM frequently scans all inventory assets, collecting robust contextual metadata that powers Attack Surface Insights and can also be viewed more granularly on the Asset Details page. The provided data changes depending on the asset type. For instance, the platform provides unique WHOIS data for domains, hosts and IP addresses and signature algorithm data for SSL certificates.
+
+This article provides guidance on how to view and interpret the expansive data collected by Microsoft for each of your inventory assets. It defines this metadata for each asset type and explains how the insights derived from it can help you manage the security posture of your online infrastructure.
+
+*For more information, see [understanding inventory assets](understanding-inventory-assets.md) to familiarize yourself with the key concepts mentioned in this article.*
+
+## Asset details summary view
+
+You can view the Asset Details page for any asset by clicking on its name from your inventory list. On the left pane of this page, you can view an asset summary that provides key information about that particular asset. This section is primarily comprised of data that applies to all asset types, although additional fields will be available in some cases. See the chart below for more information on the metadata provided for each asset type in the summary section.
+
+![Screenshot of asset details, left-hand summary pane highlighted](media/Inventory_1.png)
+
+### General information
+
+This section is comprised of high-level information that is key to understanding your assets at a glance. Most of these fields are applicable to all assets, although this section can also include information that is specific to one or more asset types.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Asset Name | The name of an asset. | All |
+| UUID | This 128-bit label represents the universally unique identifier (UUID) for the asset. | All |
+| Status | The status of the asset within the RiskIQ system. Options include Approved Inventory, Candidate, Dependencies, or Requires Investigation. | All |
+| First seen | This column displays the date that the asset was first observed by crawling infrastructure. | All |
+| Last seen | This column displays the date that the asset was last observed by crawling infrastructure. | All |
+| Discovered on | The date that the asset was found in a discovery run scanning for assets related to an organizationΓÇÖs known infrastructure. | All |
+| Last updated | This column displays the date that the asset was last updated in the system after new data was found in a scan. | All |
+| Country | The country of origin detected for this asset. | All |
+| State/Province | The state or province of origin detected for this asset. | All |
+| City | The city of origin detected for this asset. | All |
+| WhoIs name | The name listed in a WhoIs record. | Host |
+| WhoIs email | The primary contact email in a WhoIs record. | Host |
+| WhoIS organization | The listed organization in a WhoIs record. | Host |
+| WhoIs registrar | The listed registrar in a WhoIs record. | Host |
+| WhoIs name servers | The listed name servers in a WhoIs record. | Host |
+| Certificate issued | The date when a certificate was issued. | SSL certificate |
+| Certificate expires | The date when a certificate will expire. | SSL certificate |
+| Serial number | The serial number associated with an SSL certificate. | SSL certificate |
+| SSL version | The version of SSL with which the certificate was registered. | SSL certificate |
+| Certificate key algorithm | The key algorithm used to encrypt the SSL certificate. | SSL certificate |
+| Certificate key size | The number of bits within an SSL certificate key. | SSL certificate |
+| Signature algorithm oid | The OID identifying the hash algorithm used to sign the certificate request. | SSL certificate |
+| Self-signed | Indicates whether the SSL certificate was self-signed.| SSL certificate |
+
+### Network
+
+IP address information that provides additional context about the usage of the IP.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Name server record | Any name servers detected on the asset. | IP address |
+| Mail server record | Any mail servers detected on the asset. | IP address |
+| IP Blocks | The IP block that contains the IP address asset. | IP address |
+| ASNs | The ASN associated with an asset. | IP address |
+
+### Block info
+
+Data specific to IP blocks that provides contextual information about its use.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| CIDR | The Classless Inter-Domain Routing (CIDR) for an IP Block. | IP block |
+| Network name | The network name associated to the IP block. | IP block |
+| Organization name | The organization name found in the registration information for the IP block. | IP block |
+| Org ID | The organization ID found in the registration information for the IP block. | IP block |
+| ASNs | The ASN associated with the IP block. | IP block |
+| Country | The country of origin as detected in the WhoIs registration information for the IP block. | IP block |
+
+### Subject
+
+Data specific to the subject (that is, the protected entity) associated with an SSL certificate.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Common name | The Common Name of the subject of the SSL certificate. | SSL certificate |
+| Alternate names | Any alternative common names for the subject of the SSL certificate.| SSL certificate |
+| Organization name | The organization linked to the subject of the SSL certificate. | SSL certificate |
+| Organization unit | Optional metadata that indicates the department within an organization that is responsible for the certificate. | SSL certificate |
+| Locality | Denotes the city where the organization is located. | SSL certificate |
+| Country | Denotes the country where the organization is located. | SSL certificate |
+| State/Province | Denotes the state or province where the organization is located. | SSL certificate |
+
+### Issuer
+
+Data specific to the issuer of an SSL Certificate.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Common name | The common name of the issuer of the certificate. | SSL certificate |
+| Alternate names | Any additional names of the issuer. | SSL certificate |
+| Organization name | The name of the organization that orchestrated the issue of a certificate. | SSL certificate |
+| Organization unit | Additional information about the organization issuing the certificate. | SSL certificate |
+
+## Data tabs
+
+In the right-hand pane of the Asset Details page, users can access more expansive data related to the selected asset. This data is organized in a series of categorized tabs. The available metadata tabs will change depending on the type of asset you're viewing.
+
+### Overview
+
+The Overview tab provides key additional context to ensure that significant insights are quickly identifiable when viewing the details of an asset. This section will include key discovery data for all asset types, providing insight about how Microsoft maps the asset to your known infrastructure. This section can also include dashboard widgets that visualize insights that are particularly relevant to the asset type in question.
+
+![Screenshot of asset details, right-hand overview pane highlighted](media/Inventory_2.png)
+
+### Discovery chain
+
+The discovery chain outlines the observed connections between a discovery seed and the asset. This information helps users visualize these connections and better understand why an asset was determined to belong to their organization.
+
+In the example below, we see that the seed domain is tied to this asset through the contact email in its WhoIs record. That same contact email was used to register the IP block that includes this particular IP address asset.
+
+![Screenshot of discovery chain](media/Inventory_3.png)
+
+### Discovery information
+
+This section provides information about the process used to detect the asset. It includes information about the discovery seed that connects to the asset, as well as the approval process. Options include "Approved Inventory", which indicates that the relationship between the seed and the discovered asset was strong enough to warrant an automatic approval by the Defender EASM system. Otherwise, the process will be listed as "Candidate", indicating that the asset required manual approval to be incorporated into your inventory. This section also provides the date that the asset was added to your inventory, as well as the date that it was last scanned in a discovery run.
+
+### IP reputation
+
+The IP reputation tab displays a list of potential threats related to a given IP address. This section outlines any detected malicious or suspicious activity that relates to the IP address. This is key to understanding the trustworthiness of your own attack surface; these threats can help organizations uncover past or present vulnerabilities in their infrastructure.
+
+Defender EASM's IP reputation data displays instances when the IP address was detected on a threat list. For instance, the recent detection in the example below shows that the IP address relates to a host known to be running a cryptocurrency miner. This data was derived from a suspicious host list supplied by CoinBlockers. Results are organized by the "last seen" date, surfacing the most relevant detections first. In this example, the IP address is present on an abnormally high number of threat feeds, indicating that the asset should be thoroughly investigated to prevent malicious activity in the future.
+
+![Screenshot of asset details, IP reputation tab](media/Inventory_4.png)
+
+### Services
+
+The "Services" tab is available for IP address, domain and host assets. This section provides information on services observed to be running on the asset, and includes IP addresses, name and mail servers, and open ports that correspond with additional types of infrastructure (e.g. remote access services). Defender EASM's Services data is key to understanding the infrastructure powering your asset. It can also alert you to resources that are exposed on the open internet and should be protected.
+
+![Screenshot of asset details, services tab](media/Inventory_5.png)
+
+### IP Addresses
+
+This section provides insight on any IP addresses that are running on the asset's infrastructure. On the Services tab, Defender EASM provides the name of the IP address, the first and last seen dates, and a recency column that indicates whether the IP address was observed during our most recent scan of the asset. If there is no checkbox in this column, the IP address has been seen in prior scans but is not currently running on the asset.
+
+![Screenshot of asset details, IP address section of services tab](media/Inventory_6.png)
+
+### Mail Servers
+
+This section provides a list of any mail servers running on the asset, indicating that the asset is capable of sending emails. In this section, Defender EASM provides the name of the mail server, the first and last seen dates, and a recency column that indicates whether the mail server was detected during our most recent scan of the asset.
+
+![Screenshot of asset details, mail server section of services tab](media/Inventory_7.png)
+
+### Name Servers
+
+This section displays any name servers running on the asset, providing resolution for a host. In this section, we provide the name of the name server, the first and last seen dates, and a recency column that indicates whether the name server was detected during our most recent scan of the asset.
+
+![Screenshot of asset details, name server section of services tab](media/Inventory_8.png)
+
+### Open Ports
+
+This section lists any open ports detected on the asset. Microsoft scans around 230 distinct ports on a regular basis. This data is useful to identify any unsecured services that shouldn't be accessible from the open internet, including databases, IoT devices, and network services like routers and switches. It's also helpful in identifying shadow IT infrastructure or insecure remote access services.
+
+In this section, Defender EASM provides the open port number, a description of the port, the last state it was observed in, the first and last seen dates, and a recency column that indicates whether the port was observed as open during Microsoft's most recent scan.
+
+![Screenshot of asset details, open ports section of services tab](media/Inventory_9.png)
+
+### Trackers
+
+Trackers are unique codes or values found within web pages that are often used to track user interaction. These codes can be used to correlate a disparate group of websites to a central entity. Microsoft's tracker dataset includes IDs from providers like Google, Yandex, Mixpanel, New Relic, and Clicky, and continues to grow on a regular basis.
+
+In this section, Defender EASM provides the tracker type (e.g. GoogleAnalyticsID), the unique identifier value, and the first and last seen dates.
+
+### Web components & CVEs
+
+Web components are details describing the infrastructure of an asset as observed through a Microsoft scan. These components provide a high-level understanding of the technologies leveraged on the asset. Microsoft categorizes the specific components and includes version numbers when possible.
+
+![Screenshot of top of Web components & CVEs tab](media/Inventory_10.png)
+
+The Web components section provides the category, name and version of the component, as well as a list of any applicable CVEs that should be remediated. Defender EASM also provides a first and last seen date as well as a recency indicator; a checked box indicates that this infrastructure was observed during our most recent scan of the asset.
+
+Web components are categorized based on their function. Options include:
+
+| Web Component | Examples |
+|--|--|
+| Hosting Provider | hostingprovider.com |
+| Server | Apache |
+| DNS Server | ISC BIND |
+| Data stores | MySQL, ElasticSearch, MongoDB |
+| Remote access | OpenSSH, Microsoft Admin Center, Netscaler Gateway |
+| Data Exchange | Pure-FTPd |
+| Internet of things (IoT) | HP Deskjet, Linksys Camera, Sonos |
+| Email server | ArmorX, Lotus Domino, Symantec Messaging Gateway |
+| Network device | Cisco Router, Motorola WAP, ZyXEL Modem |
+| Building control | Linear eMerge, ASI Controls Weblink, Optergy |
+
+Below the Web components section, users can view a list of all CVEs applicable to the list of web components. This provides a more granular view of the CVEs themselves, along with the CVSS score that indicates the level of risk each one poses to your organization.
+
+![Screenshot of CVEs section of tab](media/Inventory_11.png)
+
+### Resources
+
+The Resources tab provides insight on any JavaScript resources running on any page or host assets. When applicable to a host, these resources are aggregated to represent the JavaScript running on all pages on that host. This section provides an inventory of the JavaScript detected on each asset so that your organization has full visibility into these resources and can detect any changes. Defender EASM provides the resource URL and host, MD5 value, and first and last seen dates to help organizations effectively monitor the use of JavaScript resources across their inventory.
+
+![Screenshot of resources tab](media/Inventory_12.png)
+
+### SSL certificates
+
+Certificates are used to secure communications between a browser and a web server via Secure Sockets Layer (SSL). This ensures that sensitive data in transit cannot be read, tampered with, or forged. This section of Defender EASM lists any SSL certificates detected on the asset, including key data like the issue and expiry dates.
+
+![Screenshot of SSL certificates tab](media/Inventory_13.png)
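+
+To spot-check the certificate a host is currently serving against the data reported here, you can use the standard `openssl` client from any machine. This is an illustrative sketch, not a Defender EASM feature; `example.com` is a placeholder.
+
+```bash
+# Fetch the certificate served on port 443 and print its subject, issuer,
+# and validity window (the same issue and expiry dates shown in this tab).
+openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
+  | openssl x509 -noout -subject -issuer -dates
+```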
+
+### WhoIs
+
+WhoIs is a protocol that is leveraged to query and respond to the databases that store data related to the registration and ownership of Internet resources. WhoIs contains key registration data that can apply to domains, hosts, IP addresses and IP blocks in Defender EASM. In the WhoIs data tab, Microsoft provides a robust amount of information associated with the registry of the asset.
+
+![Screenshot of WhoIs values tab](media/Inventory_14.png)
+
+Fields include:
+
+| Field | Description |
+|--|--|
+| WhoIs server | A server set up by an ICANN-accredited registrar to acquire up-to-date information about entities that are registered with it. |
+| Registrar | The company whose service was used to register an asset. Popular registrars include GoDaddy, Namecheap, and HostGator. |
+| Domain status | Any status for a domain as set by the registry. These statuses can indicate that a domain is pending delete or transfer by the registrar or is simply active on the internet. This field can also denote the limitations of an asset; in the example below, "client delete prohibited" indicates that the registrar is unable to delete the asset. |
+| Email | Any contact email addresses provided by the registrant. WhoIs allows registrants to specify the contact type; options include administrative, technical, registrant and registrar contacts. |
+| Name | The name of a registrant, if provided. |
+| Organization | The organization responsible for the registered entity. |
+| Street | The street address for the registrant if provided. |
+| City | The city listed in the street address for the registrant if provided. |
+| State | The state listed in the street address for the registrant if provided. |
+| Postal Code | The postal code listed in the street address for the registrant if provided. |
+| Country | The country listed in the street address for the registrant if provided. |
+| Phone | The phone number associated with a registrant contact if provided. |
+| Name Servers | Any name servers associated with the registered entity. |
+
+It's important to note that many organizations opt to obfuscate their registry information. In the example above, you can see that some of the contact email addresses end in "@anonymised.email", which is a placeholder in lieu of the real contact address. Furthermore, many of these fields are optional when configuring a registration, so any field with an empty value was not included by the registrant.
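+
+To compare the data in this tab against the live registry record, you can query it with the common `whois` client available on most systems. This is an illustrative sketch, not a Defender EASM feature; `example.com` is a placeholder.
+
+```bash
+# Query the registry record for a domain; the output includes the registrar,
+# domain status codes, name servers, and the contact fields described above.
+whois example.com
+```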
+
+## Next steps
+
+- [Understanding dashboards](understanding-dashboards.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
+
+ Title: Understanding dashboards
+description: Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Attack Surface inventory.
+++ Last updated : 07/14/2022+++
+# Understanding dashboards
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Attack Surface inventory. These dashboards help organizations prioritize the vulnerabilities, risks and compliance issues that pose the greatest threat to their Attack Surface, making it easy to quickly mitigate key issues.
+
+Defender EASM provides four dashboards:
+
+- **Attack Surface Summary**: this dashboard summarizes the key observations derived from your inventory. It provides a high-level overview of your Attack Surface and the asset types that comprise it, and surfaces potential vulnerabilities by severity (high, medium, low). This dashboard also provides key context on the infrastructure that comprises your Attack Surface, providing insight into cloud hosting, sensitive services, SSL certificate and domain expiry, and IP reputation.
+- **Security Posture**: this dashboard helps organizations understand the maturity and complexity of their security program based on the metadata derived from assets in your Confirmed Inventory. It is comprised of technical and non-technical policies, processes and controls that mitigate risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
+- **GDPR Compliance**: this dashboard surfaces key areas of compliance risk based on the General Data Protection Regulation (GDPR) requirements for online infrastructure that's accessible to European nations. This dashboard provides insight on the status of your websites, SSL certificate issues, exposed personally identifiable information (PII), login protocols, and cookie compliance.
+- **OWASP Top 10**: this dashboard surfaces any assets that are vulnerable according to OWASP's list of the most critical web application security risks. On this dashboard, organizations can quickly identify assets with broken access control, cryptographic failures, injections, insecure designs, security misconfigurations and other critical risks as defined by OWASP.
+
+## Accessing dashboards
+
+To access your Defender EASM dashboards, first navigate to your Defender EASM instance. In the left-hand navigation column, select the dashboard you'd like to view. You can access these dashboards from this navigation pane on many pages in your Defender EASM instance.
+
+![Screenshot of dashboard screen with dashboard navigation section highlighted](media/Dashboards-1.png)
+
+## Attack surface summary
+
+The Attack Surface summary dashboard is designed to provide a high-level summary of the composition of your Attack Surface, surfacing the key observations that should be addressed to improve your security posture. This dashboard identifies and prioritizes risks within an organization's assets by High, Medium, and Low severity and enables users to drill down into each section, accessing the list of impacted assets. Additionally, the dashboard reveals key details about your Attack Surface composition, cloud infrastructure, sensitive services, SSL and domain expiry timelines, and IP reputation.
+
+Microsoft identifies organizations' attack surfaces through proprietary technology that discovers Internet-facing assets that belong to an organization based on infrastructure connections to some set of initially known assets. Data in the dashboard is updated daily based on new observations.
+
+### Attack surface priorities
+
+At the top of this dashboard, Defender EASM provides a list of security priorities organized by severity (high, medium, low). Large organizations' attack surfaces can be incredibly broad, so prioritizing the key findings derived from our expansive data helps users quickly and efficiently address the most important exposed elements of their attack surface. These priorities can include critical CVEs, known associations to compromised infrastructure, use of deprecated technology, infrastructure best practice violations, or compliance issues.
+
+Insight Priorities are determined by Microsoft's assessment of the potential impact of each insight. For instance, high severity insights may include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low severity insights may include use of deprecated technology that is no longer supported, infrastructure that will soon expire, or compliance issues that do not align with security best practices. Each insight contains suggested remediation actions to protect against potential exploits.
+
+![Screenshot of attack surface priorities with clickable options highlighted](media/Dashboards-2.png)
+
+Based on the Attack Surface Priorities chart displayed above, a user would want to first investigate the two Medium Severity Observations. You can click the top-listed observation ("Hosts with Expired SSL Certificates") to be directly routed to a list of applicable assets, or instead select "View All 91 Insights" to see a comprehensive, expandable list of all potential observations that Defender EASM categorizes as "medium severity".
+
+The Medium Severity Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
+
+![Screenshot of attack surface drilldown for medium severity priorities](media/Dashboards-3.png)
+
+This detailed view for any observation will include the title of the issue, a description, and remediation guidance from the Defender EASM team. In this example, the description explains how expired SSL certificates can lead to critical business functions becoming unavailable, preventing customers or employees from accessing web content and thus damaging your organization's brand. The Remediation section provides advice on how to swiftly fix the issue; in this example, Microsoft recommends that you review the certificates associated with the impacted host assets, update the coinciding SSL certificate(s), and update your internal procedures to ensure that SSL certificates are updated in a timely manner.
+
+Finally, the Asset section lists any entities that have been impacted by this specific security concern. In this example, a user will want to investigate the impacted assets to learn more about the expired SSL Certificate. You can click on any asset name from this list to view the Asset Details page.
+
+From the Asset Details page, we'll then click on the "SSL certificates" tab to view more information about the expired certificate. In this example, the listed certificate shows an "Expires" date in the past, indicating that the certificate is currently expired and therefore likely inactive. This section also provides the name of the SSL certificate, which you can then send to the appropriate team within your organization for swift remediation.
+
+![Screenshot of impacted asset list from drilldown view, must be expired SSL certificate](media/Dashboards-4.png)
+
+### Attack surface composition
+
+The following section provides a high-level summary of the composition of your Attack Surface. This chart provides counts of each asset type, helping users understand how their infrastructure is spread across domains, hosts, pages, SSL certificates, ASNs, IP blocks, IP addresses and email contacts.
+
+![Screenshot of asset details view of same SSL certificate showing expiration highlighted](media/Dashboards-5.png)
+
+Each value is clickable, routing users to their inventory list filtered to display only assets of the designated type. From this page, you can click on any asset to view more details, or you can add additional filters to narrow down the list according to your needs.
+
+### Securing the cloud
+
+This section of the Attack Surface Summary dashboard provides insight on the cloud technologies used across your infrastructure. As most organizations adapt to the cloud gradually, the hybrid nature of your online infrastructure can be difficult to monitor and manage. Defender EASM helps organizations understand the usage of specific cloud technologies across your Attack Surface, mapping cloud host providers to your confirmed assets to inform your cloud adoption program and ensure compliance with your organization's processes.
+
+![Screenshot of cloud chart](media/Dashboards-6.png)
+
+For instance, your organization may have recently decided to migrate all cloud infrastructure to a single provider to simplify and consolidate its Attack Surface. This chart can help you identify assets that still need to be migrated. Each bar of the chart is clickable, routing users to a filtered list that displays the assets that comprise the chart value.
+
+### Sensitive services
+
+This section displays sensitive services detected on your Attack Surface that should be assessed and potentially adjusted to ensure the security of your organization. This chart highlights any services that have historically been vulnerable to attack or are common vectors of information leakage to malicious actors. Any assets in this section should be investigated, and Microsoft recommends that organizations consider alternative services with a better security posture to mitigate risk.
+
+![Screenshot of sensitive services chart](media/Dashboards-7.png)
+
+The chart is organized by the name of each service; clicking on any individual bar will return a list of assets that are running that particular service. The chart below is empty, indicating that the organization is not currently running any services that are especially susceptible to attack.
+
+### SSL and domain expirations
+
+These two expiration charts display upcoming SSL Certificate and Domain expirations, ensuring that an organization has ample visibility into upcoming renewals of key infrastructure. An expired domain can suddenly make key content inaccessible, and the domain could even be swiftly purchased by a malicious actor who intends to target your organization. An expired SSL Certificate leaves corresponding assets susceptible to attack.
+
+![Screenshot of SSL charts](media/Dashboards-8.png)
+
+Both charts are organized by the expiration timeframe, ranging from "greater than 90 days" to already expired. Microsoft recommends that organizations immediately renew any expired SSL certificates or domains, and proactively arrange the renewal of assets due to expire in 30-60 days.
+
+### IP reputation
+
+IP reputation data helps users understand the trustworthiness of their attack surface and identify potentially compromised hosts. Microsoft develops IP reputation scores based on our proprietary data as well as IP information collected from external sources. We recommend further investigation of any IP addresses identified here, as a suspicious or malicious score associated with an owned asset indicates that the asset is susceptible to attack or has already been leveraged by malicious actors.
+
+![Screenshot of IP reputation chart](media/Dashboards-9.png)
+
+This chart is organized by the detection policy that triggered a negative reputation score. For instance, the DDOS value indicates that the IP address has been involved in a Distributed Denial-of-Service attack. Users can click on any bar value to access a list of assets that comprise it. In the example below, the chart is empty, which indicates that all IP addresses in your inventory have satisfactory reputation scores.
+
+## Security posture dashboard
+
+The Security Posture dashboard helps organizations measure the maturity of their security program based on the status of assets in your Confirmed Inventory. It is comprised of technical and non-technical policies, processes and controls that mitigate the risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
+
+![Screenshot of security posture chart](media/Dashboards-10.png)
+
+### CVE exposure
+
+The first chart in the Security Posture dashboard relates to the management of an organization's website portfolio. Microsoft analyzes website components such as frameworks, server software, and third-party plugins and then matches them to a current list of Common Vulnerabilities and Exposures (CVEs) to identify vulnerability risks to your organization. The web components that comprise each website are inspected daily to ensure recency and accuracy.
+
+![Screenshot of CVE exposure chart](media/Dashboards-11.png)
+
+It is recommended that users immediately address any CVE-related vulnerabilities, mitigating risk by updating their web components or following the remediation guidance for each CVE. Each bar on the chart is clickable, displaying a list of any impacted assets.
+
+### Domains administration
+
+This chart provides insight on how an organization manages its domains. Companies with a decentralized domain portfolio management program are susceptible to unnecessary threats, including domain hijacking, domain shadowing, email spoofing, phishing, and illegal domain transfers. A cohesive domain registration process mitigates this risk. For instance, organizations should use the same registrars and registrant contact information for their domains to ensure that all domains are mappable to the same entities. This helps ensure that domains don't slip through the cracks as you update and maintain them.
+
+![Screenshot of domain administration chart](media/Dashboards-12.png)
+
+Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
+
+### Hosting and networking
+
+This chart provides insight on the security posture related to where an organization's hosts are located. The risk associated with ownership of Autonomous Systems depends on the size and maturity of an organization's IT department.
+
+![Screenshot of hosting and networking chart](media/Dashboards-13.png)
+
+Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
+
+### Domains configuration
+
+This section helps organizations understand the configuration of their domain names, surfacing any domains that may be susceptible to unnecessary risk. Extensible Provisioning Protocol (EPP) domain status codes indicate the status of a domain name registration. All domains have at least one code, although multiple codes can apply to a single domain. This section is useful for understanding the policies in place to manage your domains, or missing policies that leave domains vulnerable.
+
+![Screenshot of domain config chart](media/Dashboards-14.png)
+
+For instance, the “clientUpdateProhibited” status code prevents unauthorized updates to your domain name; an organization must contact its registrar to lift this code and make any updates. The chart below searches for domain assets that do not have this status code, indicating that the domain is currently open to updates, which can potentially result in fraud. Users should click any bar on this chart to view a list of assets that do not have the appropriate status codes applied to them so they can update their domain configurations accordingly.
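+
+As a rough illustration of how such a check can be scripted, the sketch below speaks the WHOIS protocol directly (plain text over TCP port 43) and flags .com domains that lack the clientUpdateProhibited status code. The domain names are placeholders, and whois.verisign-grs.com is the registry WHOIS server for .com; other TLDs use different servers.
+
+```python
+# Illustrative sketch: flag .com domains missing clientUpdateProhibited.
+# WHOIS is a plain-text protocol over TCP port 43; whois.verisign-grs.com
+# serves .com registrations. Domain names below are placeholders.
+import socket
+
+DOMAINS = ["example.com", "contoso.com"]  # hypothetical assets
+
+def whois_lookup(domain: str, server: str = "whois.verisign-grs.com") -> str:
+    with socket.create_connection((server, 43), timeout=10) as sock:
+        sock.sendall(f"{domain}\r\n".encode())
+        chunks = []
+        while data := sock.recv(4096):
+            chunks.append(data)
+    return b"".join(chunks).decode(errors="replace")
+
+for domain in DOMAINS:
+    statuses = [line.strip() for line in whois_lookup(domain).splitlines()
+                if line.strip().lower().startswith("domain status:")]
+    if any("clientupdateprohibited" in s.lower() for s in statuses):
+        print(f"{domain}: locked against unauthorized updates")
+    else:
+        print(f"{domain}: clientUpdateProhibited not set -- open to updates")
+```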
+
+### Open ports
+
+This section helps users understand how their IP space is managed, detecting services that are exposed on the open internet. Attackers commonly scan ports across the internet to look for known exploits related to service vulnerabilities or misconfigurations. Microsoft identifies these open ports to complement vulnerability assessment tools, flagging observations for review to ensure they are properly managed by your information technology team.
+
+![Screenshot of open ports chart](media/Dashboards-15.png)
+
+By performing basic TCP SYN/ACK scans across all open ports on the addresses in an IP space, Microsoft detects ports that may need to be restricted from direct access to the open internet. Examples include databases, DNS servers, IoT devices, routers and switches. This data can also be used to detect shadow IT assets or insecure remote access services. All bars on this chart are clickable, opening a list of assets that comprise the value so your organization can investigate the open port in question and remediate any risk.
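+
+As a simplified stand-in for that kind of scan, the sketch below performs a TCP connect scan (a true SYN/ACK scan requires raw sockets) against a handful of ports that often should not be internet-facing. The target address is a placeholder from a documentation range; only scan infrastructure you own.
+
+```python
+# Simplified sketch: TCP connect scan of ports that often shouldn't face the
+# internet. (A true SYN/ACK scan requires raw sockets; connect scans are a
+# rough stand-in.) The target is a placeholder; only scan addresses you own.
+import socket
+
+TARGET = "203.0.113.10"  # placeholder from the TEST-NET-3 documentation range
+SENSITIVE_PORTS = {23: "Telnet", 53: "DNS", 1433: "SQL Server",
+                   3306: "MySQL", 3389: "RDP", 5900: "VNC"}
+
+for port, service in SENSITIVE_PORTS.items():
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
+        sock.settimeout(2)
+        if sock.connect_ex((TARGET, port)) == 0:  # 0 means the port accepted
+            print(f"{TARGET}:{port} ({service}) is open -- review exposure")
+```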
+
+### SSL configuration and organization
+
+The SSL configuration and organization charts display common SSL-related issues that may impact functions of your online infrastructure.
+
+![Screenshot of SSL configuration and organization charts](media/Dashboards-16.png)
+
+For instance, the SSL configuration chart displays any detected configuration issues that can disrupt your online services. This includes expired SSL certificates and certificates using outdated signature algorithms like SHA1 and MD5, which result in unnecessary security risk to your organization.
+
+The SSL organization chart provides insight on the registration of your SSL certificates, indicating the organization and business units associated with each certificate. This can help users understand the designated ownership of these certificates; it is recommended that companies consolidate their organization and unit list when possible to help ensure proper management moving forward.
+
+## GDPR compliance dashboard
+
+The GDPR compliance dashboard presents an analysis of assets in your Confirmed Inventory as they relate to the requirements outlined in the General Data Protection Regulation (GDPR). GDPR is a regulation in European Union (EU) law that enforces data protection and privacy standards for any online entities accessible to the EU. These regulations have become a model for similar laws outside of the EU, so GDPR serves as an excellent guide on how to handle data privacy worldwide.
+
+This dashboard analyzes an organization's public-facing web properties to surface any assets that are potentially non-compliant with GDPR.
+
+### Websites by status
+
+This chart organizes your website assets by HTTP response status code. These codes indicate whether a specific HTTP request was successfully completed or provide context as to why the site is inaccessible. HTTP codes can also alert you to redirects, server error responses, and client errors. The HTTP response “451” indicates that a website is unavailable for legal reasons; this may indicate that a site has been blocked for people in the EU because it does not comply with GDPR.
+
+Status options include Active, Inactive, Requires Authorization, Broken, and Browser Error; users can click any component on the bar graph to view a comprehensive list of assets that comprise the value.
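+
+A lightweight version of this bucketing can be scripted as below, flagging 451 responses explicitly. It assumes the third-party `requests` package; the URLs are placeholders for assets in your own inventory.
+
+```python
+# Minimal sketch: bucket sites by HTTP status code and flag 451 responses
+# ("Unavailable For Legal Reasons"). Requires the third-party 'requests'
+# package; URLs are placeholders for assets in your own inventory.
+import requests
+
+SITES = ["https://example.com", "https://www.example.org"]  # hypothetical
+
+for url in SITES:
+    try:
+        resp = requests.get(url, timeout=10, allow_redirects=True)
+    except requests.RequestException as exc:
+        print(f"{url}: unreachable ({exc.__class__.__name__})")
+        continue
+    if resp.status_code == 451:
+        print(f"{url}: 451 -- possibly blocked for GDPR non-compliance")
+    elif resp.status_code >= 400:
+        print(f"{url}: error {resp.status_code}")
+    else:
+        print(f"{url}: {resp.status_code} (final URL: {resp.url})")
+```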
+
+### SSL certificate posture
+
+An organization's security posture for SSL/TLS Certificates is a critical component of security for web-based communication. SSL certificates are leveraged by websites to ensure secure communication between a website and its users. Decentralized or complex management of SSL certificates heightens the risk of SSL certificates expiring, use of weak ciphers, and potential exposure to fraudulent SSL Registration. The GDPR compliance dashboard provides charts on live sites with certificate issues, certificate expiration time frames, and sites by certificate posture.
+
+### Live sites with cert issues
+
+This chart displays pages that are actively serving content and present users with a warning that the site is insecure. The user must manually accept the warning to view the content on these pages. This can occur for a variety of reasons; this chart organizes results by the specific reason for easy mitigation. Options include broken certificates, active certificate issues, requires authorization, and browser certificate errors.
+
+### SSL certificate expiration
+
+This chart displays upcoming SSL Certificate expirations, ensuring that an organization has ample visibility into any upcoming renewals. An expired SSL Certificate leaves corresponding assets susceptible to attack and can make the content of a page inaccessible to the internet.
+
+This chart is organized by the detected expiry window, ranging from already expired to expiring in over 90 days. Users can click any component in the bar graph to access a list of applicable assets, making it easy to send a list of certificate names to your IT Department for remediation.
+
+### SSL certificate posture
+
+This section analyzes the signature algorithms that power an SSL certificate. SSL certificates can be secured with a variety of cryptographic algorithms; certain newer algorithms are considered more reputable and secure than older algorithms, so companies are advised to retire older algorithms like SHA-1.
+
+Users can click any segment of the pie chart to view a list of assets that comprise the selected value. SHA256 is considered secure; organizations should update any certificates that still use the SHA1 algorithm.
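+
+To spot-check a certificate's signature algorithm yourself, a sketch like the following reads the live certificate and reports the hash algorithm used to sign it. It assumes the third-party `cryptography` package; hostnames are placeholders, and verification is disabled solely so that weak or expired certificates can still be inspected.
+
+```python
+# Sketch: report the signature hash algorithm of a live certificate.
+# Requires the third-party 'cryptography' package; hostnames are placeholders.
+# Verification is disabled so weak/expired certs can still be inspected.
+import socket
+import ssl
+from cryptography import x509
+
+HOSTS = ["example.com"]  # hypothetical assets
+
+context = ssl.create_default_context()
+context.check_hostname = False
+context.verify_mode = ssl.CERT_NONE  # inspect-only; don't reuse for real traffic
+
+for host in HOSTS:
+    with socket.create_connection((host, 443), timeout=10) as sock:
+        with context.wrap_socket(sock, server_hostname=host) as tls:
+            der = tls.getpeercert(binary_form=True)  # raw DER-encoded cert
+    cert = x509.load_der_x509_certificate(der)
+    algo = cert.signature_hash_algorithm.name  # e.g. 'sha256', 'sha1', 'md5'
+    verdict = "retire -- weak algorithm" if algo in ("sha1", "md5") else "OK"
+    print(f"{host}: signed with {algo.upper()} -> {verdict}")
+```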
+
+### Personally identifiable information (PII) posture
+
+The protection of personally identifiable information (PII) is a critical component of the General Data Protection Regulation. PII is defined as any data that can identify an individual, including names, addresses, birthdays, or email addresses. Any website that accepts this data through a form must be thoroughly secured according to GDPR guidelines. By analyzing the Document Object Model (DOM) of your pages, Microsoft identifies forms and login pages that may accept PII and should therefore be assessed according to European Union law.
+
+### Login posture
+
+A login page is a page on a website where a user has the option to enter a username and password to gain access to services hosted on that site. Login pages have specific requirements under GDPR, so Defender EASM references the DOM of all scanned pages to search for code that correlates to a login. For instance, login pages must be served securely (for example, over HTTPS) to be compliant.
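+
+The sketch below illustrates the general idea of DOM-based login detection, not Defender EASM's actual implementation: fetch a page, parse its DOM, and flag forms that contain a password input. It assumes the third-party `requests` and `beautifulsoup4` packages; the URL is a placeholder.
+
+```python
+# Illustrative sketch of DOM-based login detection: flag forms containing a
+# password input. Not Defender EASM's actual logic. Requires 'requests' and
+# 'beautifulsoup4'; the URL is a placeholder.
+import requests
+from bs4 import BeautifulSoup
+
+URL = "https://example.com/account"  # hypothetical page
+
+resp = requests.get(URL, timeout=10)
+soup = BeautifulSoup(resp.text, "html.parser")
+
+for form in soup.find_all("form"):
+    if form.find("input", attrs={"type": "password"}):
+        action = form.get("action", "")
+        # A login form loading or submitting over plain HTTP is a clear flag.
+        insecure = action.startswith("http://") or URL.startswith("http://")
+        print(f"Login form found (action={action!r})"
+              + (" -- submits insecurely, review" if insecure else ""))
+```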
+
+### Cookie posture
+
+A cookie is a small text file that a website places on the computer running a web browser when you browse a site. Each time the website is visited, the browser sends the cookie back to the server to notify the website of your previous activity. GDPR has specific requirements for obtaining consent to issue a cookie, and different storage regulations for first- versus third-party cookies.
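+
+As a starting point for a cookie review, a sketch like this lists the cookies a site sets along with attributes relevant to consent and storage rules. It assumes the third-party `requests` package; the URL is a placeholder.
+
+```python
+# Starting-point sketch for a cookie review: list the cookies a site sets and
+# attributes relevant to consent rules. Requires 'requests'; URL is a placeholder.
+import requests
+
+resp = requests.get("https://example.com", timeout=10)  # hypothetical site
+
+for cookie in resp.cookies:
+    print(f"{cookie.name}: domain={cookie.domain} secure={cookie.secure} "
+          f"expires={cookie.expires}")
+
+# The raw Set-Cookie headers preserve attributes (SameSite, HttpOnly) that the
+# parsed cookie jar does not expose directly.
+for header in resp.raw.headers.getlist("Set-Cookie"):
+    print(header)
+```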
+
+## OWASP top 10 dashboard
+
+The OWASP Top 10 dashboard is designed to provide insight on the most critical security recommendations as designated by OWASP, a reputable open-source foundation for web application security. This list is globally recognized as a critical resource for developers who want to ensure their code is secure. OWASP provides key information about their top 10 security risks, as well as guidance on how to avoid or remediate the issue. This Defender EASM dashboard looks for evidence of these security risks within your Attack Surface and surfaces them, listing any applicable assets and how to remediate the risk.
+
+![Screenshot of OWASP dashboard](media/Dashboards-17.png)
+
+The current OWASP Top 10 list of critical security risks includes:
+
+1. **Broken access control**: failures in the access control infrastructure that enforces policies so that users cannot act outside of their intended permissions.
+2. **Cryptographic failure**: failures related to cryptography (or its absence) that often lead to the exposure of sensitive data.
+3. **Injection**: applications vulnerable to injection attacks due to improper handling of untrusted data.
+4. **Insecure design**: missing or ineffective security measures that result in weaknesses in your application.
+5. **Security misconfiguration**: missing or incorrect security configurations, often the result of an insufficiently defined configuration process.
+6. **Vulnerable and outdated components**: outdated components that carry added exposure in comparison to up-to-date software.
+7. **Identification and authentication failures**: failure to properly confirm a user's identity or to manage authentication and sessions in a way that protects against authentication-related attacks.
+8. **Software and data integrity failures**: code and infrastructure that do not protect against integrity violations, such as plugins from untrusted sources.
+9. **Security logging and monitoring failures**: lack of proper security logging and alerting, or related misconfigurations, that can impact an organization's visibility and subsequent accountability over its security posture.
+10. **Server-side request forgery**: web applications that fetch a remote resource without validating the user-supplied URL (see the validation sketch after this list).
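+
+For risk #10, a common mitigation pattern is to validate user-supplied URLs before the server fetches them. The sketch below shows one such check, rejecting non-HTTP schemes and hostnames that resolve to private, loopback, or link-local addresses; it illustrates the pattern and is not OWASP's complete guidance.
+
+```python
+# One common SSRF mitigation pattern: validate a user-supplied URL before
+# fetching it server-side. Illustrative only, not OWASP's complete guidance.
+import ipaddress
+import socket
+from urllib.parse import urlparse
+
+def is_safe_url(url: str) -> bool:
+    parsed = urlparse(url)
+    if parsed.scheme not in ("http", "https") or not parsed.hostname:
+        return False
+    try:
+        infos = socket.getaddrinfo(parsed.hostname, None)
+    except socket.gaierror:
+        return False
+    for info in infos:
+        ip = ipaddress.ip_address(info[4][0].split("%")[0])  # strip IPv6 zone
+        # Reject classic SSRF targets: internal ranges and cloud metadata IPs.
+        if ip.is_private or ip.is_loopback or ip.is_link_local:
+            return False
+    return True
+
+print(is_safe_url("https://example.com/image.png"))    # True: public host
+print(is_safe_url("http://169.254.169.254/metadata"))  # False: link-local
+```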
+
+This dashboard provides a description of each critical risk, information on why it matters, and remediation guidance alongside a list of any assets that are potentially impacted. For more information, see the [OWASP website](https://owasp.org/www-project-top-ten/).
+
+## Next steps
+
+- [Understanding asset details](understanding-asset-details.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
+
+ Title: Understanding inventory assets
+description: Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets.
+++ Last updated : 07/14/2022+++
+# Understanding inventory assets
+
+## Overview
+
+Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets (e.g. discovery "seeds") to make inferences about that infrastructure's relationship to the organization and uncover previously unknown and unmonitored properties.
+
+Defender EASM includes the discovery of the following kinds of assets:
+
+- Domains
+- Hosts
+- Pages
+- IP Blocks
+- IP Addresses
+- Autonomous System Numbers (ASNs)
+- SSL Certificates
+- WHOIS Contacts
+
+These asset types comprise your attack surface inventory in Defender EASM. This solution discovers externally facing assets that are exposed to the open internet outside of traditional firewall protection; these assets need to be monitored and maintained to minimize risk and improve an organization's security posture. Microsoft Defender External Attack Surface Management (Defender EASM) actively discovers and monitors these assets, then surfaces key insights that help customers efficiently address any vulnerabilities to their organization.
+
+![Screenshot of Inventory screen](media/Inventory-1.png)
+
+## Asset states
+
+All assets are labeled as one of the following states:
+
+| State name | Description |
+|--|--|
+| Approved Inventory | A part of your owned attack surface; an item that you are directly responsible for. |
+| Dependency | Infrastructure that is owned by a third party but is part of your attack surface because it directly supports the operation of your owned assets. For example, you might depend on an IT provider to host your web content. While the domain, hostname, and pages would be part of your “Approved Inventory,” you may wish to treat the IP Address running the host as a “Dependency.” |
+| Monitor Only | An asset that is relevant to your attack surface but is neither directly controlled nor a technical dependency. For example, independent franchisees or assets belonging to related companies might be labeled as “Monitor Only” rather than “Approved Inventory” to separate the groups for reporting purposes. |
+| Candidate | An asset that has some relationship to your organization's known seed assets but does not have a strong enough connection to immediately label it as “Approved Inventory.” These candidate assets must be manually reviewed to determine ownership. |
+| Requires Investigation | A state similar to the “Candidate” state, but this value is applied to assets that require manual investigation to validate. This is determined based on our internally generated confidence scores that assess the strength of detected connections between assets. It does not indicate the infrastructure's exact relationship to the organization as much as it denotes that this asset has been flagged as requiring additional review to determine how it should be categorized. |
+
+## Handling of different asset states
+
+These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, “Approved Inventory” assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, “Candidate” assets are only scanned during the discovery process; it's important to review these assets and change their state to “Approved Inventory” if they are owned by your organization.
+
+## Next steps
+
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Understanding asset details](understanding-asset-details.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
+
+ Title: Using and managing discovery
+description: Using and managing discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# Using and managing discovery
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface. Discovery scans the internet for assets owned by your organization to uncover previously unknown and unmonitored properties. Discovered assets are indexed in a customer's inventory, providing a dynamic system of record of web applications, third-party dependencies, and web infrastructure under the organization's management through a single pane of glass.
+
+Before you run a custom discovery, see the [What is discovery?](what-is-discovery.md) article to understand key concepts mentioned in this article.
+
+## Accessing your automated attack surface
+
+Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets. It is recommended that all users search for their organization's attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to your Attack Surface.
+
+When first accessing your Defender EASM instance, select “Getting Started” in the “General” section to search for your organization in the list of automated attack surfaces. Then select your organization from the list and click “Build my Attack Surface”.
+
+![Screenshot of pre-configured attack surface selection screen](media/Discovery_1.png)
+
+At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+
+If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
+
+## Customizing discovery
+
+Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. When you submit a larger list of known assets to operate as discovery seeds, the discovery engine returns a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
+
+### Discovery groups
+
+Custom discoveries are organized into Discovery Groups. They are independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
+
+### Creating a discovery group
+
+1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
+
+ ![Screenshot of EASM instance from overview page with manage section highlighted](media/Discovery_2.png)
+
+2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, click **Add Discovery Group**.
+
+   ![Screenshot of Discovery screen with “add disco group” highlighted](media/Discovery_3.png)
+
+3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs.
+
+ Select **Next: Seeds >**
+
+ ![Screenshot of first page of disco group setup](media/Discovery_4.png)
+
+4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
+
+ ![Screenshot of seed selection page of disco group setup](media/Discovery_5.png)
+
+ The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
+
+ ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Discovery_6.png)
+
+ ![Screenshot of pre-baked attack surface selection page.](media/Discovery_7.png)
+
+   Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that are likely connected to their central infrastructure but do not belong to the organization itself.
+
+ Once your seeds have been selected, select **Review + Create**.
+
+5. Review your group information and seed list, then select **Create & Run**.
+
+ ![Screenshot of review + create screen](media/Discovery_8.png)
+
+ You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
+
+### Viewing and editing discovery groups
+
+Users can manage their discovery groups from the main “Discovery” page. The default view displays a list of all your discovery groups and some key data about each one. From the list view, you can see the number of seeds, recurrence schedule, last run date, and created date for each group.
+
+![Screenshot of discovery groups screen](media/Discovery_9.png)
+
+Click on any discovery group to view more information, edit the group, or immediately kickstart a new discovery process.
+
+### Run history
+
+The discovery group details page contains the run history for the group. Once expanded, this section displays key information about each discovery run that has been performed on the specific group of seeds. The Status column indicates whether the run is “In Progress”, “Complete,” or “Failed”. This section also includes “started” and “completed” timestamps and counts of the total number of assets versus new assets discovered.
+
+Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click “Details”. This opens a right-hand pane that lists all the seeds and exclusions by kind and name.
+
+![Screenshot of run history for disco group screen](media/Discovery_10.png)
+
+### Viewing seeds and exclusions
+
+The Discovery page defaults to a list view of Discovery Groups, but users can also view lists of all seeds and excluded entities from this page. Simply click either tab to view a list of all the seeds or exclusions that power your discovery groups.
+
+### Seeds
+
+The seed list view displays seed values with three columns: type, source name, and discovery group. The “type” field displays the category of the seed asset; the most common seeds are domains, hosts, and IP blocks, but you can also use email contacts, ASNs, certificate common names, or WhoIs organizations. The source name is simply the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
+
+![Screenshot of seeds view of discovery page](media/Discovery_11.png)
+
+### Exclusions
+
+Similarly, you can click the “Exclusions” tab to see a list of entities that have been excluded from the discovery group. This means that these assets will not be used as discovery seeds or added to your inventory. It is important to note that exclusions only impact future discovery runs for an individual discovery group. The “type” field displays the category of the excluded entity. The source name is the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups where this exclusion is present; each value is clickable, taking you to the details page for that discovery group.
+
+## Next steps
+
+- [Discovering your attack surface](discovering-your-attack-surface.md)
+- [Understanding asset details](understanding-asset-details.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
+
+ Title: What is Discovery?
+description: What is Discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# What is Discovery?
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface. Discovery scans known assets owned by your organization to uncover previously unknown and unmonitored properties. Discovered assets are indexed in a customer's inventory, providing a dynamic system of record of web applications, third-party dependencies, and web infrastructure under the organization's management through a single pane of glass.
+
+![Screenshot of Discovery configuration screen](media/Discovery-1.png)
+
+Through this process, Microsoft enables organizations to proactively monitor their constantly shifting digital attack surface and identify emerging risks and policy violations as they arise. Many vulnerability programs lack visibility outside their firewall, leaving them unaware of external risks and threats, which are the primary source of data breaches. At the same time, digital growth continues to outpace an enterprise security team's ability to protect it. Digital initiatives and the overly common “shadow IT” lead to an expanding attack surface outside the firewall. At this pace, it is nearly impossible to validate controls, protections, and compliance requirements. Without Defender EASM, it is nearly impossible to identify and remove vulnerabilities, and scanners cannot reach beyond the firewall to assess the full attack surface.
+
+## How it works
+
+To create a comprehensive mapping of your organization's attack surface, the system first intakes known assets (i.e. “seeds”) that are recursively scanned to discover additional entities through their connections to a seed. An initial seed may be any of the following kinds of web infrastructure indexed by Microsoft:
+
+- Pages
+- Host Name
+- Domain
+- SSL Cert
+- Contact Email Address
+- IP Block
+- IP Address
+- ASN
+
+![Screenshot of Seed list view on discovery screen](media/Discovery-2.png)
+
+Starting with a seed, the system discovers associations to other online infrastructure to find other assets owned by your organization; this process ultimately creates your attack surface inventory. The discovery process uses the seeds as central nodes and spiders outward toward the periphery of your attack surface: it identifies all the infrastructure directly connected to the seed, then all the assets related to each item in that first set of connections, and so on. This process continues until we reach the edge of what your organization is responsible for managing.
+
+For example, to discover Contoso's infrastructure, you might use the domain, contoso.com, as the initial keystone seed. Starting with this seed, we could consult the following sources and derive the following relationships:
+
+| Data source | Example |
+|--|--|
+| WhoIs records | Other domain names registered to the same contact email or registrant org used to register contoso.com likely also belong to Contoso |
+| WhoIs records | All domain names registered to any @contoso.com email address likely also belong to Contoso |
+| WhoIs records | Other domains associated with the same name server as contoso.com may also belong to Contoso |
+| DNS records | We can assume that Contoso also owns all observed hosts on the domains it owns and any websites that are associated with those hosts |
+| DNS records | Domains with other hosts resolving to the same IP blocks might also belong to Contoso if the organization owns the IP block |
+| DNS records | Mail servers associated with Contoso-owned domain names would also belong to Contoso |
+| SSL certificates | Contoso probably also owns all SSL certificates connected to each of those hosts and any other hosts using the same SSL certs |
+| ASN records | Other IP blocks associated with the same ASN as the IP blocks to which hosts on Contoso's domain names are connected may also belong to Contoso, as would all the hosts and domains that resolve to them |
+
+Using this set of first-level connections, we can quickly derive an entirely new set of assets to investigate. Before performing additional recursions, Microsoft determines whether a connection is strong enough for a discovered entity to be automatically added to your Confirmed Inventory. For each of these assets, the discovery system runs automated, recursive searches based on all available attributes to find second-level and third-level connections. This repetitive process provides more information on an organizationΓÇÖs online infrastructure and therefore discovers disparate assets that may not have been discovered and subsequently monitored otherwise.
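+
+Conceptually, the recursion resembles a breadth-first expansion from the seeds, as in the sketch below. Both helper functions (`find_connected` and `score`) and the confidence threshold are hypothetical stand-ins for Microsoft's internal systems, shown only to make the flow concrete.
+
+```python
+# Conceptual sketch of recursive seed expansion. 'find_connected' and 'score'
+# are hypothetical stand-ins for Microsoft's internal lookups (WhoIs, DNS,
+# SSL, ASN) and confidence scoring; the threshold is illustrative.
+from collections import deque
+
+CONFIDENCE_THRESHOLD = 0.8  # illustrative, not a documented value
+
+def discover(seeds, find_connected, score, max_depth=3):
+    confirmed, candidates = set(seeds), set()
+    queue = deque((seed, 0) for seed in seeds)
+    while queue:
+        asset, depth = queue.popleft()
+        if depth >= max_depth:
+            continue
+        for neighbor in find_connected(asset):
+            if neighbor in confirmed or neighbor in candidates:
+                continue
+            if score(asset, neighbor) >= CONFIDENCE_THRESHOLD:
+                confirmed.add(neighbor)              # strong link: inventory
+                queue.append((neighbor, depth + 1))  # recurse from this asset
+            else:
+                candidates.add(neighbor)             # weak link: manual review
+    return confirmed, candidates
+```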
+
+## Automated versus customized attack surfaces
+
+When first using Defender EASM, you can access a pre-built inventory for your organization to quickly kick start your workflows. From the “Getting Started” page, users can search for their organization to quickly populate their inventory based on asset connections already identified by Microsoft. It is recommended that all users search for their organization's pre-built Attack Surface before creating a custom inventory.
+
+To build a customized inventory, users create Discovery Groups to organize and manage the seeds they use when running discoveries. Separate Discovery groups allow users to automate the discovery process, configuring the seed list and recurrent run schedule.
+
+![Screenshot of Automated attack surface selection screen](media/Discovery-3.png)
+
+## Confirmed inventory vs. candidate assets
+
+If the discovery engine detects a strong connection between a potential asset and the initial seed, the system will automatically include that asset in an organization's “Confirmed Inventory.” As the connections to this seed are iteratively scanned, discovering third- or fourth-level connections, the system's confidence in the ownership of any newly detected assets is lower. Similarly, the system may detect assets that are relevant to your organization but may not be directly owned by them.
+For these reasons, newly discovered assets are labeled as one of the following states:
+
+| State name | Description |
+|--|--|
+| Approved Inventory | A part of your owned attack surface; an item that you are directly responsible for. |
+| Dependency | Infrastructure that is owned by a third party but is part of your attack surface because it directly supports the operation of your owned assets. For example, you might depend on an IT provider to host your web content. While the domain, hostname, and pages would be part of your “Approved Inventory,” you may wish to treat the IP Address running the host as a “Dependency.” |
+| Monitor Only | An asset that is relevant to your attack surface but is neither directly controlled nor a technical dependency. For example, independent franchisees or assets belonging to related companies might be labeled as “Monitor Only” rather than “Approved Inventory” to separate the groups for reporting purposes. |
+| Candidate | An asset that has some relationship to your organization's known seed assets but does not have a strong enough connection to immediately label it as “Approved Inventory.” These candidate assets must be manually reviewed to determine ownership. |
+| Requires Investigation | A state similar to the “Candidate” state, but this value is applied to assets that require manual investigation to validate. This is determined based on our internally generated confidence scores that assess the strength of detected connections between assets. It does not indicate the infrastructure's exact relationship to the organization as much as it denotes that this asset has been flagged as requiring additional review to determine how it should be categorized. |
+
+Asset details are continuously refreshed and updated over time to maintain an accurate map of asset states and relationships, as well as to uncover newly created assets as they emerge. The discovery process is managed by placing seeds in Discovery Groups that can be scheduled to rerun on a recurrent basis. Once an inventory is populated, the Defender EASM system continuously scans your assets with Microsoft's virtual user technology to uncover fresh, detailed data about each one. This process examines the content and behavior of each page within applicable sites to provide robust information that can be used to identify vulnerabilities, compliance issues, and other potential risks to your organization.
+
+## Next steps
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
+- [Understanding asset details](understanding-asset-details.md)
firewall Threat Intel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/threat-intel.md
Previously updated : 11/04/2021 Last updated : 08/01/2022 # Azure Firewall threat intelligence-based filtering
-Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed, which includes multiple sources including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.<br>
+Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses, FQDNs, and URLs. The IP addresses, domains, and URLs are sourced from the Microsoft Threat Intelligence feed, which draws on multiple sources, including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.<br>
<br> :::image type="content" source="media/threat-intel/firewall-threat.png" alt-text="Firewall threat intelligence" border="false":::
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 05/25/2022 Last updated : 08/01/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources used in this procedure. 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Add**.
+2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**.
4. For **Subscription**, select your subscription.
-1. For **Resource group name**, enter *Test-FW-RG*.
+1. For **Resource group name**, type **Test-FW-RG**.
1. For **Resource group location**, select a location. All other resources that you create must be in the same location. 1. Select **Review + create**. 1. Select **Create**. ### Create a VNet
-This VNet will have three subnets.
+This VNet will have two subnets.
> [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size). 1. On the Azure portal menu or from the **Home** page, select **Create a resource**. 1. Select **Networking** > **Virtual network**.
-1. Select **Create**.
1. For **Subscription**, select your subscription. 1. For **Resource group**, select **Test-FW-RG**. 1. For **Name**, type **Test-FW-VN**. 1. For **Region**, select the same location that you used previously. 1. Select **Next: IP addresses**.
-1. For **IPv4 Address space**, type **10.0.0.0/16**.
-1. Under **Subnet**, select **default**.
-1. For **Subnet name** type **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Address range**, type **10.0.1.0/26**.
+1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
+1. Under **Subnet name**, select **default**.
+1. For **Subnet name** change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Address range**, change it to **10.0.1.0/26**.
1. Select **Save**. Next, create a subnet for the workload server.
This VNet will have three subnets.
Now create the workload virtual machine, and place it in the **Workload-SN** subnet. 1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Windows Server 2016 Datacenter**.
+2. Select **Windows Server 2019 Datacenter**.
4. Enter these values for the virtual machine: |Setting |Value |
Now create the workload virtual machine, and place it in the **Workload-SN** sub
|Resource group |**Test-FW-RG**| |Virtual machine name |**Srv-Work**| |Region |Same as previous|
- |Image|Windows Server 2016 Datacenter|
+ |Image|Windows Server 2019 Datacenter|
|Administrator user name |Type a user name| |Password |Type a password|
Now create the workload virtual machine, and place it in the **Workload-SN** sub
8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**. 9. For **Public IP**, select **None**. 11. Accept the other defaults and select **Next: Management**.
-12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+12. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
13. Review the settings on the summary page, and then select **Create**.
+1. After the deployment is complete, select **Srv-Work** and note the private IP address that you'll need to use later.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)] + ## Deploy the firewall Deploy the firewall into the VNet.
Deploy the firewall into the VNet.
|Resource group |**Test-FW-RG** | |Name |**Test-FW01**| |Region |Select the same location that you used previously|
+ |Firewall tier|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**| |Choose a virtual network |**Use existing**: **Test-FW-VN**| |Public IP address |**Add new**<br>**Name**: **fw-pip**|
As a result, there is no need to create an additional UDR to include the AzureFirew
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
-1. On the Azure portal menu, select **All services** or search for and select *All services* from any page.
-2. Under **Networking**, select **Route tables**.
-3. Select **Add**.
+1. On the Azure portal menu, select **Create a resource**.
+2. Under **Networking**, select **Route table**.
5. For **Subscription**, select your subscription. 6. For **Resource group**, select **Test-FW-RG**. 7. For **Region**, select the same location that you used previously.
For the **Workload-SN** subnet, configure the outbound default route to go throu
After deployment completes, select **Go to resource**.
-1. On the Firewall-route page, select **Subnets** and then select **Associate**.
+1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
1. Select **Virtual network** > **Test-FW-VN**. 1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly. 13. Select **OK**. 14. Select **Routes** and then select **Add**. 15. For **Route name**, type **fw-dg**.
-16. For **Address prefix**, type **0.0.0.0/0**.
-17. For **Next hop type**, select **Virtual appliance**.
+1. For **Address prefix destination**, select **IP Addresses**.
+1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
+1. For **Next hop type**, select **Virtual appliance**.
Azure Firewall is actually a managed service, but virtual appliance works in this situation. 18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
This is the network rule that allows outbound access to two IP addresses at port
2. For **Destination type** select **IP address**. 3. For **Destination address**, type **209.244.0.3,209.244.0.4**
- These are public DNS servers operated by CenturyLink.
+ These are public DNS servers operated by Level3.
1. For **Destination Ports**, type **53**. 2. Select **Add**.
This rule allows you to connect a remote desktop to the Srv-Work virtual machine
8. For **Source**, type **\***. 9. For **Destination address**, type the firewall public IP address. 10. For **Destination Ports**, type **3389**.
-11. For **Translated address**, type the **Srv-work** private IP address.
+11. For **Translated address**, type the Srv-work private IP address.
12. For **Translated port**, type **3389**. 13. Select **Add**.
For testing purposes, configure the server's primary and secondary DNS addresses
Now, test the firewall to confirm that it works as expected.
-1. Connect a remote desktop to firewall public IP address and sign in to the **Srv-Work** virtual machine.
-3. Open Internet Explorer and browse to `https://www.google.com`.
+1. Connect a remote desktop to the firewall public IP address and sign in to the Srv-Work virtual machine.
+1. Open Internet Explorer and browse to `https://www.google.com`.
4. Select **OK** > **Close** on the Internet Explorer security alerts. You should see the Google home page.
Now, test the firewall to confirm that it works as expected.
So now you've verified that the firewall rules are working:
+* You can connect to the virtual machine using RDP.
* You can browse to the one allowed FQDN, but not to any others. * You can resolve DNS names using the configured external DNS server.
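
If you prefer to script this verification from the Srv-Work virtual machine, a quick sketch like the following exercises the application rule, which in this tutorial allows only `www.google.com`. It assumes Python with the third-party `requests` package is available on the VM.

```python
# Quick sketch, run from Srv-Work: the tutorial's application rule should let
# www.google.com through and block any other FQDN. Requires 'requests'.
import requests

CHECKS = {
    "https://www.google.com": True,   # allowed by the application rule
    "https://www.bing.com": False,    # any other FQDN should be blocked
}

for url, should_pass in CHECKS.items():
    try:
        requests.get(url, timeout=10)
        reached = True
    except requests.RequestException:
        reached = False
    verdict = "as expected" if reached == should_pass else "UNEXPECTED -- review rules"
    print(f"{url}: {'reachable' if reached else 'blocked'} ({verdict})")
```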
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
initiative definition.
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
-|[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
+|[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](/azure/azure-cache-for-redis/cache-private-link). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | |[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
-|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
-|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [/azure/key-vault/general/network-security](/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [/azure/machine-learning/how-to-configure-private-link](/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
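The storage network-access rows above can be illustrated the same way; a sketch, assuming hypothetical account and virtual network names:

```azurecli
# Deny traffic from all networks by default.
az storage account update --resource-group contoso-rg --name contosostorage --default-action Deny

# Grant access back to a single subnet (the preferred virtual-network-rule approach).
az storage account network-rule add --resource-group contoso-rg --account-name contosostorage \
  --vnet-name contoso-vnet --subnet default
```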
-|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
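Several of these definitions audit private link usage. A generic sketch of mapping a private endpoint to a resource, here a storage account's blob service; the names, the subscription ID placeholder, and the group ID (which varies by service) are illustrative:

```azurecli
# Create a private endpoint for the blob sub-resource of a storage account.
az network private-endpoint create --resource-group contoso-rg --name contoso-pe \
  --vnet-name contoso-vnet --subnet default \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.Storage/storageAccounts/contosostorage" \
  --group-id blob --connection-name contoso-pe-connection
```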
### Deploy firewall at the edge of enterprise network
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cosmos DB database accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5450f5bd-9c72-4390-a9c4-a7aba4edfdd2) |Disabling local authentication methods improves security by ensuring that Cosmos DB database accounts exclusively require Azure Active Directory identities for authentication. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_DisableLocalAuth_AuditDeny.json) |
+|[Cosmos DB database accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5450f5bd-9c72-4390-a9c4-a7aba4edfdd2) |Disabling local authentication methods improves security by ensuring that Cosmos DB database accounts exclusively require Azure Active Directory identities for authentication. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth](/azure/cosmos-db/how-to-setup-rbac#disable-local-auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_DisableLocalAuth_AuditDeny.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |

### Manage application identities securely and automatically
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](/azure/virtual-machines/linux/create-ssh-keys-detailed). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
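For the SSH-key policy above, a sketch of creating a Linux VM with key-only authentication (when no admin password is supplied, password logins are disabled); the image alias depends on your CLI version, and the other names are hypothetical:

```azurecli
# Generate a key pair if one doesn't exist and provision a key-only Linux VM.
az vm create --resource-group contoso-rg --name contoso-vm \
  --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
```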
initiative definition.
|||||
|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
-|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
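Soft delete is enabled by default on newly created vaults, but purge protection (audited above) must be switched on explicitly; a sketch with a hypothetical vault name:

```azurecli
# Purge protection is irreversible once enabled.
az keyvault update --resource-group contoso-rg --name contoso-kv --enable-purge-protection true
```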
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
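The Defender plans audited in this table are enabled per subscription. A sketch using `az security pricing`; the plan names here are assumptions mapped from the rows above:

```azurecli
# Enable Defender plans at the subscription level (plan names are assumptions).
az security pricing create --name AppServices --tier standard
az security pricing create --name SqlServers --tier standard
az security pricing create --name Dns --tier standard
```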
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
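For the AKS Defender profile row above, recent Azure CLI versions expose an `--enable-defender` flag on `az aks update`; a sketch, assuming hypothetical names and a CLI version that carries the flag:

```azurecli
# Enable the Defender profile on an existing AKS cluster.
az aks update --resource-group contoso-rg --name contoso-aks --enable-defender
```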
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
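Any of the definitions in these tables can be assigned directly by GUID; for example, the 'Monitor missing Endpoint Protection' definition above (the assignment name and scope placeholder are illustrative):

```azurecli
az policy assignment create --name monitor-endpoint-protection \
  --policy af6cd1bd-1635-48cb-bde7-5b15693900b9 \
  --scope /subscriptions/<subscription-id>
```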
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government) description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
This built-in initiative is deployed as part of the
|||||
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-### Ensure ASC Default policy setting "Monitor Endpoint Protection" is not "Disabled"
+### Ensure ASC Default policy setting "Monitor Disk Encryption" is not "Disabled"
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.5
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.6
**Ownership**: Customer

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
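Once a definition or initiative is assigned, compliance for controls like these can be queried from the CLI; a sketch, reusing the assignment name from the earlier example and assuming standard Policy Insights filter syntax:

```azurecli
# Summarize compliance for one assignment.
az policy state summarize --policy-assignment monitor-endpoint-protection

# List a few non-compliant resources across the subscription.
az policy state list --filter "complianceState eq 'NonCompliant'" --top 5
```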
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Rbi_Itf_Nbfc_V2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi_itf_nbfc_v2017.md
+
+ Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC
+description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated : 08/01/2022
+# Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative
+
+The following article details how the Azure Policy Regulatory Compliance built-in initiative
+definition maps to **compliance domains** and **controls** in Reserve Bank of India - IT Framework for NBFC.
+For more information about this compliance standard, see
+[Reserve Bank of India - IT Framework for NBFC](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=10999&Mode=0#C1). To understand
+_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
+[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+The following mappings are to the **Reserve Bank of India - IT Framework for NBFC** controls. Use the
+navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
+initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
+Then, find and select the **[Preview]: Reserve Bank of India - IT Framework for NBFC** Regulatory Compliance built-in
+initiative definition.
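
As an alternative to the portal steps above, the built-in initiative can be located and assigned from the CLI; the JMESPath filter, assignment name, and scope placeholder are illustrative:

```azurecli
# Find the built-in initiative by display name.
az policy set-definition list \
  --query "[?contains(displayName, 'Reserve Bank of India')].{name:name, displayName:displayName}" -o table

# Assign it at subscription scope, using the 'name' value returned above.
az policy assignment create --name rbi-itf-nbfc \
  --policy-set-definition <set-definition-name> \
  --scope /subscriptions/<subscription-id>
```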
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/RBI_ITF_NBFC_v2017.json).
+
+## IT Governance
+
+### IT Governance-1
+
+**ID**: RBI IT Framework 1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing system security updates on your servers will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
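+The rows above are audit-only controls, so their output is compliance data rather than configuration changes. As a quick way to see how one of these definitions is evaluating, the Azure CLI can summarize policy states; a minimal sketch, assuming the `az policy state` commands are available in your CLI version (the GUID is the vulnerability assessment definition from the table, referenced by its definition name):
+
+```azurecli
+# Summarize compliance results for one built-in definition,
+# scoped to the CLI's current subscription.
+az policy state summarize \
+    --policy-definition "501541f7-f7e7-4cd6-868c-4190fdad3ac9"
+```
+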
+### IT Governance-1.1
+
+**ID**: RBI IT Framework 1.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (an exception is when the VM is used as a network virtual appliance), so enabling it should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+
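+Because each of these network controls is a built-in definition, bringing a subscription under them only requires a policy assignment. A minimal Azure CLI sketch, with placeholder names and scope (the GUID is the just-in-time access definition listed above):
+
+```azurecli
+# Assign the built-in JIT network access audit to a subscription.
+az policy assignment create \
+    --name "audit-jit-mgmt-ports" \
+    --display-name "Management ports should use just-in-time access" \
+    --policy "b0f33259-77d7-4c9e-aac6-3aabcfae693c" \
+    --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
+```
+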
+## IT Policy
+
+### IT Policy-2
+
+**ID**: RBI IT Framework 2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+
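+Both adaptive application control policies use the AuditIfNotExists effect, so affected machines surface as non-compliant resources rather than being blocked. A sketch of listing those machines with the Azure CLI (the GUID is the first definition in this table; the filter fields follow the policy states schema):
+
+```azurecli
+# List resources flagged by the adaptive application controls audit.
+az policy state list \
+    --filter "policyDefinitionName eq '47a6b606-51aa-4496-8bb7-64b11cf66adc' and complianceState eq 'NonCompliant'" \
+    --query "[].resourceId" \
+    --output tsv
+```
+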
+## Information and Cyber Security
+
+### Information Security-3
+
+**ID**: RBI IT Framework 3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+
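+This audit passes once the Defender for Storage plan is enabled on the subscription. A minimal remediation sketch, assuming the `az security pricing` commands are available in your CLI version and that `StorageAccounts` is the plan name the API expects:
+
+```azurecli
+# Enable the Defender plan for storage accounts at the standard tier.
+az security pricing create --name "StorageAccounts" --tier "Standard"
+```
+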
+### Identification and Classification of Information Assets-3.1
+
+**ID**: RBI IT Framework 3.1.a
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit the use of custom RBAC roles; prefer built-in roles such as 'Owner, Contributor, Reader', which are less error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+
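+Several of these controls come down to reviewing which identities hold privileged role assignments. A quick inventory sketch with the Azure CLI (the subscription ID is a placeholder; the output fields follow the role assignment schema):
+
+```azurecli
+# List every Owner role assignment on the subscription for review.
+az role assignment list \
+    --role "Owner" \
+    --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
+    --query "[].{principal:principalName, type:principalType}" \
+    --output table
+```
+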
+### Segregation of Functions-3.1
+
+**ID**: RBI IT Framework 3.1.b
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines. |Audit, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) |
+|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
+|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
+|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+
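+Two of the rows above audit remote debugging on App Service and Function apps; remediation is a site configuration change. A sketch for a web app, with placeholder resource group and app names:
+
+```azurecli
+# Turn remote debugging off so the app passes the audit.
+az webapp config set \
+    --resource-group "my-rg" \
+    --name "my-webapp" \
+    --remote-debugging-enabled false
+```
+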
+### Role based Access Control-3.1
+
+**ID**: RBI IT Framework 3.1.c
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy checks whether a log profile is enabled for exporting activity logs. It audits if no log profile has been created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+
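+Each definition in this table lists two selectable effects (for example, AuditIfNotExists or Disabled), which are exposed as an assignment parameter. A sketch that pins the effect explicitly at assignment time, assuming the built-in follows the conventional `effect` parameter name (the scope is a placeholder):
+
+```azurecli
+# Assign the MFA-for-owners audit with an explicit effect value.
+az policy assignment create \
+    --name "mfa-owner-audit" \
+    --policy "aa633080-8b72-40c4-a2d7-d00c03e80bed" \
+    --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
+    --params '{"effect": {"value": "AuditIfNotExists"}}'
+```
+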
+### Maker-checker-3.1
+
+**ID**: RBI IT Framework 3.1.f
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit the use of custom RBAC roles; prefer built-in roles such as 'Owner, Contributor, Reader', which are less error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
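+After remediating (for example, enabling the Defender plans above), compliance results refresh on Azure Policy's own evaluation cycle, which can lag by hours. An on-demand evaluation can be requested instead; a sketch, assuming the `az policy state trigger-scan` command is available in your CLI version:
+
+```azurecli
+# Request an on-demand compliance evaluation for the current subscription.
+az policy state trigger-scan
+```
+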
+### Trails-3.1
+
+**ID**: RBI IT Framework 3.1.g
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) |
+|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers at [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported in the region; see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data; see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key ensures that your Application Insights logs meet this compliance requirement; see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits any Azure Monitor log profile that does not export activities from all Azure-supported regions, including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy verifies that a log profile is enabled for exporting activity logs. It audits subscriptions with no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audits network security groups to verify whether flow logs are configured. Enabling flow logs lets you log information about IP traffic flowing through a network security group, which can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audits flow log resources to verify whether flow log status is enabled. Enabling flow logs lets you log information about IP traffic flowing through a network security group, which can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) that do not have the Log Analytics agent installed; Security Center uses the agent to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) |
+|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) |
+|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
+|[Log Analytics workspaces should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6c53d030-cc64-46f0-906d-2bc061cd1334) |Improve workspace security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs on this workspace. Learn more at [https://aka.ms/AzMonPrivateLink#configure-log-analytics](https://aka.ms/AzMonPrivateLink#configure-log-analytics). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_NetworkAccessEnabled_Deny.json) |
+|[Log Analytics Workspaces should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe15effd4-2278-4c65-a0da-4d6f6d1890e2) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker, which could lead to incorrect status, false alerts, and incorrect logs stored in the system. |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_DisableLocalAuth_Deny.json) |
+|[Log checkpoints should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e43d) |This policy helps audit any PostgreSQL databases in your environment without log_checkpoints setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogCheckpoint_Audit.json) |
+|[Log connections should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e442) |This policy helps audit any PostgreSQL databases in your environment without log_connections setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogConnections_Audit.json) |
+|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) |
+|[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
+|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
+
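+The controls in this table are enforced simply by assigning the listed built-in definitions to a scope. As a rough, non-authoritative sketch (not part of this compliance mapping), the following Python snippet assigns the "Azure Defender for servers should be enabled" definition from the table above at subscription scope through the ARM REST API. The subscription ID, assignment name, and bearer token are placeholders, and the snippet assumes the definition exposes the usual 'effect' parameter.
+
+```python
+# Hypothetical sketch: assign a built-in policy definition at subscription
+# scope via the ARM REST API. All credentials and IDs below are placeholders.
+import json
+import requests
+
+SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
+TOKEN = "<bearer-token>"                                  # placeholder
+
+# Definition ID for "Azure Defender for servers should be enabled",
+# copied from the table above.
+DEFINITION_ID = (
+    "/providers/Microsoft.Authorization/policyDefinitions/"
+    "4da35fc9-c9e7-4960-aec9-797fe7d9051d"
+)
+
+scope = f"/subscriptions/{SUBSCRIPTION_ID}"
+url = (
+    f"https://management.azure.com{scope}"
+    "/providers/Microsoft.Authorization/policyAssignments/defender-servers"
+    "?api-version=2021-06-01"
+)
+
+body = {
+    "properties": {
+        "displayName": "Azure Defender for servers should be enabled",
+        "policyDefinitionId": DEFINITION_ID,
+        # Assumption: the definition takes an 'effect' parameter with the
+        # allowed values shown in the table (AuditIfNotExists, Disabled).
+        "parameters": {"effect": {"value": "AuditIfNotExists"}},
+    }
+}
+
+resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"}, json=body)
+resp.raise_for_status()
+print(json.dumps(resp.json(), indent=2))
+```
+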
+### Public Key Infrastructure (PKI)-3.1
+
+**ID**: RBI IT Framework 3.1.h
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[App Configuration should use a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F967a4b4b-2da9-43c1-b7d0-f98d0d74d0b1) |Customer-managed keys provide enhanced data protection by allowing you to manage your encryption keys. This is often required to meet compliance requirements. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/CustomerManagedKey_Audit.json) |
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[App Service Environment should have internal encryption enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb74e86f-d351-4b8d-b034-93da7391c01f) |Setting InternalEncryption to true encrypts the pagefile, worker disks, and internal network traffic between the front ends and workers in an App Service Environment. To learn more, refer to [https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-custom-settings#enable-internal-encryption](https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-custom-settings#enable-internal-encryption). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_HostingEnvironment_InternalEncryption_Audit.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported in the region; see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) |
+|[Disk encryption should be enabled on Azure Data Explorer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff4b53539-8df9-40e4-86c6-6b607703bd4e) |Enabling disk encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Data%20Explorer/ADX_disk_encrypted.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Infrastructure encryption should be enabled for Azure Database for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a58212a-c829-4f13-9872-6371df2fd0b4) |Enable infrastructure encryption for Azure Database for MySQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_InfrastructureEncryption_Audit.json) |
+|[Infrastructure encryption should be enabled for Azure Database for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F24fba194-95d6-48c0-aea7-f65bf859c598) |Enable infrastructure encryption for Azure Database for PostgreSQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_InfrastructureEncryption_Audit.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
+|[Managed disks should use a specific set of disk encryption sets for the customer-managed key encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd461a302-a187-421a-89ac-84acdb4edc04) |Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest. You are able to select the allowed encryption sets, and all others are rejected when attached to a disk. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ManagedDiskEncryptionSetsAllowed_Deny.json) |
+|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](https://aka.ms/encryption-scopes-overview). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) |
+|[Storage account encryption scopes should use double encryption for data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbfecdea6-31c4-4045-ad42-71b9dc87247d) |Enable infrastructure encryption for encryption at rest of your storage account encryption scopes for added security. Infrastructure encryption ensures that your data is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageEncryptionScopesShouldUseDoubleEncryption_Audit.json) |
+|[Storage accounts should have infrastructure encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4733ea7b-a883-42fe-8cac-97454c2a9e4a) |Enable infrastructure encryption for higher level of assurance that the data is secure. When infrastructure encryption is enabled, data in a storage account is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountInfrastructureEncryptionEnabled_Audit.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. you use encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+
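+Once these encryption and PKI definitions are assigned, their evaluation results can be pulled programmatically. As a minimal sketch (an illustration, not an official sample), the following Python snippet queries the Azure Policy Insights REST API for resources that are currently non-compliant in a subscription; the subscription ID and bearer token are placeholders.
+
+```python
+# Hypothetical sketch: list non-compliant policy states for a subscription
+# through the PolicyInsights REST API. IDs and token are placeholders.
+import requests
+
+SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
+TOKEN = "<bearer-token>"                                  # placeholder
+
+url = (
+    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
+    "/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults"
+)
+
+resp = requests.post(
+    url,
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    params={
+        "api-version": "2019-10-01",
+        # Only resources that currently fail their assigned policies.
+        "$filter": "complianceState eq 'NonCompliant'",
+        "$top": 50,
+    },
+)
+resp.raise_for_status()
+
+for state in resp.json().get("value", []):
+    print(state["policyDefinitionName"], "->", state["resourceId"])
+```
+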
+### Vulnerability Management-3.3
+
+**ID**: RBI IT Framework 3.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives the scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |
+
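+Once assignments based on these definitions are in place, you can check which resources are falling out of compliance from the command line. The following Azure CLI sketch is a minimal, illustrative example rather than part of the initiative itself; it assumes the CLI is logged in, and it reuses the definition ID of the machine security-configuration policy from the table above:
+
+```azurecli
+# List non-compliant resources for "Vulnerabilities in security configuration
+# on your machines should be remediated" (definition e1e5fd5d-...).
+az policy state list \
+  --filter "policyDefinitionName eq 'e1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15' and complianceState eq 'NonCompliant'" \
+  --query "[].resourceId" \
+  --output tsv
+```
+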
+### Digital Signatures-3.8
+
+**ID**: RBI IT Framework 3.8
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[Certificates should be issued by the specified integrated certificate authority](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e826246-c976-48f6-b03e-619bb92b3d82) |Manage your organizational compliance requirements by specifying the Azure integrated certificate authorities that can issue certificates in your key vault such as Digicert or GlobalSign. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_Issuers_SupportedCAs.json) |
+|[Certificates should use allowed key types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1151cede-290b-4ba0-8b38-0ad145ac888f) |Manage your organizational compliance requirements by restricting the key types allowed for certificates. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_AllowedKeyTypes.json) |
+|[Certificates using elliptic curve cryptography should have allowed curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd78111f-4953-4367-9fd5-7e08808b54bf) |Manage the allowed elliptic curve names for ECC Certificates stored in key vault. More information can be found at [https://aka.ms/akvpolicy](https://aka.ms/akvpolicy). |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_EC_AllowedCurveNames.json) |
+|[Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) |Manage your organizational compliance requirements by specifying a minimum key size for RSA certificates stored in your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_RSA_MinimumKeySize.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
+
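+Audit-only effects report drift; to block non-compliant certificates outright, a definition such as the maximum-validity policy above can be assigned with its Deny effect and a parameter value. A hedged sketch, assuming the parameter is named `maximumValidityInMonths` (verify against the definition JSON linked in the table) and using placeholder scope values:
+
+```azurecli
+# Assign "Certificates should have the specified maximum validity period"
+# with a 12-month cap and a Deny effect at subscription scope.
+az policy assignment create \
+  --name 'cert-max-validity' \
+  --policy '0a075868-4c26-42ef-914c-5bc007359560' \
+  --params '{"maximumValidityInMonths": {"value": 12}, "effect": {"value": "Deny"}}' \
+  --scope '/subscriptions/<subscription-id>'
+```
+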
+## IT Operations
+
+### IT Operations-4.2
+
+**ID**: RBI IT Framework 4.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and detection of specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+
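+This policy only audits whether the Dependency agent is present. If a Linux VM is flagged, one way to remediate is to install the agent through its VM extension; a minimal sketch with placeholder resource names:
+
+```azurecli
+# Install the Dependency agent extension on an existing Linux VM.
+az vm extension set \
+  --resource-group <resource-group> \
+  --vm-name <vm-name> \
+  --name DependencyAgentLinux \
+  --publisher Microsoft.Azure.Monitoring.DependencyAgent
+```
+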
+### IT Operations-4.4
+
+**ID**: RBI IT Framework 4.4.a
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+
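+Both Azure Defender plans in this table are enabled per subscription rather than per resource. A short sketch; the plan names below are the ones `az security pricing` uses for these services, but verify them against your CLI version:
+
+```azurecli
+# Turn on Azure Defender (standard tier) for App Service and SQL servers.
+az security pricing create --name AppServices --tier Standard
+az security pricing create --name SqlServers --tier Standard
+```
+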
+### MIS For Top Management-4.4
+
+**ID**: RBI IT Framework 4.4.b
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+
+## IS Audit
+
+### Policy for Information System Audit (IS Audit)-5
+
+**ID**: RBI IT Framework 5
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next-generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next-generation firewall. |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit network security groups to verify whether flow logs are configured. Enabling flow logs lets you log information about IP traffic flowing through a network security group, which can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether the flow log status is enabled. Enabling flow logs lets you log information about IP traffic flowing through a network security group, which can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP firewall rules on Azure Synapse workspaces should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F56fd377d-098c-4f02-8406-81eb055902b8) |Removing all IP firewall rules improves security by ensuring your Azure Synapse workspace can only be accessed from a private endpoint. This configuration audits creation of firewall rules that allow public network access on the workspace. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceFirewallRules_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F425bea59-a659-4cbb-8d31-34499bd030b8) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Azure Front Door Service. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Mode_Audit.json) |
+
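+For the two flow-log policies in the table above, remediation means creating and enabling a Network Watcher flow log for each network security group. A hedged sketch with placeholder names, assuming Network Watcher is already enabled in the target region:
+
+```azurecli
+# Create and enable a flow log for an existing network security group.
+az network watcher flow-log create \
+  --resource-group <resource-group> \
+  --location <region> \
+  --name <flow-log-name> \
+  --nsg <nsg-name> \
+  --storage-account <storage-account-id> \
+  --enabled true
+```
+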
+### Coverage-5.2
+
+**ID**: RBI IT Framework 5.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+
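+To bring an unprotected VM into compliance with the Azure Backup policy above, enroll it in a Recovery Services vault. A minimal sketch with placeholder names, assuming the vault and its default backup policy already exist:
+
+```azurecli
+# Enable backup for a VM using the vault's default policy.
+az backup protection enable-for-vm \
+  --resource-group <resource-group> \
+  --vault-name <vault-name> \
+  --vm <vm-name> \
+  --policy-name DefaultPolicy
+```
+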
+## Business Continuity Planning
+
+### Business Continuity Planning (BCP) and Disaster Recovery-6
+
+**ID**: RBI IT Framework 6
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+
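+Because the geo-redundant backup option can only be chosen while the database server is being created, compliance with the geo-redundancy policies above has to be designed in up front rather than remediated later. A hedged example for Azure Database for MySQL single server, with placeholder names and credentials:
+
+```azurecli
+# Create a MySQL server with geo-redundant backup storage.
+az mysql server create \
+  --resource-group <resource-group> \
+  --name <server-name> \
+  --location <region> \
+  --admin-user <admin-user> \
+  --admin-password <admin-password> \
+  --sku-name GP_Gen5_2 \
+  --geo-redundant-backup Enabled
+```
+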
+### Recovery strategy / Contingency Plan-6.2
+
+**ID**: RBI IT Framework 6.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+
+### Recovery strategy / Contingency Plan-6.3
+
+**ID**: RBI IT Framework 6.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+
+### Recovery strategy / Contingency Plan-6.4
+
+**ID**: RBI IT Framework 6.4
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+
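+Rather than assigning each definition individually, the whole built-in initiative can be assigned once at subscription or management-group scope. A sketch; the display-name filter below is an assumption about how the initiative is listed, so adjust it to match what `az policy set-definition list` returns in your tenant:
+
+```azurecli
+# Find the built-in initiative by display name, then assign it at
+# subscription scope. The 'Reserve Bank of India' filter is a placeholder.
+initiative=$(az policy set-definition list \
+  --query "[?contains(displayName, 'Reserve Bank of India')].name | [0]" -o tsv)
+az policy assignment create \
+  --name 'rbi-itf-nbfc' \
+  --policy-set-definition "$initiative" \
+  --scope '/subscriptions/<subscription-id>'
+```
+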
+## Next steps
+
+Additional articles about Azure Policy:
+
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
-|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
## Network Resilience
initiative definition.
|||||
|[App Configuration should use a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F967a4b4b-2da9-43c1-b7d0-f98d0d74d0b1) |Customer-managed keys provide enhanced data protection by allowing you to manage your encryption keys. This is often required to meet compliance requirements. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/CustomerManagedKey_Audit.json) |
|[Azure Container Instance container group should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0aa61e00-0a01-4a3c-9945-e93cffedf0e6) |Secure your containers with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled, Deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Instance/ContainerInstance_CMK_Audit.json) |
-|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
initiative definition.
|[Managed disks should use a specific set of disk encryption sets for the customer-managed key encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd461a302-a187-421a-89ac-84acdb4edc04) |Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest. You can select the allowed encryption sets; all others are rejected when attached to a disk. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ManagedDiskEncryptionSetsAllowed_Deny.json) |
|[OS and data disks should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F702dd420-7fcc-42c5-afe8-4026edd20fe0) |Use customer-managed keys to manage the encryption at rest of the contents of your managed disks. By default, the data is encrypted at rest with platform-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/OSAndDataDiskCMKRequired_Deny.json) |
|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
-|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
initiative definition.
|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
-|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
|[Azure Monitor solution 'Security and Audit' must be deployed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3e596b57-105f-48a6-be97-03e9243bad6e) |This policy ensures that Security and Audit is deployed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Security_Audit_MustBeDeployed.json) |
|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
initiative definition.
|[Deploy Diagnostic Settings for Stream Analytics to Event Hub](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fedf3780c-3d70-40fe-b17e-ab72013dafca) |Deploys the diagnostic settings for Stream Analytics to stream to a regional Event Hub when any Stream Analytics which is missing this diagnostic settings is created or updated. |DeployIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/StreamAnalytics_DeployDiagnosticLog_Deploy_EventHub.json) |
|[Deploy Diagnostic Settings for Stream Analytics to Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F237e0f7e-b0e8-4ec4-ad46-8c12cb66d673) |Deploys the diagnostic settings for Stream Analytics to stream to a regional Log Analytics workspace when any Stream Analytics which is missing this diagnostic settings is created or updated. |DeployIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/StreamAnalytics_DeployDiagnosticLog_Deploy_LogAnalytics.json) |
|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
-|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](https://docs.microsoft.com/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
-|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
|[Configure App Configuration to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73290fa2-dfa7-4bbb-945d-a5e23b75df2c) |Disable public network access for App Configuration so that it isn't accessible over the public internet. This configuration helps protect them against data leakage risks. You can limit exposure of your resources by creating private endpoints instead. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_PublicNetworkAccess_Modify.json) |
|[Configure Azure SQL Server to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F28b0b1e5-17ba-4963-a7a4-5a1ab4400a0b) |Disabling the public network access property shuts down public connectivity such that Azure SQL Server can only be accessed from a private endpoint. This configuration disables the public network access for all databases under the Azure SQL Server. |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Modify.json) |
|[Configure Container registries to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3701552-92ea-433e-9d17-33b7f1208fc9) |Disable public network access for your Container Registry resource so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PublicNetworkAccess_Modify.json) |
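As an illustrative sketch of how one of the definitions above can be put to use (the assignment name and scope are placeholders you'd replace with your own), a built-in definition can be assigned with the Azure CLI by its definition ID:

```
# Assign the "Resource logs in Key Vault should be enabled" built-in
# definition (ID cf820ca0-f99e-4f3e-84fb-66e913812d21) at resource group scope.
az policy assignment create \
    --name 'audit-keyvault-resource-logs' \
    --display-name 'Resource logs in Key Vault should be enabled' \
    --policy 'cf820ca0-f99e-4f3e-84fb-66e913812d21' \
    --scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
```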
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
description: Learn how to use Azure Monitor logs to monitor jobs running in an H
Previously updated : 09/21/2021 Last updated : 08/01/2022 # Use Azure Monitor logs to monitor HDInsight clusters
az hdinsight monitor disable --name $cluster --resource-group $resourceGroup
``` ## <a name="oms-with-firewall"></a>Prerequisites for clusters behind a firewall
-To be able to successfully setup Azure Monitor integration with HDInsight, behind a firewall, some customers may need to enable the following endpoints:
+To successfully set up Azure Monitor integration with HDInsight behind a firewall, some customers may need to enable the following endpoints:
|Agent Resource | Ports | Direction | Bypass HTTPS inspection |
|---|---|---|---|
Once the setup is successful, enabling necessary endpoints for data ingestion is
## Install HDInsight cluster management solutions
-HDInsight provides cluster-specific management solutions that you can add for Azure Monitor logs. [Management solutions](../azure-monitor/insights/solutions.md) add functionality to Azure Monitor logs, providing more data and analysis tools. These solutions collect important performance metrics from your HDInsight clusters. And provide the tools to search the metrics. These solutions also provide visualizations and dashboards for most cluster types supported in HDInsight. By using the metrics that you collect with the solution, you can create custom monitoring rules and alerts.
+HDInsight provides cluster-specific management solutions that you can add for Azure Monitor Logs. [Management solutions](../azure-monitor/insights/solutions.md) add functionality to Azure Monitor Logs, providing more data and analysis tools. These solutions collect important performance metrics from your HDInsight clusters and provide the tools to search those metrics. They also provide visualizations and dashboards for most cluster types supported in HDInsight. By using the metrics that you collect with the solution, you can create custom monitoring rules and alerts.
Available HDInsight solutions:
If you have Azure Monitor Integration enabled on a cluster, updating the OMS age
``` ## Next steps-
+* [Selective logging analysis](selective-logging-analysis.md)
* [Query Azure Monitor logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md) * [How to monitor cluster availability with Apache Ambari and Azure Monitor logs](./hdinsight-cluster-availability.md)
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
Title: Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
-description: Set up Hadoop, Kafka, Spark, HBase, or Storm clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK.
+description: Set up Hadoop, Kafka, Spark, or HBase clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK.
Previously updated : 03/30/2022 Last updated : 07/22/2022 # Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more [!INCLUDE [selector](includes/hdinsight-create-linux-cluster-selector.md)]
-Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Interactive Query, Apache HBase, or Apache Storm in HDInsight. Also, learn how to customize clusters and add security by joining them to a domain.
+Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Interactive Query, or Apache HBase in HDInsight. Also, learn how to customize clusters and add security by joining them to a domain.
A Hadoop cluster consists of several virtual machines (nodes) that are used for distributed processing of tasks. Azure HDInsight handles implementation details of installation and configuration of individual nodes, so you only have to provide general configuration information.
-> [!IMPORTANT]
+> [!IMPORTANT]
> HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. Learn how to [delete a cluster.](hdinsight-delete-cluster.md) If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
You don't need to specify the cluster location explicitly: The cluster is in the
Azure HDInsight currently provides the following cluster types, each with a set of components to provide certain functionalities.
-> [!IMPORTANT]
-> HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such as Storm and HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
+> [!IMPORTANT]
+> HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such as Spark and HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
| Cluster type | Functionality |
| -- | -- |
Azure HDInsight currently provides the following cluster types, each with a set
| [Interactive Query](./interactive-query/apache-interactive-query-get-started.md) |In-memory caching for interactive and faster Hive queries |
| [Kafka](kafk) | A distributed streaming platform that can be used to build real-time streaming data pipelines and applications |
| [Spark](spark/apache-spark-overview.md) |In-memory processing, interactive queries, micro-batch stream processing |
-| [Storm](storm/apache-storm-overview.md) |Real-time event processing |
#### Version
With HDInsight clusters, you can configure two user accounts during cluster crea
The HTTP username has the following restrictions:

* Allowed special characters: `_` and `@`
-* Characters not allowed: #;."',\/:`!*?$(){}[]<>|&--=+%~^space
+* Characters not allowed: #;."',/:`!*?$(){}[]<>|&--=+%~^space
* Max length: 20

The SSH username has the following restrictions:

* Allowed special characters: `_` and `@`
-* Characters not allowed: #;."',\/:`!*?$(){}[]<>|&--=+%~^space
+* Characters not allowed: #;."',/:`!*?$(){}[]<>|&--=+%~^space
* Max length: 64
-* Reserved names: hadoop, users, oozie, hive, mapred, ambari-qa, zookeeper, tez, hdfs, sqoop, yarn, hcat, ams, hbase, storm, administrator, admin, user, user1, test, user2, test1, user3, admin1, 1, 123, a, actuser, adm, admin2, aspnet, backup, console, david, guest, john, owner, root, server, sql, support, support_388945a0, sys, test2, test3, user4, user5, spark
+* Reserved names: hadoop, users, oozie, hive, mapred, ambari-qa, zookeeper, tez, hdfs, sqoop, yarn, hcat, ams, hbase, administrator, admin, user, user1, test, user2, test1, user3, admin1, 1, 123, a, actuser, adm, admin2, aspnet, backup, console, david, guest, john, owner, root, server, sql, support, support_388945a0, sys, test2, test3, user4, user5, spark
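As an illustrative sketch (all names and secrets below are placeholders), supplying compliant cluster login (HTTP) and SSH credentials at creation time with the Azure CLI might look like this:

```
# Create a Hadoop cluster with an HTTP (Ambari) user and an SSH user whose
# names respect the restrictions above. All values are placeholders.
az hdinsight create \
    --name mycluster \
    --resource-group myresourcegroup \
    --type hadoop \
    --storage-account mystorageaccount \
    --http-user admin \
    --http-password '<cluster-login-password>' \
    --ssh-user sshuser \
    --ssh-password '<ssh-password>'
```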
## Storage
HDInsight clusters can use the following storage options:
For more information on storage options with HDInsight, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
-> [!WARNING]
+> [!WARNING]
> Using an additional storage account in a different location from the HDInsight cluster is not supported. During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify additional linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
During configuration, for the default storage endpoint you specify a blob contai
> [!IMPORTANT] > Enabling secure storage transfer after creating a cluster can result in errors using your storage account and is not recommended. It is better to create a new cluster using a storage account with secure transfer already enabled.
-> [!Note]
+> [!Note]
> Azure HDInsight does not automatically transfer, move or copy your data stored in Azure Storage from one region to another. ### Metastore settings
You can create optional Hive or Apache Oozie metastores. However, not all cluste
For more information, see [Use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. This can cause the cluster creation process to fail. #### SQL database for Hive
To increase performance when using Oozie, use a custom metastore. A metastore ca
Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information as well as job history. The custom Ambari DB feature allows you to deploy a new cluster and set up Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> You cannot reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster. ## Security + networking
For more information, see [Managed identities in Azure HDInsight](./hdinsight-ma
## Configuration + pricing You're billed for node usage for as long as the cluster exists. Billing starts when a cluster is created and stops when the cluster is deleted. Clusters can't be de-allocated or put on hold.
Each cluster type has its own number of nodes, terminology for nodes, and defaul
| -- | -- | -- |
| Hadoop |Head node (2), Worker node (1+) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-hadoop-cluster-type-nodes.png" alt-text="HDInsight Hadoop cluster nodes" border="false"::: |
| HBase |Head server (2), region server (1+), master/ZooKeeper node (3) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-hbase-cluster-type-setup.png" alt-text="HDInsight HBase cluster type setup" border="false"::: |
-| Storm |Nimbus node (2), supervisor server (1+), ZooKeeper node (3) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-storm-cluster-type-setup.png" alt-text="HDInsight storm cluster type setup" border="false"::: |
| Spark |Head node (2), Worker node (1+), ZooKeeper node (3) (free for A1 ZooKeeper VM size) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-spark-cluster-type-setup.png" alt-text="HDInsight spark cluster type setup" border="false"::: |

For more information, see [Default node configuration and virtual machine sizes for clusters](hdinsight-supported-node-configuration.md) in "What are the Hadoop components and versions in HDInsight?"
For more information, see [Default node configuration and virtual machine sizes
The cost of HDInsight clusters is determined by the number of nodes and the virtual machine sizes for the nodes. Different cluster types have different node types, numbers of nodes, and node sizes:
+* Hadoop cluster type default:
- * Two *head nodes*
- * Four *Worker nodes*
-* Storm cluster type default:
- * Two *Nimbus nodes*
- * Three *ZooKeeper nodes*
- * Four *supervisor nodes*
+ * Two *head nodes*
+ * Four *Worker nodes*
If you're just trying out HDInsight, we recommend you use one Worker node. For more information about HDInsight pricing, see [HDInsight pricing](https://go.microsoft.com/fwLink/?LinkID=282635&clcid=0x409).
-> [!NOTE]
+> [!NOTE]
> The cluster size limit varies among Azure subscriptions. Contact [Azure billing support](../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the limit. When you use the Azure portal to configure the cluster, the node size is available through the **Configuration + pricing** tab. In the portal, you can also see the cost associated with the different node sizes.
When you deploy clusters, choose compute resources based on the solution you pla
To find out what value you should use to specify a VM size while creating a cluster using the different SDKs or while using Azure PowerShell, see [VM sizes to use for HDInsight clusters](../cloud-services/cloud-services-sizes-specs.md#size-tables). From this linked article, use the value in the **Size** column of the tables.
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you need more than 32 Worker nodes in a cluster, you must select a head node size with at least 8 cores and 14 GB of RAM. For more information, see [Sizes for virtual machines](../virtual-machines/sizes.md). For information about pricing of the various sizes, see [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight).
+### Disk attachment
+
+On each of the **NodeManager** machines, **LocalResources** are ultimately localized in the target directories.
+
+By default, only the default disk is added as the local disk on each NodeManager. For large applications, this disk space may not be enough, which can result in job failure.
+
+If the cluster is expected to run large data applications, you can choose to add extra disks to the **NodeManager**.
+
+You can choose the number of disks per VM; each disk is 1 TB in size.
+
+1. Go to the **Configuration + pricing** tab.
+1. Select the **Enable managed disk** option.
+1. From **Standard disks**, enter the **Number of disks**.
+1. Choose your **Worker node**.
+
+You can verify the number of disks from the **Review + create** tab, under **Cluster configuration**.
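+
+The same configuration can also be applied at creation time from the command line. A minimal sketch, assuming the `--workernode-data-disks-per-node` parameter of `az hdinsight create` and placeholder resource names:
+
+```
+# Create a cluster with two extra 1 TB managed data disks attached to each
+# worker node. All names and secrets below are placeholders.
+az hdinsight create \
+    --name mycluster \
+    --resource-group myresourcegroup \
+    --type spark \
+    --storage-account mystorageaccount \
+    --http-password '<cluster-login-password>' \
+    --ssh-password '<ssh-password>' \
+    --workernode-data-disks-per-node 2
+```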
+ ### Add application
-An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft, third parties, or that you develop yourself. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
+An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft or third parties, or applications that you develop yourself. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
Most of the HDInsight applications are installed on an empty edge node. An empty edge node is a Linux virtual machine with the same client tools installed and configured as in the head node. You can use the edge node for accessing the cluster, testing your client applications, and hosting your client applications. For more information, see [Use empty edge nodes in HDInsight](hdinsight-apps-use-edge-node.md).
You can install additional components or customize cluster configuration by usin
Some native Java components, like Apache Mahout and Cascading, can be run on the cluster as Java Archive (JAR) files. These JAR files can be distributed to Azure Storage and submitted to HDInsight clusters with Hadoop job submission mechanisms. For more information, see [Submit Apache Hadoop jobs programmatically](hadoop/submit-apache-hadoop-jobs-programmatically.md).
-> [!NOTE]
+> [!NOTE]
> If you have issues deploying JAR files to HDInsight clusters, or calling JAR files on HDInsight clusters, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
->
+>
> Cascading is not supported by HDInsight and is not eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md). Sometimes, you want to configure the following configuration files during the creation process:
-* clusterIdentity.xml
-* core-site.xml
-* gateway.xml
-* hbase-env.xml
-* hbase-site.xml
-* hdfs-site.xml
-* hive-env.xml
-* hive-site.xml
-* mapred-site
-* oozie-site.xml
-* oozie-env.xml
-* storm-site.xml
-* tez-site.xml
-* webhcat-site.xml
-* yarn-site.xml
+ * clusterIdentity.xml
+ * core-site.xml
+ * gateway.xml
+ * hbase-env.xml
+ * hbase-site.xml
+ * hdfs-site.xml
+ * hive-env.xml
+ * hive-site.xml
+ * mapred-site
+ * oozie-site.xml
+ * oozie-env.xml
+ * tez-site.xml
+ * webhcat-site.xml
+ * yarn-site.xml
For more information, see [Customize HDInsight clusters using Bootstrap](hdinsight-hadoop-customize-cluster-bootstrap.md).
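As a hedged sketch of one way to set values in the configuration files listed above at creation time (assuming the `--cluster-configurations` parameter of `az hdinsight create` and a purely illustrative `core-site` setting):

```
# Pass component configurations as inline JSON at cluster creation time.
# The core-site value shown is only an illustration; other names are placeholders.
az hdinsight create \
    --name mycluster \
    --resource-group myresourcegroup \
    --type hadoop \
    --storage-account mystorageaccount \
    --http-password '<cluster-login-password>' \
    --ssh-password '<ssh-password>' \
    --cluster-configurations '{"core-site": {"fs.trash.interval": "60"}}'
```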
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Considering customer feedback, the Azure HDInsight team invested in integration
> [!NOTE]
-> New Azure Montitor integration is in Public Preview. It is only available in East US and West Europe regions.
+> The new Azure Monitor integration is in Public Preview across all regions where HDInsight is available.
## Benefits of the new Azure Monitor integration
This document outlines the changes to the Azure Monitor integration and provides
**Redesigned schemas**: The schema formatting for the new Azure Monitor integration is better organized and easy to understand. There are two-thirds fewer schemas to remove as much ambiguity in the legacy schemas as possible.
-**Selective Logging (releasing soon)**: There are logs and metrics available through Log Analytics. To help you save on monitoring costs, we'll be releasing a new selective logging feature. Use this feature to turn on and off different logs and metric sources. With this feature, you'll only have to pay for what you use.
+**Selective logging**: Logs and metrics are available through Log Analytics. To help you save on monitoring costs, you can use the selective logging feature to turn different logs and metric sources on and off, so that you only pay for what you use. For more details, see [Selective logging](selective-logging-analysis.md).
**Logs cluster portal integration**: The **Logs** pane is new to the HDInsight Cluster portal. Anyone with access to the cluster can go to this pane to query any table that the cluster resource sends records to. Users don't need access to the Log Analytics workspace anymore to see the records for a specific cluster resource.
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
+
+ Title: Use selective logging feature with script action in Azure HDInsight clusters
+description: Learn how to use the selective logging feature with a script action to monitor logs.
+++ Last updated : 07/31/2022++
+# Use the selective logging feature with a script action in Azure HDInsight
+
+[Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) is an Azure Monitor service that monitors your cloud and on-premises environments to help maintain their availability and performance. It collects data generated by resources in your cloud and on-premises environments and from other monitoring tools. In HDInsight, you enable the selective logging feature by using a script action in the Azure portal, which controls the data that's collected and analyzed across multiple sources.
+
+## About selective logging
+
+Selective logging is part of the overall Azure monitoring system. After you connect your cluster to a Log Analytics workspace and enable monitoring, you can see logs and metrics such as HDInsight security logs, YARN Resource Manager logs, and system metrics. You can monitor workloads and see how they're affecting cluster stability.
+Selective logging allows you to enable or disable all the tables, or enable selected tables, in the Log Analytics workspace. You can also adjust the source type for each table, because in the new version of Geneva monitoring one table has multiple sources.
+
+> [!NOTE]
+> If Log Analytics is reinstalled on a cluster, you'll have to disable all the tables and log types again, because the reinstallation resets all the configuration files to their original state.
+
+## Using script action
+
+* The Geneva monitoring system uses mdsd (the MDS daemon), a monitoring agent, and Fluentd to collect logs by using a unified logging layer.
+* Selective logging uses a script action to enable or disable tables and their log types. Because it doesn't open any new ports or change any existing security settings, there are no security changes.
+* The script action runs in parallel on all specified nodes and changes the configuration files to disable or enable tables and their log types.
+
+## Prerequisites
+
+* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
+* An Azure HDInsight cluster. Currently, you can use selective logging feature with the following HDInsight cluster types:
+ * Hadoop
+ * HBase
+ * Interactive Query
+ * Spark
+
+For the instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
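+
+For the workspace prerequisite above, if you don't already have one, the following is a minimal sketch for creating a workspace with the Azure CLI (the resource group, workspace name, and location are placeholders):
+
+```
+# Create the Log Analytics workspace that the HDInsight cluster will connect to.
+az monitor log-analytics workspace create \
+    --resource-group myresourcegroup \
+    --workspace-name myloganalyticsworkspace \
+    --location eastus
+```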
+
+## Enable/disable logs using script action for multiple tables and log types
+
+1. Go to **Script actions** in your cluster and create a new script action to disable or enable tables and log types.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-submit-script-action.png" alt-text="Screenshot showing select submit script action.":::
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/submit-script-action-window.png" alt-text="Screenshot showing submit script action window.":::
+
+1. For the script type, select **Custom**.
+1. Name the script. For example, **Disable two tables and two sources**.
+1. For the Bash script URL, use the link to [selectiveLoggingScript.sh](https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh).
+1. Select all the node types of the cluster. For example, head node, worker node, and ZooKeeper node.
+1. Define the parameters in the parameter box. For example:
+    - Spark: `spark HDInsightSparkLogs:SparkExecutorLog --disable`
+    - Interactive Query: `interactivehive HDInsightHiveAndLLAPLogs:InteractiveHiveHSILog --enable`
+    - Hadoop: `hadoop HDInsightHadoopAndYarnLogs:NodeManager --disable`
+    - HBase: `hbase HDInsightHBaseLogs --enable`
+
+ For more details, see [Parameters](#parameters-syntax) section.
+
+1. Select **Create**.
+1. After a few minutes, you'll see a green check mark next to your script action in the history, which means the script ran successfully.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/enable-table-and-log-types.png" alt-text="Screenshot showing enable table and log types.":::
+
+You'll see the desired changes in the Log Analytics workspace.
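+
+The same script action can also be submitted from the command line. The following is a sketch, assuming placeholder cluster and resource group names and reusing the script URL and example parameters from the steps above:
+
+```
+# Submit the selective logging script as a persisted script action on the
+# head, worker, and ZooKeeper nodes.
+az hdinsight script-action execute \
+    --cluster-name mycluster \
+    --resource-group myresourcegroup \
+    --name 'disable-spark-executor-log' \
+    --script-uri 'https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh' \
+    --script-action-parameters 'spark HDInsightSparkLogs:SparkExecutorLog --disable' \
+    --roles headnode workernode zookeepernode \
+    --persist-on-success
+```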
+
+## Troubleshooting
+
+### Scenario 1
+
+The script action was submitted, but there are no changes in the Log Analytics workspace. Try the following steps:
+
+1. Go to Ambari Home and check debug information.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-dashboard-ambari-home.png" alt-text="Screenshot showing select dashboard ambari home.":::
+
+1. Select the settings button.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/ambari-dash-board.png" alt-text="Screenshot showing ambari dash board.":::
+
+1. Your latest script run appears at the top of the list.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations.png" alt-text="Screenshot showing background operations.":::
+
+1. Verify the script run status in all the nodes individually.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations-all.png" alt-text="Screenshot showing background operations all.":::
+
+1. Check that the parameter syntax is correct, as described in the [Parameters syntax](#parameters-syntax) section.
+1. Check that the Log Analytics workspace is connected to the cluster and that Log Analytics monitoring is turned on.
+1. Check that the script you ran from the script action was marked as persisted.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/script-action-persists.png" alt-text="Screenshot showing script action persists.":::
+
+1. It's possible that a new node was added to the cluster recently.
+
+   > [!NOTE]
+   > For the script to run on newly added nodes, the script must be persisted.
+
+1. Make sure all the node types are selected while running the script action.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-node-types.png" alt-text="Screenshot showing select node types.":::
+
+### Scenario 2
+
+The script action shows a **Failed** status in the script action history. Try the following steps:
+
+1. Make sure the parameter syntax is correct, as described in the [Parameters syntax](#parameters-syntax) section.
+1. Check that the script link is correct. The correct link for the script is: https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh
+
+## Table names
+
+### Spark cluster
+
+Different log types (sources) inside **Spark** tables:
+
+| S.no | Table Name | Log Types | Description |
+| -- | -- | -- | -- |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | AmbariAuditLog, AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 5. | HDInsightSparkLogs | **Head Node:** JupyterLog, LivyLog, SparkThriftDriverLog **Worker Node:** SparkExecutorLog, SparkDriverLog | This table contains all logs related to Spark and its related components: Livy and Jupyter. |
+| 6. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 7. | HDInsightOozieLogs | Oozie | This table contains all logs generated from the Oozie framework. |
+
+### Interactive query cluster
+
+Different log types (sources) inside **Interactive Query** tables:
+
+| S.no | Table Name | Log Types | Description |
+| -- | -- | -- | -- |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | **Head Node:** InteractiveHiveHSILog, InteractiveHiveMetastoreLog, ZeppelinLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 6. | HDInsightHiveAndLLAPMetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
+| 7. | HDInsightHiveTezAppStats | No log types | |
+| 8. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Zookeeper Node, Worker Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+
+### HBase cluster
+
+Different log types (sources) inside **HBase** tables:
+
+| S.no | Table Name | Log Types | Description |
+| -- | -- | -- | -- |
+| 1. | HDInsightAmbariClusterAlerts | No other log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No other log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Worker Node:** AuthLog **ZooKeeper Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 5. | HDInsightHBaseLogs | **Head Node:** HDFSGarbageCollectorLog, HDFSNameNodeLog **Worker Node:** PhoenixServerLog, HBaseRegionServerLog, HBaseRestServerLog **Zookeeper Node:** HBaseMasterLog | This table contains logs from HBase and its related components: Phoenix and HDFS. |
+| 6. | HDInsightHBaseMetrics | No log types | This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast to the old tables, each row contains one metric. |
+| 7. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+
+### Hadoop cluster
+
+Different log types (sources) inside **Hadoop** tables:
+
+| S.no | Table Name | Log Types | Description |
+| -- | -- | -- | -- |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | **Head Node:** HiveMetastoreLog, HiveServer2Log, WebHcatLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 6. | HDInsightHiveAndLLAPMetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
+| 7. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Zookeeper Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+
+## Parameters syntax
+
+Parameters define the cluster type, table names, source names, and the action.
++
+A parameter contains three parts:
+- Cluster type
+- Tables and Log types
+- Action (The action can be either `--disable` or `--enable`.)
+
+* Multiple tables syntax.
+Rule: The tables are separated with a comma (,).
+
+For example,
+
+`spark HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --disable`
+
+`hbase HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --enable`
+
+> [!NOTE]
+> The tables are separated with a comma.
+
+* Multiple source types/log types.
+Rule: The source types/log types are separated with a space.
+Rule: To disable a source, write the name of the table that contains the log type, followed by a colon, and then the log type name:
+`TableName: LogTypeName`
+
+For example,
+
+`HDInsightSecurityLogs` is a Spark table that has two log types, AmbariAuditLog and AuthLog.
+To disable both log types, the correct syntax would be:
+`spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable`
+
+> [!NOTE]
+>* The source/log types are separated by a space.
+>* Table and its source types are separated by a colon.
+
+* Multiple tables and source types.
+Suppose there are multiple tables and source types that you need to change, for example:
+
+- The InteractiveHiveMetastoreLog log type in the HDInsightHiveAndLLAPLogs table
+- The InteractiveHiveHSILog log type in the HDInsightHiveAndLLAPLogs table
+- The HDInsightHiveAndLLAPMetrics table
+- The HDInsightHiveTezAppStats table
+
+The correct parameter syntax for such a case would be:
+
+```
+interactivehive HDInsightHiveAndLLAPLogs: InteractiveHiveMetastoreLog, HDInsightHiveAndLLAPMetrics, HDInsightHiveTezAppStats, HDInsightHiveAndLLAPLogs: InteractiveHiveHSILog --enable
+```
+
+> [!NOTE]
+>* Different tables are separated with a comma (,).
+>* Sources are denoted with a colon (:) after the name of the table in which they reside.
+
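+To verify the changes from the command line instead of the portal, one option is to query an affected table. This is a sketch, assuming the `log-analytics` CLI extension is installed and `<workspace-guid>` is the workspace ID of your Log Analytics workspace:
+
+```
+# Count records that arrived in a table during the last hour. If the table
+# was disabled, the count should stop growing after the script action runs.
+az monitor log-analytics query \
+    --workspace '<workspace-guid>' \
+    --analytics-query 'HDInsightSparkLogs | where TimeGenerated > ago(1h) | count'
+```
+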
+## Next steps
+
+* [Query Azure Monitor logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md)
+* [How to monitor cluster availability with Apache Ambari and Azure Monitor logs](./hdinsight-cluster-availability.md)
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Title: What is FHIR service?
-description: The FHIR service enables rapid exchange of data through FHIR APIs. Ingest, manage, and persist Protected Health Information PHI with a managed cloud service.
+ Title: What is the FHIR service in Azure Health Data Services?
+description: The FHIR service enables rapid exchange of health data through FHIR APIs. Ingest, manage, and persist Protected Health Information (PHI) with a managed cloud service.
Previously updated : 06/06/2022 Last updated : 08/01/2022
-# What is FHIR&reg; service?
+# What is the FHIR service in Azure Health Data Services?
-FHIR service in Azure Health Data Services (hereby called the FHIR service) enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) in the cloud:
+The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. Offered as a managed Platform-as-a-Service (PaaS) for the storage and exchange of FHIR data, the FHIR service makes it easy for anyone working with health data to securely manage Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
-- Managed FHIR service, provisioned in the cloud in minutes
-- Enterprise-grade, FHIR-based endpoint in Azure for data access, and storage in FHIR format
+The FHIR service offers the following:
+
+- Managed FHIR-compliant server, provisioned in the cloud in minutes
+- Enterprise-grade FHIR API endpoint for FHIR data access and storage
- High performance, low latency
- Secure management of Protected Health Information (PHI) in a compliant cloud environment
-- SMART on FHIR for mobile and web implementations
-- Control your own data at scale with role-based access control (RBAC)
-- Audit log tracking for access, creation, modification, and reads within each data store
+- SMART on FHIR for mobile and web clients
+- Controlled access to FHIR data at scale with Azure Active Directory-backed Role-Based Access Control (RBAC)
+- Audit log tracking for access, creation, modification, and reads within the FHIR service data store
-FHIR service allows you to create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud. The Azure services that power the FHIR service are designed for rapid performance no matter what size datasets you're managing.
+The FHIR service allows you to quickly create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
-The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
+The FHIR API provisioned in the FHIR service enables any FHIR-compliant system to securely connect and interact with FHIR data. As a PaaS offering, Microsoft takes on the operations, maintenance, update, and compliance requirements for the FHIR service so you can free up your own operational and development resources.
## Leveraging the power of your data with FHIR
-The healthcare industry is rapidly transforming health data to the emerging standard of [FHIR&reg;](https://hl7.org/fhir) (Fast Healthcare Interoperability Resources). FHIR enables a robust, extensible data model with standardized semantics and data exchange that enables all systems using FHIR to work together. Transforming your data to FHIR allows you to quickly connect existing data sources such as the electronic health record systems or research databases. FHIR also enables the rapid exchange of data in modern implementations of mobile and web development. Most importantly, FHIR can simplify data ingestion and accelerate development with analytics and machine learning tools.
+The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as the industry-wide standard for health data storage, querying, and exchange. FHIR provides a robust, extensible data model with standardized semantics that all FHIR-compliant systems can use interchangeably. With FHIR, organizations can unify disparate electronic health record systems (EHRs) and other health data repositories – allowing for all data to be persisted and exchanged in a single, universal format. With the addition of SMART on FHIR, user-facing mobile and web-based applications can securely interact with FHIR data – opening a new range of possibilities for health data access. Most of all, FHIR simplifies the process of assembling large health datasets for research – providing a path for researchers and clinicians to unlock health insights through machine learning and analytics.
### Securely manage health data in the cloud
-FHIR service allows for the exchange of data via consistent, RESTful, FHIR APIs based on the HL7 FHIR specification. Backed by a managed PaaS offering in Azure, it also provides a scalable and secure environment for the management and storage of Protected Health Information (PHI) data in the native FHIR format.
+The FHIR service in Azure Health Data Services makes FHIR data available to clients through a FHIR RESTful API – an implementation of the HL7 FHIR API specification. Provisioned as a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format.
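+
+As a brief illustration, a client with a valid Azure AD access token can query the FHIR RESTful API over HTTPS. The following is a hedged sketch: the endpoint shape `https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com` and the `$TOKEN` variable are illustrative placeholders, not values from this article:
+
+```
+# Search for up to 10 Patient resources; the FHIR service returns a FHIR Bundle as JSON.
+curl "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com/Patient?_count=10" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Accept: application/fhir+json"
+```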
### Free up your resources to innovate
-You could invest resources building and running your own FHIR server, but with FHIR service in Azure Health Data Services, Microsoft takes on the workload of operations, maintenance, updates and compliance requirements, allowing you to free up your own operational and development resources.
+You could invest resources building and running your own FHIR server, but with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components, ensuring all compliance requirements are met so you can focus on building innovative solutions.
### Enable interoperability with FHIR
-Using the FHIR service enables to you connect with any system that leverages FHIR APIs for read, write, search, and other functions. It can be used as a powerful tool to consolidate, normalize, and apply machine learning with clinical data from electronic health records, clinician and patient dashboards, remote monitoring programs, or with databases outside of your system that have FHIR APIs.
+The FHIR service enables connection with any health data system or application capable of sending FHIR API requests. Coupled with other parts of the Azure ecosystem, the FHIR service forms a link between electronic health records systems (EHRs) and Azure's powerful suite of data analytics and machine learning tools – enabling organizations to build patient- and provider-facing applications that harness the full power of the Microsoft cloud.
### Control Data Access at Scale
-You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
+With the FHIR service, you control your data – at scale. The FHIR service's Role-Based Access Control (RBAC) is rooted in Azure AD identity management, which means you can grant or deny access to health data based on the roles given to individuals in your organization. These RBAC settings for the FHIR service are configurable in Azure Health Data Services at the workspace level. This simplifies system management and guarantees your organization's PHI is safe within a HIPAA- and HITRUST-compliant environment.
### Secure your data
-Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. FHIR service implements a layered, in-depth defense and advanced threat protection for your data.
+As part of the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, your FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. On top of this, the FHIR service implements a layered, in-depth defense and advanced threat protection for your data – giving you peace of mind that your organization's PHI is guarded by Azure's industry-leading security.
## Applications for the FHIR service
-FHIR servers are key tools for interoperability of health data. The FHIR service is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where FHIR service is useful are below:
+FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are listed below:
-- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Startup App Development:** Customers developing a patient- or provider-centric app (mobile or web) can leverage the FHIR service as a fully managed backend for their health data transactions. The FHIR service enables secure transfer of PHI, and with SMART on FHIR, app developers can take advantage of the robust identity management in Azure AD for authorization of FHIR RESTful API actions.
-- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
+- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another (often because the data is stored in different formats). Utilizing the FHIR service as a conversion layer between these systems allows organizations to standardize data in the FHIR format. Ingesting and persisting in FHIR enables health data querying and exchange across multiple disparate systems.
-- **Research:** Healthcare researchers will find the FHIR standard in general and the FHIR service useful as it normalizes data around a common FHIR data model and reduces the workload for machine learning and data sharing.
-Exchange of data via the FHIR service provides audit logs and access controls that help control the flow of data and who has access to what data types.
+- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant secondary-use data before sending it to Azure machine learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
-## FHIR from Microsoft
+## FHIR platforms from Microsoft
FHIR capabilities from Microsoft are available in three configurations:
-* The FHIR service in Azure Health Data Services is a platform as a service (PaaS) offering in Azure that's easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace.
-* Azure API for FHIR - A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. This implementation only includes FHIR data and is a GA product.
-* FHIR Server for Azure – an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
+* The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
+* **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure – easily provisioned in the Azure portal. Azure API for FHIR is not part of Azure Health Data Services and lacks some of the features of the FHIR service.
+* **FHIR Server for Azure**, an open-source FHIR server that can be deployed into your Azure subscription, is available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that requires extending or customizing FHIR server or require access the underlying services – such as the database – without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose FHIR service.
+For use cases that require customizing a FHIR server, or that require access to the underlying services – such as access to the database without going through the FHIR API – developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (that is, data can be accessed only through the FHIR API, not the database directly), developers should choose the FHIR service.
## Next Steps
-To start working with the FHIR service, follow the 5-minute quickstart to deploy FHIR service.
+To start working with the FHIR service, follow the 5-minute quickstart instructions for FHIR service deployment.
>[!div class="nextstepaction"]
>[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 06/29/2022-+ # Release notes: Azure Health Data Services
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
industrial-iot Reference Command Line Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-command-line-arguments.md
Title: Microsoft OPC Publisher Command-line Arguments
-description: This article provides an overview of the OPC Publisher Command-line Arguments
+ Title: Microsoft OPC Publisher command-line arguments
+description: This article provides an overview of the OPC Publisher Command-line Arguments.
Last updated 3/22/2021
-# Command-line Arguments
-
-In the following, there are several Command-line Arguments described that can be used to set global settings for OPC Publisher.
-
-## OPC Publisher Command-line Arguments for Version 2.5 and below
-
-* Usage: opcpublisher.exe \<applicationname> [\<iothubconnectionstring>] [\<options>]
-
-* applicationname: the OPC UA application name to use, required
- The application name is also used to register the publisher under this name in the
- IoT Hub device registry.
-
-* iothubconnectionstring: the IoT Hub owner connectionstring, optional. Typically you specify the IoTHub owner connectionstring only on the first start of the application. The connection string is encrypted and stored in the platforms certificate store.
-On subsequent calls, it's read from there and reused. If you specify the connectionstring on each start, the device, which is created for the application in the IoT Hub device registry is removed and recreated each time.
-
-There are a couple of environment variables, which can be used to control the application:
-```
- _HUB_CS: sets the IoTHub owner connectionstring
- _GW_LOGP: sets the filename of the log file to use
- _TPC_SP: sets the path to store certificates of trusted stations
- _GW_PNFP: sets the filename of the publishing configuration file
-```
-
-> [!NOTE]
-> Command-line Arguments overrule environment variable settings.
-
-```
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- Default: '/appdata/publishednodes.json'
- --tc, --telemetryconfigfile=VALUE
- the filename to configure the ingested telemetry
- Default: ''
- -s, --site=VALUE
- the site OPC Publisher is working in. if specified this domain is appended (delimited by a ':' to the 'ApplicationURI' property when telemetry is sent to IoTHub.
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --ic, --iotcentral
- OPC Publisher sends OPC UA data in IoTCentral
- compatible format (DisplayName of a node is used
- as key, this key is the Field name in IoTCentral)
- . you need to ensure that all DisplayName's are
- unique. (Auto enables fetch display name)
- Default: False
- --sw, --sessionconnectwait=VALUE
- specify the wait time in seconds publisher is
- trying to connect to disconnected endpoints and
- starts monitoring unmonitored items
- Min: 10
- Default: 10
-
- --mq, --monitoreditemqueuecapacity=VALUE
- specify how many notifications of monitored items
- can be stored in the internal queue, if the data
- can not be sent quick enough to IoTHub
- Min: 1024
- Default: 8192
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified
- interval in seconds (need log level info).
- -1 disables remote diagnostic log and diagnostic
- output
- 0 disables diagnostic output
- Default: 0
- --ns, --noshutdown=VALUE
- same as runforever.
- Default: False
- --rf, --runforever
- OPC Publisher can not be stopped by pressing a key on
- the console, but runs forever.
- Default: False
- --lf, --logfile=VALUE
- the filename of the logfile to use.
- Default: './<hostname>-publisher.log'
- --lt, --logflushtimespan=VALUE
- the timespan in seconds when the logfile should be
- flushed.
- Default: 00:00:30 sec
- --ll, --loglevel=VALUE
- the loglevel to use (allowed: fatal, error, warn,
- info, debug, verbose).
- Default: info
- --ih, --iothubprotocol=VALUE
- the protocol to use for communication with IoTHub (allowed values: Amqp, Http1, Amqp_WebSocket_Only,
- Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
- Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
- Tcp_Only, Amqp_Tcp_Only).
- Default for IoTHub: Mqtt_WebSocket_Only
- Default for IoT EdgeHub: Amqp_Tcp_Only
- --ms, --iothubmessagesize=VALUE
- the max size of a message which can be sent to
- IoTHub. When telemetry of this size is available
- it is sent.
- 0 enforces immediate send when telemetry is
- available
- Min: 0
- Max: 262144
- Default: 262144
- --si, --iothubsendinterval=VALUE
- the interval in seconds when telemetry should be
- sent to IoTHub. If 0, then only the
- iothubmessagesize parameter controls when
- telemetry is sent.
- Default: '10'
- --dc, --deviceconnectionstring=VALUE
- if publisher is not able to register itself with
- IoTHub, you can create a device with name <
- applicationname> manually and pass in the
- connectionstring of this device.
- Default: none
- -c, --connectionstring=VALUE
- the IoTHub owner connectionstring.
- Default: none
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in
- seconds for the heartbeat interval setting of
- nodes without
- a heartbeat interval setting.
- Default: 0
- --sf, --skipfirstevent=VALUE
- the publisher is using this as default value for
- the skip first event setting of nodes without
- a skip first event setting.
- Default: False
- --pn, --portnum=VALUE
- the server port of the publisher OPC server
- endpoint.
- Default: 62222
- --pa, --path=VALUE
- the enpoint URL path part of the publisher OPC
- server endpoint.
- Default: '/UA/Publisher'
- --lr, --ldsreginterval=VALUE
- the LDS(-ME) registration interval in ms. If 0,
- then the registration is disabled.
- Default: 0
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/
- receive.
- Default: 131072
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA
- client in ms.
- Default: 120000
- --oi, --opcsamplinginterval=VALUE
- the publisher is using this as default value in
- milliseconds to request the servers to sample
- the nodes with this interval
- this value might be revised by the OPC UA
- servers to a supported sampling interval.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a negative value sets the sampling interval
- to the publishing interval of the subscription
- this node is on.
- 0 configures the OPC UA server to sample in
- the highest possible resolution and should be
- taken with care.
- Default: 1000
- --op, --opcpublishinginterval=VALUE
- the publisher is using this as default value in
- milliseconds for the publishing interval setting
- of the subscriptions established to the OPC UA
- servers.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a value less than or equal zero lets the
- server revise the publishing interval.
- Default: 0
- --ct, --createsessiontimeout=VALUE
- specify the timeout in seconds used when creating
- a session to an endpoint. On unsuccessful
- connection attemps a backoff up to 5 times the
- specified timeout value is used.
- Min: 1
- Default: 10
- --ki, --keepaliveinterval=VALUE
- specify the interval in seconds the publisher is
- sending keep alive messages to the OPC servers
- on the endpoints it is connected to.
- Min: 2
- Default: 2
- --kt, --keepalivethreshold=VALUE
- specify the number of keep alive packets a server
- can miss, before the session is disconneced
- Min: 1
- Default: 5
- --aa, --autoaccept
- the OPC Publisher trusts all servers it is
- establishing a connection to.
- Default: False
- --tm, --trustmyself=VALUE
- same as trustowncert.
- Default: False
- --to, --trustowncert
- the OPC Publisher certificate is put into the trusted
- certificate store automatically.
- Default: False
- --fd, --fetchdisplayname=VALUE
- same as fetchname.
- Default: False
- --fn, --fetchname
- enable to read the display name of a published
- node from the server. this increases the
- runtime.
- Default: False
- --ss, --suppressedopcstatuscodes=VALUE
- specifies the OPC UA status codes for which no
- events should be generated.
- Default: BadNoCommunication,
- BadWaitingForInitialData
- --at, --appcertstoretype=VALUE
- the own application cert store type.
- (allowed values: Directory, X509Store)
- Default: 'Directory'
- --ap, --appcertstorepath=VALUE
- the path where the own application cert should be
- stored
- Default (depends on store type):
- X509Store: 'CurrentUser\UA_MachineDefault'
- Directory: 'pki/own'
- --tp, --trustedcertstorepath=VALUE
- the path of the trusted cert store
- Default: 'pki/trusted'
- --rp, --rejectedcertstorepath=VALUE
- the path of the rejected cert store
- Default 'pki/rejected'
- --ip, --issuercertstorepath=VALUE
- the path of the trusted issuer cert store
- Default 'pki/issuer'
- --csr
- show data to create a certificate signing request
- Default 'False'
- --ab, --applicationcertbase64=VALUE
- update/set this applications certificate with the
- certificate passed in as bas64 string
- --af, --applicationcertfile=VALUE
- update/set this applications certificate with the
- certificate file specified
- --pb, --privatekeybase64=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as base64 string
- --pk, --privatekeyfile=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as file
- --cp, --certpassword=VALUE
- the optional password for the PEM or PFX or the
- installed application certificate
- --tb, --addtrustedcertbase64=VALUE
- adds the certificate to the applications trusted
- cert store passed in as base64 string (multiple
- comma-separated strings supported)
- --tf, --addtrustedcertfile=VALUE
- adds the certificate file(s) to the applications
- trusted cert store passed in as base64 string (
- multiple comma-separated filenames supported)
- --ib, --addissuercertbase64=VALUE
- adds the specified issuer certificate to the
- applications trusted issuer cert store passed in
- as base64 string (multiple comma-separated strings supported)
- --if, --addissuercertfile=VALUE
- adds the specified issuer certificate file(s) to
- the applications trusted issuer cert store (
- multiple comma-separated filenames supported)
- --rb, --updatecrlbase64=VALUE
- update the CRL passed in as base64 string to the
- corresponding cert store (trusted or trusted
- issuer)
- --uc, --updatecrlfile=VALUE
- update the CRL passed in as file to the
- corresponding cert store (trusted or trusted
- issuer)
- --rc, --removecert=VALUE
- remove cert(s) with the given thumbprint(s) (
- multiple comma-separated thumbprints supported)
- --dt, --devicecertstoretype=VALUE
- the iothub device cert store type.
- (allowed values: Directory, X509Store)
- Default: X509Store
- --dp, --devicecertstorepath=VALUE
- the path of the iot device cert store
- Default Default (depends on store type):
- X509Store: 'My'
- Directory: 'CertificateStores/IoTHub'
- -i, --install
- register OPC Publisher with IoTHub and then exits.
- Default: False
- -h, --help
- show this message and exit
- --st, --opcstacktracemask=VALUE
- ignored.
- --sd, --shopfloordomain=VALUE
- same as site option
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --vc, --verboseconsole=VALUE
- ignored.
- --as, --autotrustservercerts=VALUE
- same as autoaccept
- Default: False
- --tt, --trustedcertstoretype=VALUE
- ignored.
- the trusted cert store always resides in a
- directory.
- --rt, --rejectedcertstoretype=VALUE
- ignored.
- the rejected cert store always resides in a
- directory.
- --it, --issuercertstoretype=VALUE
- ignored.
- the trusted issuer cert store always
- resides in a directory.
-```
--
-## OPC Publisher Command-line Arguments for Version 2.6 and above
-```
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- If this Option is specified it puts OPC Publisher into stadalone mode.
- --lf, --logfile=VALUE
- the filename of the logfile to use.
- --ll. --loglevel=VALUE
- the log level to use (allowed: fatal, error,
- warn, info, debug, verbose).
- --me, --messageencoding=VALUE
- the messaging encoding for outgoing messages
- allowed values: Json, Uadp
- --mm, --messagingmode=VALUE
- the messaging mode for outgoing messages
- allowed values: PubSub, Samples
- --fm, --fullfeaturedmessage=VALUE
- the full featured mode for messages (all fields filled in).
- Default is 'true', for legacy compatibility use 'false'
- --aa, --autoaccept
- the publisher trusted all servers it is establishing a connection to
- --bs, --batchsize=VALUE
- the number of OPC UA data-change messages to be cached for batching.
- --si, --iothubsendinterval=VALUE
- the trigger batching interval in seconds.
- --ms, --iothubmessagesize=VALUE
- the maximum size of the (IoT D2C) message.
- --om, --maxoutgressmessages=VALUE
- the maximum size of the (IoT D2C) message egress buffer.
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified interval in seconds
- (need log level info). -1 disables remote diagnostic log and diagnostic output
- --lt, --logflugtimespan=VALUE
- the timespan in seconds when the logfile should be flushed.
- --ih, --iothubprotocol=VALUE
- protocol to use for communication with the hub.
- allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp,
- MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in seconds for the
- heartbeat interval setting of nodes without a heartbeat interval setting.
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA client in ms.
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/receive.
- --oi, --opcsamplinginterval=VALUE
- default value in milliseconds to request the servers to sample values
- --op, --opcpublishinginterval=VALUE
- default value in milliseconds for the publishing interval setting
- of the subscriptions against the OPC UA server.
- --ct, --createsessiontimeout=VALUE
- the interval in seconds the publisher is sending keep alive
- messages to the OPC servers on the endpoints it is connected to.
- --kt, --keepalivethresholt=VALUE
- specify the number of keep alive packets a server can miss,
- before the session is disconnected.
- --tm, --trustmyself
- the publisher certificate is put into the trusted store automatically.
- --at, --appcertstoretype=VALUE
- the own application cert store type (allowed: Directory, X509Store).
-```
-
-## OPC Publisher Command-line Arguments for Version 2.8.2 and above
-
-The following OPC Publisher configuration can be applied by Command Line Interface (CLI) options or as environment variable settings.
-The `Alternative` field, where present, refers to the CLI argument applicable in **standalone mode only**. When both environment variable and CLI argument are provided, the latest will overrule the env variable.
-```
- PublishedNodesFile=VALUE
- The file used to store the configuration of the nodes to be published
- along with the information to connect to the OPC UA server sources
- When this file is specified, or the default file is accessible by
- the module, OPC Publisher will start in standalone mode
- Alternative: --pf, --publishfile
- Mode: Standalone only
- Type: string - file name, optionally prefixed with the path
- Default: publishednodes.json
-
- site=VALUE
- The site OPC Publisher is assigned to
- Alternative: --s, --site
- Mode: Standalone and Orchestrated
- Type: string
- Default: <not set>
-
- LogFileName==VALUE
- The filename of the logfile to use
- Alternative: --lf, --logfile
- Mode: Standalone only
- Type: string - file name, optionally prefixed with the path
- Default: <not set>
-
- LogFileFlushTimeSpan=VALUE
- The time span in seconds when the logfile should be flushed in the storage
- Alternative: --lt, --logflushtimespan
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:30}
-
- loglevel=Value
- The level for logs to pe persisted in the logfile
- Alternative: --ll --loglevel
- Mode: Standalone only
- Type: string enum - Fatal, Error, Warning, Information, Debug, Verbose
- Default: info
-
- EdgeHubConnectionString=VALUE
- An IoT Edge Device or IoT Edge module connection string to use,
- when deployed as module in IoT Edge, the environment variable
- is already set as part of the container deployment
- Alternative: --dc, --deviceconnectionstring
- --ec, --edgehubconnectionstring
- Mode: Standalone and Orchestrated
- Type: connection string
- Default: <not set> <set by iotedge runtime>
-
- Transport=VALUE
- Protocol to use for upstream communication to edgeHub or IoTHub
- Alternative: --ih, --iothubprotocol
- Mode: Standalone and Orchestrated
- Type: string enum: Any, Amqp, Mqtt, AmqpOverTcp, AmqpOverWebsocket,
- MqttOverTcp, MqttOverWebsocket, Tcp, Websocket.
- Default: MqttOverTcp
-
- BypassCertVerification=VALUE
- Enables/disables bypass of certificate verification for upstream communication to edgeHub
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- EnableMetrics=VALUE
- Enables/disables upstream metrics propagation
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: true
-
- DefaultPublishingInterval=VALUE
- Default value for the OPC UA publishing interval of OPC UA subscriptions
- created to an OPC UA server. This value is used when no explicit setting
- is configured.
- Alternative: --op, --opcpublishinginterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in milliseconds
- Default: {00:00:01} (1000)
-
- DefaultSamplingInterval=VALUE
- Default value for the OPC UA sampling interval of nodes to publish.
- This value is used when no explicit setting is configured.
- Alternative: --oi, --opcsamplinginterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in milliseconds
- Default: {00:00:01} (1000)
-
- DefaultQueueSize=VALUE
- Default setting value for the monitored item's queue size to be used when
- not explicitly specified in pn.json file
- Alternative: --mq, --monitoreditemqueuecapacity
- Mode: Standalone only
- Type: integer
- Default: 1
-
- DefaultHeartbeatInterval=VALUE
- Default value for the heartbeat interval setting of published nodes
- having no explicit setting for heartbeat interval.
- Alternative: --hb, --heartbeatinterval
- Mode: Standalone
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:00} meaning heartbeat is disabled
-
- MessageEncoding=VALUE
- The messaging encoding for outgoing telemetry.
- Alternative: --me, --messageencoding
- Mode: Standalone only
- Type: string enum - Json, Uadp
- Default: Json
-
- MessagingMode=VALUE
- The messaging mode for outgoing telemetry.
- Alternative: --mm, --messagingmode
- Mode: Standalone only
- Type: string enum - PubSub, Samples
- Default: Samples
-
- FetchOpcNodeDisplayName=VALUE
- Fetches the DisplayName for the nodes to be published from
- the OPC UA Server when not explicitly set in the configuration.
- Note: This has high impact on OPC Publisher startup performance.
- Alternative: --fd, --fetchdisplayname
- Mode: Standalone only
- Type: boolean
- Default: false
-
- FullFeaturedMessage=VALUE
- The full featured mode for messages (all fields filled in the telemetry).
- Default is 'false' for legacy compatibility.
- Alternative: --fm, --fullfeaturedmessage
- Mode: Standalone only
- Type:boolean
- Default: false
-
- BatchSize=VALUE
- The number of incoming OPC UA data change messages to be cached for batching.
- When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
- Alternative: --bs, --batchsize
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 50
-
- BatchTriggerInterval=VALUE
- The batching trigger interval.
- When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
- Alternative: --si, --iothubsendinterval
- Mode: Standalone and Orchestrated
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:10}
-
- IoTHubMaxMessageSize=VALUE
- The maximum size of the (IoT D2C) telemetry message.
- Alternative: --ms, --iothubmessagesize
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0
-
- DiagnosticsInterval=VALUE
- Shows publisher diagnostic info at the specified interval in seconds
- (need log level info). -1 disables remote diagnostic log and
- diagnostic output
- Alternative: --di, --diagnosticsinterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:60}
-
- LegacyCompatibility=VALUE
- Forces the Publisher to operate in 2.5 legacy mode, using
- `"application/opcua+uajson"` for `ContentType` on the IoT Hub
- Telemetry message.
- Alternative: --lc, --legacycompatibility
- Mode: Standalone only
- Type: boolean
- Default: false
-
- PublishedNodesSchemaFile=VALUE
- The validation schema filename for published nodes file.
- Alternative: --pfs, --publishfileschema
- Mode: Standalone only
- Type: string
- Default: <not set>
-
- MaxNodesPerDataSet=VALUE
- Maximum number of nodes within a DataSet/Subscription.
- When more nodes than this value are configured for a
- DataSetWriter, they will be added in a separate DataSet/Subscription.
- Alternative: N/A
- Mode: Standalone only
- Type: integer
- Default: 1000
-
- ApplicationName=VALUE
- OPC UA Client Application Config - Application name as per
- OPC UA definition. This is used for authentication during communication
- init handshake and as part of own certificate validation.
- Alternative: --an, --appname
- Mode: Standalone and Orchestrated
- Type: string
- Default: "Microsoft.Azure.IIoT"
-
- ApplicationUri=VALUE
- OPC UA Client Application Config - Application URI as per
- OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"urn:localhost:{ApplicationName}:microsoft:"
-
- ProductUri=VALUE
- OPC UA Client Application Config - Product URI as per
- OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: "https://www.github.com/Azure/Industrial-IoT"
-
- DefaultSessionTimeout=VALUE
- OPC UA Client Application Config - Session timeout in seconds
- as per OPC UA definition.
- Alternative: --ct --createsessiontimeout
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0, meaning <not set>
-
- MinSubscriptionLifetime=VALUE
- OPC UA Client Application Config - Minimum subscription lifetime in seconds
- as per OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0, <not set>
-
- KeepAliveInterval=VALUE
- OPC UA Client Application Config - Keep alive interval in seconds
- as per OPC UA definition.
- Alternative: --ki, --keepaliveinterval
- Mode: Standalone and Orchestrated
- Type: integer milliseconds
- Default: 10,000 (10s)
-
- MaxKeepAliveCount=VALUE
- OPC UA Client Application Config - Maximum count of keep alive events
- as per OPC UA definition.
- Alternative: --kt, --keepalivethreshold
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 50
-
- PkiRootPath=VALUE
- OPC UA Client Security Config - PKI certificate store root path
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: "pki"
-
- ApplicationCertificateStorePath=VALUE
- OPC UA Client Security Config - application's
- own certificate store path
- Alternative: --ap, --appcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/own"
-
- ApplicationCertificateStoreType=VALUE
- OPC UA Client Security Config - application's
- own certificate store type
- Alternative: --at, --appcertstoretype
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- ApplicationCertificateSubjectName=VALUE
- OPC UA Client Security Config - the subject name
- in the application's own certificate
- Alternative: --sn, --appcertsubjectname
- Mode: Standalone and Orchestrated
- Type: string
- Default: "CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"
-
- TrustedIssuerCertificatesPath=VALUE
- OPC UA Client Security Config - trusted certificate issuer
- store path
- Alternative: --ip, --issuercertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/issuers"
-
- TrustedIssuerCertificatesType=VALUE
- OPC UA Client Security Config - trusted issuer certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- TrustedPeerCertificatesPath=VALUE
- OPC UA Client Security Config - trusted peer certificates
- store path
- Alternative: --tp, --trustedcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/trusted"
-
- TrustedPeerCertificatesType=VALUE
- OPC UA Client Security Config - trusted peer certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- RejectedCertificateStorePath=VALUE
- OPC UA Client Security Config - rejected certificates
- store path
- Alternative: --rp, --rejectedcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/rejected"
-
- RejectedCertificateStoreType=VALUE
- OPC UA Client Security Config - rejected certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- AutoAcceptUntrustedCertificates=VALUE
- OPC UA Client Security Config - auto accept untrusted
- peer certificates
- Alternative: --aa, --autoaccept
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- RejectSha1SignedCertificates=VALUE
- OPC UA Client Security Config - reject deprecated Sha1
- signed certificates
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- MinimumCertificateKeySize=VALUE
- OPC UA Client Security Config - minimum accepted
- certificates key size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 1024
-
- AddAppCertToTrustedStore=VALUE
- OPC UA Client Security Config - automatically copy own
- certificate's public key to the trusted certificate store
- Alternative: --tm, --trustmyself
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: true
-
- SecurityTokenLifetime=VALUE
- OPC UA Stack Transport Secure Channel - Security token lifetime in milliseconds
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 3,600,000 (1h)
-
- ChannelLifetime=VALUE
- OPC UA Stack Transport Secure Channel - Channel lifetime in milliseconds
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 300,000 (5 min)
-
- MaxBufferSize=VALUE
- OPC UA Stack Transport Secure Channel - Max buffer size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 65,535 (64KB -1)
-
- MaxMessageSize=VALUE
- OPC UA Stack Transport Secure Channel - Max message size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 4,194,304 (4 MB)
-
- MaxArrayLength=VALUE
- OPC UA Stack Transport Secure Channel - Max array length
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 65,535 (64KB - 1)
-
- MaxByteStringLength=VALUE
- OPC UA Stack Transport Secure Channel - Max byte string length
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 1,048,576 (1MB);
-
- OperationTimeout=VALUE
- OPC UA Stack Transport Secure Channel - OPC UA Service call
- operation timeout
- Alternative: --ot, --operationtimeout
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 120,000 (2 min)
-
- MaxStringLength=VALUE
- OPC UA Stack Transport Secure Channel - Maximum length of a string
- that can be send/received over the OPC UA Secure channel
- Alternative: --ol, --opcmaxstringlen
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 130,816 (128KB - 256)
-
- RuntimeStateReporting=VALUE
- Enables reporting of OPC Publisher restarts.
- Alternative: --rs, --runtimestatereporting
- Mode: Standalone
- Type: boolean
- Default: false
-
- EnableRoutingInfo=VALUE
- Adds the routing info to telemetry messages. The name of the property is
- `$$RoutingInfo` and the value is the `DataSetWriterGroup` for that particular message.
- When the `DataSetWriterGroup` is not configured, the `$$RoutingInfo` property will
- not be added to the message even if this argument is set.
- Alternative: --ri, --enableroutinginfo
- Mode: Standalone
- Type: boolean
- Default: false
-```
+# OPC Publisher command-line arguments
+
+This article describes the command-line arguments that you can use to set global settings for Open Platform Communications (OPC) Publisher.
+
+## Command-line arguments for version 2.5 and earlier
+
+* **Usage**: opcpublisher.exe \<applicationname> [\<iothubconnectionstring>] [\<options>]
+
+* **applicationname**: (Required) The OPC Unified Architecture (OPC UA) application name to use.
+
+ You also use the application name to register the publisher in the IoT hub device registry.
+
+* **iothubconnectionstring**: (Optional) The IoT hub owner connection string.
+
+   You ordinarily specify the connection string only when you start the application for the first time. The connection string is encrypted and stored in the platform's certificate store.
+
+   On subsequent calls, the connection string is read from the platform's certificate store and reused. If you specify the connection string on each start, the device that was created for the application in the IoT hub device registry is removed and re-created each time.
+
+To control the application, you can use any of several environment variables:
+
+* `_HUB_CS`: Sets the IoT hub owner connection string
+* `_GW_LOGP`: Sets the file name of the log file to use
+* `_TPC_SP`: Sets the path to store certificates of trusted stations
+* `_GW_PNFP`: Sets the file name of the publishing configuration file
+
+> [!NOTE]
+> Command-line arguments overrule environment variable settings.
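+
+For example, the IoT hub owner connection string can be supplied either way on first start. The following is an illustrative sketch; the application name `publisher1` and the connection-string value are placeholders:
+
+```
+# Via the environment variable (bash syntax shown):
+export _HUB_CS="HostName=myhub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>"
+opcpublisher.exe publisher1
+
+# Or on the command line, which overrules the environment variable:
+opcpublisher.exe publisher1 "HostName=myhub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>"
+```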
+
+| Argument | Description |
+| | |
+| `--pf, --publishfile=VALUE` | The file name to use to configure the nodes to publish.<br>Default: `/appdata/publishednodes.json` |
+| `--tc, --telemetryconfigfile=VALUE` | The file name to use to configure the ingested telemetry.<br>Default: '' |
+| `-s, --site=VALUE` | The site that OPC Publisher is working in. If it's specified, this domain is appended (delimited by a `:`) to the `ApplicationURI` property when telemetry is sent to the IoT hub. The value must follow the syntactical rules of a DNS hostname.<br>Default: \<not set> |
+| `--ic, --iotcentral` | OPC Publisher sends OPC UA data in an Azure IoT Central-compatible format (`DisplayName` of a node is used as a key, which is the field name in Azure IoT Central). Ensure that all `DisplayName` values are unique (automatically enables the fetch display name).<br>Default: `false` |
+| `--sw, --sessionconnectwait=VALUE` | Specifies the wait time, in seconds, during which the publisher tries to connect to disconnected endpoints and starts monitoring unmonitored items.<br>Minimum: `10`<br>Default: `10` |
+| `--mq, --monitoreditemqueuecapacity=VALUE` | Specifies how many notifications of monitored items can be stored in the internal queue if the data can't be sent quickly enough to the IoT hub.<br>Minimum: `1024`<br>Default: `8192` |
+| `--di, --diagnosticsinterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (need log level information). `-1` disables the remote diagnostics log and diagnostics output. `0` disables the diagnostics output.<br>Default: `0` |
+| `--ns, --noshutdown=VALUE` | Same as `runforever`.<br>Default: `false` |
+| `--rf, --runforever` | You can't stop OPC Publisher by pressing a key on the console. It runs forever.<br>Default: `false` |
+| `--lf, --logfile=VALUE` | The file name of the log file to use.<br>Default: `<hostname>-publisher.log` |
+| `--lt, --logflushtimespan=VALUE` | The timespan, in seconds, when the log file should be flushed.<br>Default: `00:00:30` |
+| `--ll, --loglevel=VALUE` | The log level to use. Allowed: `fatal`, `error`, `warn`, `info`, `debug`, `verbose`.<br>Default: `info` |
+| `--ih, --iothubprotocol=VALUE` | The protocol to use for communication with the IoT hub (allowed values: `Amqp`, `Http1`, `Amqp_WebSocket_Only`, `Amqp_Tcp_Only`, `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`) or the Azure IoT Edge hub (allowed values: `Mqtt_Tcp_Only`, `Amqp_Tcp_Only`).<br>Default for the IoT hub: `Mqtt_WebSocket_Only`<br>Default for the IoT Edge hub: `Amqp_Tcp_Only` |
+| `--ms, --iothubmessagesize=VALUE` | The maximum size of a message that can be sent to the IoT hub. When telemetry of this size is available, it is sent. `0` enforces immediate send when telemetry is available.<br>Minimum: `0`<br>Maximum: `262144`<br>Default: `262144` |
+| `--si, --iothubsendinterval=VALUE` | The interval, in seconds, when telemetry should be sent to the IoT hub. If the interval is `0`, only the `iothubmessagesize` parameter controls when telemetry is sent.<br>Default: `10` |
+| `--dc, --deviceconnectionstring=VALUE` | If OPC Publisher can't register itself with the IoT hub, you can create a device with the name `<applicationname>` manually and pass in the connection string of this device.<br>Default: none |
+| `-c, --connectionstring=VALUE` | The IoT hub owner connection string.<br>Default: none |
+| `--hb, --heartbeatinterval=VALUE` | OPC Publisher uses this as a default value, in seconds, for the heartbeat interval setting of nodes without a heartbeat interval setting.<br>Default: `0` |
+| `--sf, --skipfirstevent=VALUE` | OPC Publisher uses this as default value for the `skipfirstevent` setting of nodes without a `skipfirstevent` setting.<br>Default: `false` |
+| `--pn, --portnum=VALUE` | The server port of the publisher OPC server endpoint.<br>Default: `62222` |
+| `--pa, --path=VALUE` | The endpoint URL path part of the publisher OPC server endpoint.<br>Default: `/UA/Publisher` |
+| `--lr, --ldsreginterval=VALUE` | The LDS(-ME) registration interval, in milliseconds (ms). If `0`, the registration is disabled.<br>Default: `0` |
+| `--ol, --opcmaxstringlen=VALUE` | The maximum string length that OPC can transmit or receive.<br>Default: `131072` |
+| `--ot, --operationtimeout=VALUE` | The operation time-out of the publisher OPC UA client, in milliseconds.<br>Default: `120000` |
+| `--oi, --opcsamplinginterval=VALUE` | OPC Publisher uses this as default value, in milliseconds, to request the servers to sample the nodes with this interval. This value might be revised by the OPC UA servers to a supported sampling interval. Check the OPC UA specification for details about how this is handled by the OPC UA stack.<br>A negative value sets the sampling interval to the publishing interval of the subscription this node is on.<br>`0` configures the OPC UA server to sample in the highest possible resolution and should be used with care.<br>Default: `1000` |
+| `--op, --opcpublishinginterval=VALUE` | OPC Publisher uses this as default value, in milliseconds, for the publishing interval setting of the subscriptions established to the OPC UA servers. Check the OPC UA specification for details about how this is handled by the OPC UA stack.<br>A value less than or equal to `0` lets the server revise the publishing interval.<br>Default: `0` |
+| `--ct, --createsessiontimeout=VALUE` | Specifies the time-out, in seconds, that's used when you create a session to an endpoint. On unsuccessful connection attempts, a backoff of up to five times the specified time-out value is used.<br>Minimum: `1`<br>Default: `10` |
+| `--ki, --keepaliveinterval=VALUE` | Specifies the interval, in seconds, that the publisher sends keep-alive messages to the OPC servers on the endpoints that it's connected to.<br>Minimum: `2`<br>Default: `2` |
+| `--kt, --keepalivethreshold=VALUE` | Specifies the number of keep-alive packets that a server can miss before the session is disconnected.<br>Minimum: `1`<br>Default: `5` |
+| `--aa, --autoaccept` | OPC Publisher trusts all servers that it establishes a connection to.<br>Default: `false` |
+| `--tm, --trustmyself=VALUE` | Same as `trustowncert`.<br>Default: `false` |
+| `--to, --trustowncert` | The OPC Publisher certificate is put into the trusted certificate store automatically.<br>Default: `false` |
+| `--fd, --fetchdisplayname=VALUE` | Same as `fetchname`.<br>Default: `false` |
+| `--fn, --fetchname` | Enable reading the display name of a published node from the server. This setting increases the run time.<br>Default: `false` |
+| `--ss, --suppressedopcstatuscodes=VALUE` | Specifies the OPC UA status codes for which no events should be generated.<br>Default: `BadNoCommunication`, `BadWaitingForInitialData` |
+| `--at, --appcertstoretype=VALUE` | The owned application certificate store type.<br>Allowed values: `Directory`, `X509Store`<br>Default: `Directory` |
+| `--ap, --appcertstorepath=VALUE` | The path where the owned application certificate should be stored.<br>Default (depends on store type):<br>X509Store: `CurrentUser\UA_MachineDefault`<br>Directory: `pki/own` |
+| `--tp, --trustedcertstorepath=VALUE` | The path of the trusted certificate store.<br>Default: `pki/trusted` |
+| `--rp, --rejectedcertstorepath=VALUE` | The path of the rejected certificate store.<br>Default: `pki/rejected` |
+| `--ip, --issuercertstorepath=VALUE` | The path of the trusted issuer certificate store.<br>Default: `pki/issuer` |
+| `--csr` | Shows data to create a certificate signing request.<br>Default: `false` |
+| `--ab, --applicationcertbase64=VALUE` | Updates or sets this application's certificate with the certificate that's passed in as a Base64 string. |
+| `--af, --applicationcertfile=VALUE` | Updates or sets this application's certificate with the specified certificate file. |
+| `--pb, --privatekeybase64=VALUE` | Initially provisions the application certificate (in PEM or PFX format). Requires a private key, which is passed in as a Base64 string. |
+| `--pk, --privatekeyfile=VALUE` | Initially provisions the application certificate (in PEM or PFX format). Requires a private key, which is passed in as a file. |
+| `--cp, --certpassword=VALUE` | The optional password for the PEM or PFX of the installed application certificate. |
+| `--tb, --addtrustedcertbase64=VALUE` | Adds the certificate to the application's trusted certificate store, passed in as a Base64 string (multiple comma-separated strings supported). |
+| `--tf, --addtrustedcertfile=VALUE` | Adds the certificate file to the application's trusted certificate store, passed in as a Base64 string (multiple comma-separated file names supported). |
+| `--ib, --addissuercertbase64=VALUE` | Adds the specified issuer certificate to the application's trusted issuer certificate store, passed in as a Base64 string (multiple comma-separated strings supported). |
+| `--if, --addissuercertfile=VALUE` | Adds the specified issuer certificate file to the application's trusted issuer certificate store (multiple comma-separated file names supported). |
+| `--rb, --updatecrlbase64=VALUE` | Updates the certificate revocation list (CRL), passed in as a Base64 string to the corresponding certificate store (trusted or trusted issuer). |
+| `--uc, --updatecrlfile=VALUE` | Updates the CRL, passed in as a file to the corresponding certificate store (trusted or trusted issuer). |
+| `--rc, --removecert=VALUE` | Removes certificates with the specified thumbprints (multiple comma-separated thumbprints supported). |
+| `--dt, --devicecertstoretype=VALUE` | The IoT hub device certificate store type.<br>Allowed values: `Directory`, `X509Store`<br>Default: `X509Store` |
+| `--dp, --devicecertstorepath=VALUE` | The path of the IoT device certificate store.<br>Default (depends on store type):<br>X509Store: `My`<br>Directory: `CertificateStores/IoTHub` |
+| `-i, --install` | Registers OPC Publisher with the IoT hub and then exits.<br>Default: `false` |
+| `-h, --help` | Shows this message and exits. |
+| `--st, --opcstacktracemask=VALUE` | Ignored. |
+| `--sd, --shopfloordomain=VALUE` | Same as the site option. The value must follow the syntactical rules of a DNS hostname.<br>Default: \<not set> |
+| `--vc, --verboseconsole=VALUE` | Ignored. |
+| `--as, --autotrustservercerts=VALUE` | Same as `--aa, --autoaccept`.<br>Default: `false` |
+| `--tt, --trustedcertstoretype=VALUE` | Ignored. The trusted certificate store always resides in a directory. |
+| `--rt, --rejectedcertstoretype=VALUE` | Ignored. The rejected certificate store always resides in a directory. |
+| `--it, --issuercertstoretype=VALUE` | Ignored. The trusted issuer certificate store always resides in a directory. |
+
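+For orientation, here's a hedged sketch that combines several of the certificate options above. The `<opc-publisher>` launch command, file paths, and password are placeholders, not values from this reference:
+
+```bash
+# Provision the application certificate from a PFX file and trust two
+# server certificates. All names and paths here are illustrative only.
+<opc-publisher> \
+  --pk=/certs/publisher-key.pfx \
+  --cp="<pfx-password>" \
+  --tf=/certs/server1.der,/certs/server2.der
+```
+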
+## Command-line arguments for version 2.6 and later
+
+| Argument | Description |
+| | |
+| `--pf, --publishfile=VALUE` | The file name to configure the nodes to publish. If this option is specified, it puts OPC Publisher into *standalone* mode. |
+| `--lf, --logfile=VALUE` | The file name of the log file to use. |
+| `--ll, --loglevel=VALUE` | The log level to use. Allowed values: `fatal`, `error`, `warn`, `info`, `debug`, `verbose`. |
+| `--me, --messageencoding=VALUE` | The messaging encoding for outgoing messages. Allowed values: `Json`, `Uadp`. |
+| `--mm, --messagingmode=VALUE` | The messaging mode for outgoing messages. Allowed values: `PubSub`, `Samples`. |
+| `--fm, --fullfeaturedmessage=VALUE` | The full-featured mode for messages (all fields filled in).<br>Default is `true`. For legacy compatibility, use `false`. |
+| `--aa, --autoaccept` | OPC Publisher trusts all servers that it establishes a connection to. |
+| `--bs, --batchsize=VALUE` | The number of OPC UA data-change messages to be cached for batching. |
+| `--si, --iothubsendinterval=VALUE` | The trigger batching interval, in seconds. |
+| `--ms, --iothubmessagesize=VALUE` | The maximum size of the IoT D2C message. |
+| `--om, --maxoutgressmessages=VALUE` | The maximum size of the IoT D2C message egress buffer. |
+| `--di, --diagnosticsinterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (requires at least log level `info`). `-1` disables the remote diagnostics log and diagnostics output. |
+| `--lt, --logflushtimespan=VALUE` | The timespan, in seconds, when the log file should be flushed. |
+| `--ih, --iothubprotocol=VALUE` | The protocol to use for communication with the hub. Allowed values: `AmqpOverTcp`, `AmqpOverWebsocket`, `MqttOverTcp`, `MqttOverWebsocket`, `Amqp`, `Mqtt`, `Tcp`, `Websocket`, `Any`. |
+| `--hb, --heartbeatinterval=VALUE` | The default heartbeat interval, in seconds, that OPC Publisher applies to nodes that don't have their own heartbeat interval setting. |
+| `--ot, --operationtimeout=VALUE` | The operation time-out of the publisher OPC UA client, in milliseconds (ms). |
+| `--ol, --opcmaxstringlen=VALUE` | The maximum length of a string that OPC Publisher can transmit or receive. |
+| `--oi, --opcsamplinginterval=VALUE` | The default value, in milliseconds, to request the servers to sample values. |
+| `--op, --opcpublishinginterval=VALUE` | The default value, in milliseconds, for the publishing interval setting of the subscriptions against the OPC UA server. |
+| `--ct, --createsessiontimeout=VALUE` | The time-out, in seconds, that the publisher OPC UA client uses when it creates sessions to OPC UA server endpoints. |
+| `--kt, --keepalivethreshold=VALUE` | Specifies the number of keep-alive packets that a server can miss before a session is disconnected. |
+| `--tm, --trustmyself` | Automatically puts the OPC Publisher certificate into the trusted store. |
+| `--at, --appcertstoretype=VALUE` | The owned application certificate store type. Allowed: `Directory`, `X509Store`. |
+
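+As an illustration, a minimal standalone invocation that uses several of these arguments might look like the following sketch; `<opc-publisher-image>` is a placeholder for your OPC Publisher 2.6 container image:
+
+```bash
+# Run OPC Publisher in standalone mode with JSON-encoded PubSub messages,
+# auto-accepted server certificates, and diagnostics output every 60 seconds.
+docker run -v /iiotedge:/appdata <opc-publisher-image> \
+  --pf=/appdata/publishednodes.json \
+  --mm=PubSub \
+  --me=Json \
+  --ll=info \
+  --di=60 \
+  --aa
+```
+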
+## Command-line arguments for version 2.8.2 and later
+
+The following OPC Publisher configuration settings can be applied through command-line interface (CLI) options or as environment variables.
+
+The `Alternative` field, when it's present, refers to the applicable CLI argument in *standalone mode only*. When both the environment variable and the CLI argument are provided, the CLI argument overrules the environment variable.
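+For example, assuming a containerized standalone deployment (`<opc-publisher-image>` is a placeholder), the same batching setting can be supplied either way:
+
+```bash
+# As an environment variable (works in standalone and orchestrated mode):
+docker run -e "BatchSize=100" <opc-publisher-image>
+
+# As the equivalent CLI argument (standalone mode only). If both are
+# supplied, the CLI value (50) overrules the environment variable (100):
+docker run -e "BatchSize=100" <opc-publisher-image> --bs=50
+```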
+
+| Argument | Description |
+| | |
+| `PublishedNodesFile=VALUE` | The file that's used to store the configuration of the nodes to be published along with the information to connect to the OPC UA server sources. When this file is specified, or the default file is accessible by the module, OPC Publisher starts in *standalone* mode.<br>Alternative: `--pf, --publishfile`<br>Mode: Standalone only<br>Type: `string` - file name, optionally prefixed with the path<br>Default: `publishednodes.json` |
+| `site=VALUE` | The site that OPC Publisher is assigned to.<br>Alternative: `--s, --site`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: \<not set> |
+| `LogFileName=VALUE` | The file name of the log file to use.<br>Alternative: `--lf, --logfile`<br>Mode: Standalone only<br>Type: `string` - file name, optionally prefixed with the path<br>Default: \<not set> |
+| `LogFileFlushTimeSpan=VALUE` | The timespan, in seconds, when the log file should be flushed in the storage account.<br>Alternative: `--lt, --logflushtimespan`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:30}` |
+| `loglevel=VALUE` | The level for logs to be persisted in the log file.<br>Alternative: `--ll, --loglevel`<br>Mode: Standalone only<br>Type: `string enum` - `fatal`, `error`, `warning`, `information`, `debug`, `verbose`<br>Default: `info` |
+| `EdgeHubConnectionString=VALUE` | An IoT Edge device or IoT Edge module connection string to use. When OPC Publisher is deployed as a module in IoT Edge, the environment variable is already set as part of the container deployment.<br>Alternative: `--dc, --deviceconnectionstring` \| `--ec, --edgehubconnectionstring`<br>Mode: Standalone, orchestrated<br>Type: connection string<br>Default: \<not set> (set by the IoT Edge runtime) |
+| `Transport=VALUE` | The protocol to use for upstream communication to the IoT Edge hub or the IoT hub.<br>Alternative: `--ih, --iothubprotocol`<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Any`, `Amqp`, `Mqtt`, `AmqpOverTcp`, `AmqpOverWebsocket`, `MqttOverTcp`, `MqttOverWebsocket`, `Tcp`, `Websocket`<br>Default: `MqttOverTcp` |
+| `BypassCertVerification=VALUE` | Enables/disables the bypassing of certificate verification for upstream communication to EdgeHub.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `EnableMetrics=VALUE` | Enables/disables upstream metrics propagation.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `true` |
+| `DefaultPublishingInterval=VALUE` | The default value for the OPC UA publishing interval of OPC UA subscriptions created to an OPC UA server. This value is used when no explicit setting is configured.<br>Alternative: `--op, --opcpublishinginterval`<br>Mode: Standalone only<br> Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in milliseconds<br>Default: `{00:00:01}` (1000) |
+| `DefaultSamplingInterval=VALUE` | The default value for the OPC UA sampling interval of nodes to publish. This value is used when no explicit setting is configured.<br>Alternative: `--oi, --opcsamplinginterval`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in milliseconds<br>Default: `{00:00:01}` (1000) |
+| `DefaultQueueSize=VALUE` | The default value for the monitored item's queue size, to be used when it isn't explicitly specified in the *pn.json* file.<br>Alternative: `--mq, --monitoreditemqueuecapacity`<br>Mode: Standalone only<br>Type: `integer`<br>Default: `1` |
+| `DefaultHeartbeatInterval=VALUE` | The default value for the heartbeat interval setting of published nodes that have no explicit setting for heartbeat interval.<br>Alternative: `--hb, --heartbeatinterval`<br>Mode: Standalone<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:00}`, which means that heartbeat is disabled |
+| `MessageEncoding=VALUE` | The messaging encoding for outgoing telemetry.<br>Alternative: `--me, --messageencoding`<br>Mode: Standalone only<br>Type: `string enum` - `Json`, `Uadp`<br>Default: `Json` |
+| `MessagingMode=VALUE` | The messaging mode for outgoing telemetry.<br>Alternative: `--mm, --messagingmode`<br>Mode: Standalone only<br>Type: `string enum` - `PubSub`, `Samples`<br>Default: `Samples` |
+| `FetchOpcNodeDisplayName=VALUE` | Fetches the display name for the nodes to be published from the OPC UA server when it isn't explicitly set in the configuration.<br>**Note**: This argument has a high impact on OPC Publisher startup performance.<br>Alternative: `--fd, --fetchdisplayname`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `FullFeaturedMessage=VALUE` | The full-featured mode for messages (all fields filled in the telemetry).<br>Default is `false` for legacy compatibility.<br>Alternative: `--fm, --fullfeaturedmessage`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `BatchSize=VALUE` | The number of incoming OPC UA data change messages to be cached for batching. When `BatchSize` is `1` or `TriggerInterval` is set to `0`, batching is disabled.<br>Alternative: `--bs, --batchsize`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `50` |
+| `BatchTriggerInterval=VALUE` | The batching trigger interval. When `BatchSize` is `1` or `TriggerInterval` is set to `0`, batching is disabled.<br>Alternative: `--si, --iothubsendinterval`<br>Mode: Standalone, orchestrated<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br> Alternative argument type: `integer`, in seconds<br>Default: `{00:00:10}` |
+| `IoTHubMaxMessageSize=VALUE` | The maximum size of the IoT D2C telemetry message.<br>Alternative: `--ms, --iothubmessagesize`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0` |
+| `DiagnosticsInterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (requires at least log level `info`). `-1` disables the remote diagnostics log and diagnostics output.<br>Alternative: `--di, --diagnosticsinterval`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:60}` |
+| `LegacyCompatibility=VALUE` | Forces OPC Publisher to operate in 2.5 legacy mode by using `application/opcua+uajson` for `ContentType` on the IoT hub telemetry message.<br>Alternative: `--lc, --legacycompatibility`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `PublishedNodesSchemaFile=VALUE` | The validation schema file name for the published nodes file.<br>Alternative: `--pfs, --publishfileschema`<br>Mode: Standalone only<br>Type: `string`<br>Default: \<not set> |
+| `MaxNodesPerDataSet=VALUE` | The maximum number of nodes within a dataset or subscription. When more nodes than this value are configured for `DataSetWriter`, they're added in a separate dataset or subscription.<br>Alternative: N/A<br>Mode: Standalone only<br>Type: `integer`<br>Default: `1000` |
+| `ApplicationName=VALUE` | The OPC UA Client Application Configuration application name, as per the OPC UA definition. It's used for authentication during the initial communication handshake and as part of owned certificate validation.<br>Alternative: `--an, --appname`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `Microsoft.Azure.IIoT` |
+| `ApplicationUri=VALUE` | The OPC UA Client Application Configuration application URI, as per the OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"urn:localhost:{ApplicationName}:microsoft:"` |
+| `ProductUri=VALUE` | The OPC UA Client Application Configuration product URI, as per OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `https://www.github.com/Azure/Industrial-IoT` |
+| `DefaultSessionTimeout=VALUE` | The OPC UA Client Application Configuration session time-out, in seconds, as per OPC UA definition.<br>Alternative: `--ct, --createsessiontimeout`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0`, which means \<not set> |
+| `MinSubscriptionLifetime=VALUE` | The OPC UA Client Application Configuration minimum subscription lifetime, in seconds, as per OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0`, \<not set> |
+| `KeepAliveInterval=VALUE` | The OPC UA Client Application Configuration keep-alive interval, as per the OPC UA definition.<br>Alternative: `--ki, --keepaliveinterval`<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `10,000` (10 sec) |
+| `MaxKeepAliveCount=VALUE` | The OPC UA Client Application Configuration maximum number of keep-alive events, as per OPC UA definition.<br>Alternative: `--kt, --keepalivethreshold`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `50` |
+| `PkiRootPath=VALUE` | The OPC UA Client Security Configuration PKI (public key infrastructure) certificate store root path.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `pki` |
+| `ApplicationCertificateStorePath=VALUE` | The OPC UA Client Security Configuration application's owned certificate store path.<br>Alternative: `--ap, --appcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/own"` |
+| `ApplicationCertificateStoreType=VALUE` | The OPC UA Client Security Configuration application's owned certificate store type.<br>Alternative: `--at, --appcertstoretype`<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `ApplicationCertificateSubjectName=VALUE` | The OPC UA Client Security Configuration subject name in the application's owned certificate.<br>Alternative: `--sn, --appcertsubjectname`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `"CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"` |
+| `TrustedIssuerCertificatesPath=VALUE` | The OPC UA Client Security Configuration trusted certificate issuer store path.<br>Alternative: `--ip, --issuercertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/issuers"` |
+| `TrustedIssuerCertificatesType=VALUE` | The OPC UA Client Security Configuration trusted issuer certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `TrustedPeerCertificatesPath=VALUE` | The OPC UA Client Security Configuration trusted peer certificates store path.<br>Alternative: `--tp, --trustedcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/trusted"` |
+| `TrustedPeerCertificatesType=VALUE` | The OPC UA Client Security Configuration trusted peer certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `RejectedCertificateStorePath=VALUE` | The OPC UA Client Security Configuration rejected certificates store path.<br>Alternative: `--rp, --rejectedcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/rejected"` |
+| `RejectedCertificateStoreType=VALUE` | The OPC UA Client Security Configuration rejected certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `AutoAcceptUntrustedCertificates=VALUE` | The OPC UA Client Security Configuration flag to automatically accept untrusted peer certificates.<br>Alternative: `--aa, --autoaccept`<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `RejectSha1SignedCertificates=VALUE` | The OPC UA Client Security Configuration flag to reject deprecated SHA-1-signed certificates.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `MinimumCertificateKeySize=VALUE` | The OPC UA Client Security Configuration minimum accepted certificate key size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `1024` |
+| `AddAppCertToTrustedStore=VALUE` | The OPC UA Client Security Configuration flag to automatically copy the owned certificate's public key to the trusted certificate store.<br>Alternative: `--tm, --trustmyself`<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `true` |
+| `SecurityTokenLifetime=VALUE` | The OPC UA Stack Transport Secure Channel security token lifetime. <br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `3,600,000` (1 hour) |
+| `ChannelLifetime=VALUE` | The OPC UA Stack Transport Secure Channel channel lifetime, in milliseconds.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `300,000` (5 minutes) |
+| `MaxBufferSize=VALUE` | The OPC UA Stack Transport Secure Channel maximum buffer size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in bytes<br>Default: `65,535` (64 KB - 1) |
+| `MaxMessageSize=VALUE` | The OPC UA Stack Transport Secure Channel maximum message size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `4,194,304` (4 MB) |
+| `MaxArrayLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum array length. <br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `65,535` (64 KB - 1) |
+| `MaxByteStringLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum byte string length.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `1,048,576` (1 MB) |
+| `OperationTimeout=VALUE` | The OPC UA Stack Transport Secure Channel service call operation timeout.<br>Alternative: `--ot, --operationtimeout`<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `120,000` (2 min) |
+| `MaxStringLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum length of a string that can be sent/received over the OPC UA secure channel.<br>Alternative: `--ol, --opcmaxstringlen`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `130,816` (128 KB - 256) |
+| `RuntimeStateReporting=VALUE` | Enables reporting of OPC Publisher restarts.<br>Alternative: `--rs, --runtimestatereporting`<br>Mode: Standalone<br>Type: Boolean<br>Default: `false` |
+| `EnableRoutingInfo=VALUE` | Adds the routing information to telemetry messages. The name of the property is `$$RoutingInfo`, and the value is `DataSetWriterGroup` for that particular message. When `DataSetWriterGroup` isn't configured, the `$$RoutingInfo` property isn't added to the message even if this argument is set.<br>Alternative: `--ri, --enableroutinginfo`<br>Mode: Standalone<br>Type: Boolean<br>Default: `false` |
## Next steps
-Further resources can be found in the GitHub repositories:
+
+For additional resources, go to the following GitHub repositories:
> [!div class="nextstepaction"] > [OPC Publisher GitHub repository](https://github.com/Azure/Industrial-IoT) > [!div class="nextstepaction"]
-> [IIoT Platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
+> [Industrial IoT platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
industrial-iot Tutorial Configure Industrial Iot Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-configure-industrial-iot-components.md
Title: Configure the Azure Industrial IoT components
-description: In this tutorial, you learn how to change the default values of the configuration.
+ Title: Configure Azure Industrial IoT components
+description: In this tutorial, you learn how to change the default values of the Azure Industrial IoT configuration.
Last updated 3/22/2021
-# Tutorial: Configure the Industrial IoT components
+# Tutorial: Configure Industrial IoT components
-The deployment script automatically configures all components to work with each other using default values. However, the settings of the default values can be changed to meet your requirements.
+The deployment script automatically configures all Azure Industrial IoT components to work with each other using default values. However, you can change the settings to meet your requirements.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Customize the configuration of the components
--
-Here are some of the more relevant customization settings for the components:
-* IoT Hub
- * Networking→Public access: Configure Internet access, for example, IP filters
- * Networking → Private endpoint connections: Create an endpoint that's not accessible
- through the Internet and can be consumed internally by other Azure services or on-premises devices (for example, through a VPN connection)
- * IoT Edge: Manage the configuration of the edge devices that are connected to the OPC
-UA servers
-* Cosmos DB
- * Replicate data globally: Configure data-redundancy
- * Firewall and virtual networks: Configure Internet and VNET access, and IP filters
- * Private endpoint connections: Create an endpoint that is not accessible through the
-Internet
-* Key Vault
- * Secrets: Manage platform settings
- * Access policies: Manage which applications and users may access the data in the Key
-Vault and which operations (for example, read, write, list, delete) they are allowed to perform on the network, firewall, VNET, and private endpoints
-* Microsoft Azure Active Directory (Azure AD)→App registrations
- * <APP_NAME>-web → Authentication: Manage reply URIs, which is the list of URIs that
-can be used as landing pages after authentication succeeds. The deployment script may be unable to configure this automatically under certain scenarios, such as lack of Azure AD admin rights. You may want to add or modify URIs when changing the hostname of the Web app, for example, the port number used by the localhost for debugging
-* App Service
- * Configuration: Manage the environment variables that control the services or UI
-* Virtual machine
- * Networking: Configure supported networks and firewall rules
- * Serial console: SSH access to get insights or for debugging, get the credentials from the
-output of deployment script or reset the password
-* IoT Hub → IoT Edge
- * Manage the identities of the IoT Edge devices that may access the hub, configure which modules are installed and which configuration they use, for example, encoding parameters for the OPC Publisher
-* IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher (for standalone OPC Publisher operation only)
-
-## Configuration via Command-line Arguments for OPC Publisher 2.8.2 and above
-
-There are [several Command-line Arguments](reference-command-line-arguments.md#opc-publisher-command-line-arguments-for-version-282-and-above) that can be used to set global settings for OPC Publisher.
-Refer to the `mode` part in the command line description to check if a Command-line Argument is applicable to orchestrated or standalone mode.
+> * Customize the configuration of Azure Industrial IoT components
+
+## Customization settings
+
+Here are some of the more relevant customization settings for the components.
+
+### IoT Hub
+
+* Networking (public access): Configure internet access (for example, IP filters).
+* Networking (private endpoint connections): Create an endpoint that's inaccessible through the internet but that can be consumed internally by other Azure services or on-premises devices (for example, through a VPN connection).
+* Azure IoT Edge: Manage the configuration of the edge devices that are connected to the OPC Unified Architecture (OPC UA) servers.
+
+### Azure Cosmos DB
+
+* Replicate data globally: Configure data redundancy.
+* Firewall and virtual networks: Configure internet and virtual network access, and IP filters.
+* Private endpoint connections: Create an endpoint that's inaccessible through the internet.
+
+### Azure Key Vault
+
+* Secrets: Manage platform settings.
+* Access policies: Manage which applications and users may access the data in the key vault, and which operations (for example, read, write, list, delete) they're allowed to perform on the network, firewall, virtual network, and private endpoints.
+
+### Azure Active Directory app registrations
+
+* <APP_NAME>-web (authentication): Manage reply URIs, the list of URIs that can be used as landing pages after authentication succeeds. The deployment script might be unable to configure this automatically under certain scenarios, such as lack of Azure Active Directory (Azure AD) administrator rights. You might want to add or modify URIs when you're changing the hostname of the web app (for example, the port number that's used by the localhost for debugging).
+
+### Azure App Service
+
+* Configuration: Manage the environment variables that control the services or the user interface.
+
+### Azure Virtual Machines
+
+* Networking: Configure supported networks and firewall rules.
+* Serial console: Get Secure Shell (SSH) access for insights or for debugging, get the credentials from the output of the deployment script, or reset the password.
+
+### Azure IoT Hub → Azure IoT Edge
+
+* Manage the identities of the IoT Edge devices that can access the hub. Also, configure which modules are installed and identify which configuration they use (for example, encoding parameters for OPC Publisher).
+
+### IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher
+* This setting applies to *standalone* OPC Publisher operation only.
+
+## Command-line arguments for OPC Publisher version 2.8.2 and later
+
+To establish global settings for OPC Publisher, you can use any of [several command-line arguments](reference-command-line-arguments.md#command-line-arguments-for-version-282-and-later). To learn whether a particular argument applies to *standalone* or *orchestrated* mode, refer to the "Mode" designation in the argument **Description** column of the table.
## Next steps
-Now that you have learned how to change the default values of the configuration, you can
+
+Now that you've learned how to change the default values of the configuration, you can:
> [!div class="nextstepaction"]
-> [Pull IIoT data into ADX](tutorial-industrial-iot-azure-data-explorer.md)
+> [Pull Industrial IoT data into ADX](tutorial-industrial-iot-azure-data-explorer.md)
> [!div class="nextstepaction"]
-> [Visualize and analyze the data using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
+> [Visualize and analyze the data by using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
iot-edge How To Access Dtpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-dtpm.md
A Trusted Platform Module (TPM) chip is a secure crypto-processor that is designed to carry out cryptographic operations. This technology provides hardware-based, security-related functions. The Azure IoT Edge for Linux on Windows (EFLOW) virtual machine doesn't have a virtual TPM attached to the VM. However, the user can enable or disable the TPM passthrough feature, which allows the EFLOW virtual machine to use the Windows host OS TPM. The TPM passthrough feature enables two main scenarios: -- Use TPM technology for IoT Edge device provisioning using Device Provision Service (DPS)
+- Use TPM technology for IoT Edge device provisioning using Device Provisioning Service (DPS)
- Read-only access to cryptographic keys stored inside the TPM. This article describes how to develop sample code in C# to read cryptographic keys stored inside the device TPM.
The following steps show you how to create a sample executable to access a TPM i
1. In **Solution Explorer**, right-click the project name and select **Manage NuGet Packages**.
-1. Select **Browse** and then search for `Microsoft.TSS`.
+1. Select **Browse** and then search for `Microsoft.TSS`. For more information about this package, see [Microsoft.TSS](https://www.nuget.org/packages/Microsoft.TSS).
1. Choose the **Microsoft.TSS** package from the list then select **Install**.
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
EFLOW uses the [route](https://man7.org/linux/man-pages/man8/route.8.html) servi
>[!TIP] >The previous image shows the route command output with the two NIC's assigned (*eth0* and *eth1*). The virtual machine creates two different *default* destinations rules with different metrics. A lower metric value has a higher priority. This routing table will vary depending on the networking scenario configured in the previous steps.
-### Static routes fix
+### Static routes configuration
-Every time EFLOW VM starts, the networking services recreates all routes, and any previously assigned priority could change. To work around this issue, you can assign the desired priority for each route every time the EFLOW VM starts. You can create a service that executes every time the VM starts and use the `route` command to set the desired route priorities.
+Every time the EFLOW VM starts, the networking services recreate all routes, and any previously assigned priority could change. To work around this issue, you can assign the desired priority for each route every time the EFLOW VM starts. You can create a service that executes on every VM boot and uses the `route` command to set the desired route priorities.
First, create a bash script that executes the necessary commands to set the routes. For example, following the networking scenario mentioned earlier, the EFLOW VM has two NICs (offline and online networks). NIC *eth0* is connected using the gateway IP xxx.xxx.xxx.xxx. NIC *eth1* is connected using the gateway IP yyy.yyy.yyy.yyy.
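A minimal sketch of such a script follows. It keeps this article's gateway placeholders (xxx.xxx.xxx.xxx and yyy.yyy.yyy.yyy) and assumes you want the *eth1* route preferred; adjust the interfaces and metrics to your scenario, and run the script as root (for example, from a systemd service).

```bash
#!/bin/bash
# Hedged example of a route-priority script for the EFLOW VM.
# A lower metric value has a higher priority.

# Remove the default routes that the networking services created at boot.
route del default gw xxx.xxx.xxx.xxx dev eth0
route del default gw yyy.yyy.yyy.yyy dev eth1

# Recreate them with explicit metrics, preferring the online network (eth1).
route add default gw yyy.yyy.yyy.yyy dev eth1 metric 100
route add default gw xxx.xxx.xxx.xxx dev eth0 metric 200
```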
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Azure IoT Edge for Linux on Windows emphasizes interoperability between the Linu
For samples that demonstrate communication between Windows applications and Azure IoT Edge modules, see [EFLOW GitHub](https://aka.ms/AzEFLOW-Samples).
+Also, you can use your IoT Edge for Linux on Windows device to act as a transparent gateway for other edge devices. For more information on how to configure EFLOW as a transparent gateway, see [Configure an IoT Edge device to act as a transparent gateway](./how-to-create-transparent-gateway.md).
+ ## Support Use the Azure IoT Edge support and feedback channels to get assistance with Azure IoT Edge for Linux on Windows.
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
key-vault Common Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md
Title: Common error codes for Azure Key Vault | Microsoft Docs description: Common error codes for Azure Key Vault -+ tags: azure-resource-manager
The error codes listed in the following table may be returned by an operation on
| Error code | User message | |--|--|
-| VaultAlreadyExists | Your attempt to create a new key vault with the specified name has failed since the name is already in use. If you recently deleted a key vault with this name, it may still be in the soft deleted state. You can verify if it existis in soft-deleted state [here](./key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) |
+| VaultAlreadyExists | Your attempt to create a new key vault with the specified name has failed since the name is already in use. If you recently deleted a key vault with this name, it may still be in the soft deleted state. You can verify if it exists in soft-deleted state [here](./key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) |
| VaultNameNotValid | The vault name should be a string of 3 to 24 characters and can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-) | | AccessDenied | You may be missing permissions in the access policy to perform that operation. |
-| ForbiddenByFirewall | Client address is not authorized and caller is not a trusted service. |
-| ConflictError | You're requesting multiple operations on same item. |
-| RegionNotSupported | Specified azure region is not supported for this resource. |
-| SkuNotSupported | Specified SKU type is not supported for this resource. |
-| ResourceNotFound | Specified azure resource is not found. |
-| ResourceGroupNotFound | Specified azure resource group is not found. |
+| ForbiddenByFirewall | Client address isn't authorized and caller isn't a trusted service. |
+| ConflictError | You're requesting multiple operations on the same item (for example, a key vault, secret, key, certificate, or a common component within a key vault, such as a virtual network). It's recommended to sequence operations or to implement retry logic. |
+| RegionNotSupported | Specified Azure region isn't supported for this resource. |
+| SkuNotSupported | Specified SKU type isn't supported for this resource. |
+| ResourceNotFound | Specified Azure resource isn't found. |
+| ResourceGroupNotFound | Specified Azure resource group isn't found. |
| CertificateExpired | Check the expiration date and validity period of the certificate. |
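For the soft-delete scenario behind `VaultAlreadyExists`, one way to check and remediate is with the Azure CLI; the vault name is a placeholder:

```bash
# List key vaults that are currently in the soft-deleted state.
az keyvault list-deleted --resource-type vault

# Inspect a specific soft-deleted vault, then recover or purge it.
az keyvault show-deleted --name <your-vault-name>
az keyvault recover --name <your-vault-name>
az keyvault purge --name <your-vault-name>
```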
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Azure has a growing ecosystem of partners offering their network appliances for
**Tobias Kunze - Co-founder & CEO**
-[Learn more](https://glasnostic.com/blog/announcing-glasnostic-for-azure-gateway-load-balancer)
+[Learn more](https://glasnostic.com/blog/announcing-glasnostic-for-azure-gwlb)
### Palo Alto Networks
logic-apps Block Connections Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-across-tenants.md
+
+ Title: Block access from other tenants
+description: Block connections shared by other tenants in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/01/2022
+# Customer intent: As a developer, I want to prevent shared connections with other Azure Active Directory tenants.
++
+# Block connections shared from other tenants in Azure Logic Apps (Preview)
+
+> [!NOTE]
+> This capability is in preview and is subject to the
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Logic Apps includes many connectors for you to build integration apps and workflows and to access various data, apps, services, systems, and other resources. These connectors authorize your access to these resources by using Azure Active Directory (Azure AD) to authenticate your credentials.
+
+When you create a connection from your workflow to access a resource, you can share that connection with others in the same Azure AD tenant or in a different tenant by sending a consent link. This shared connection provides access to the same resource but creates a security vulnerability.
+
+As a security measure to prevent this scenario, you can block access to and from your own Azure AD tenant through such shared connections. You can also permit connections but restrict them to specific tenants only. By setting up a tenant isolation policy, you can better control data movement between your tenant and resources that require Azure AD authorized access.
+
+## Prerequisites
+
+- An Azure subscription and account with owner permissions to set up a new policy or make changes to existing tenant policies.
+
+ > [!NOTE]
+ >
+ > You can apply policies that affect only your own tenant, not other tenants.
+
+- Collect the following information:
+
+ - The tenant ID for your Azure AD tenant.
+
+ - The choice whether to enforce two-way tenant isolation for connections that don't have a client tenant ID.
+
+ For example, some legacy connections might not have an associated tenant ID. So, you have to choose whether to block or allow such connections.
+
+ - The choice whether to enable or disable the isolation policy.
+
+ - The tenant IDs for any tenants where you want to allow connections to or from your tenant.
+
+ If you choose to allow such connections, include the following information:
+
+ - The choice whether to allow inbound connections to your tenant from each allowed tenant.
+
+ - The choice whether to allow outbound connections from your tenant to each allowed tenant.
+
+- To test the tenant isolation policy, you need a second Azure AD tenant. From this tenant, you'll try connecting to and from the isolated tenant after the isolation policy takes effect.
+
+## Request an isolation policy for your tenant
+
+To start this process, you'll request a new isolation policy or update your existing isolation policy for your tenant. Only Azure subscription owners can request new policies or changes to existing policies.
+
+1. Open a Customer Support ticket to request a new isolation policy or update your existing isolation policy for your tenant.
+
+1. Wait for the request to finish verification and processing by the person who handles the support ticket.
+
+ > [!NOTE]
+ >
+ > Policies take effect immediately in the West Central US region. However, these changes
+ > might take up to four hours to replicate in all other regions.
+
+## Test the isolation policy
+
+After the policy takes effect in a region, test the policy. You can try immediately in the West Central US region.
+
+### Test inbound connections to your tenant
+
+1. Sign in to your "other" Azure AD tenant.
+
+1. Create a logic app workflow with a connection, such as Office 365 Outlook.
+
+1. Try to sign in to your isolated tenant.
+
+ You get a message that the connection to the isolated tenant has failed authorization due to a tenant isolation configuration.
+
+### Test outbound connections from your tenant
+
+1. Sign in to your isolated tenant.
+
+1. Create a logic app workflow with a connection, such as Office 365 Outlook.
+
+1. Try to sign in to your other tenant.
+
+ You get a message that the connection to your other tenant has failed authorization due to a tenant isolation configuration.
+
+## Next steps
+
+[Block connector usage in Azure Logic Apps](block-connections-connectors.md)
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
Title: Block connections for specific API connectors
-description: Restrict creating and using API connections in Azure Logic Apps.
+ Title: Block connector usage
+description: Block creating and using specific API connections in Azure Logic Apps.
ms.suite: integration
Last updated 05/18/2022
-# Block connections created by connectors in Azure Logic Apps
+# Block connector usage in Azure Logic Apps
If your organization doesn't permit connecting to restricted or unapproved resources using their [managed connectors](../connectors/managed.md) in Azure Logic Apps, you can block the capability to create and use those connections in logic app workflows. With [Azure Policy](../governance/policy/overview.md), you can define and enforce [policies](../governance/policy/overview.md#policy-definition) that prevent creating or using connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 07/14/2022 Last updated : 07/30/2022
The built-in HTTP trigger or action can use the system-assigned identity that yo
As a specific example, suppose that you want to run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob) on a blob in the Azure Storage account where you previously set up access for your identity. However, the [Azure Blob Storage connector](/connectors/azureblob/) doesn't currently offer this operation. Instead, you can run this operation by using the [HTTP action](logic-apps-workflow-actions-triggers.md#http-action) or another [Blob Service REST API operation](/rest/api/storageservices/operations-on-blobs). > [!IMPORTANT]
-> To access Azure storage accounts behind firewalls by using HTTP requests and managed identities,
-> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-in-same-region-with-managed-identities).
+> To access Azure storage accounts behind firewalls by using the Azure Blob connector and managed identities,
+> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-in-same-region-with-system-managed-identities).
To run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob), the HTTP action specifies these properties:
logic-apps Handle Long Running Stored Procedures Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-long-running-stored-procedures-sql-connector.md
Here are the steps to add:
@step_timeout_seconds = 30, @command= N' IF NOT EXISTS(SELECT [jobid] FROM [dbo].[LongRunningState]
- WHERE jobid = $(job_execution_id)
+ WHERE jobid = $(job_execution_id))
THROW 50400, ''Failed to locate call parameters (Step1)'', 1', @credential_name='JobRun', @target_group_name='DatabaseGroupLongRunning'
Here are the steps to add:
DECLARE @timespan char(8) DECLARE @callparams NVARCHAR(MAX) SELECT @callparams = [parameters] FROM [dbo].[LongRunningState]
- WHERE jobid = $(job_execution_id))
+ WHERE jobid = $(job_execution_id)
SET @timespan = @callparams EXECUTE [dbo].[WaitForIt] @delay = @timespan', @credential_name='JobRun',
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
Title: Create plans for a SaaS offer in Azure Marketplace
+ Title: Create plans for a SaaS offer in Azure Marketplace | Azure Marketplace
description: Create plans for a new software as a service (SaaS) offer in Azure Marketplace. Previously updated : 03/16/2022 Last updated : 07/15/2022 # Create plans for a SaaS offer
Every plan must be available in at least one market. On the **Pricing and availa
You must associate a pricing model with each plan: either _flat rate_ or _per user_. All plans in the same offer must use the same pricing model. For example, an offer cannot have one plan that's flat rate and another plan that's per user. For more information, see [SaaS pricing models](plan-saas-offer.md#saas-pricing-models). > [!IMPORTANT]
-> After your offer is published, you cannot change the pricing model. In addition, all plans for the same offer must share the same pricing model.
+> After your offer is published, you cannot change the pricing model. In addition, all plans for the same offer must share the same pricing model.<br><br>
+> If you choose to configure a 2-year or 3-year billing term, or a 1-year billing term with a monthly payment option, your offer will be published to Azure Marketplace only. If you update an offer that is currently published live on AppSource with a multi-year billing term, the offer will be delisted from AppSource and published to Azure Marketplace only.
### Configure flat rate pricing 1. On the **Pricing and availability** tab, under **Pricing**, select **Flat rate**.
-1. Select either the **Monthly** or **Annual** check box, or both and then enter the price.
+1. Configure the **Billing terms** you want. You can add 1-month, 1-year, 2-year, and 3-year billing terms. For each billing term you add, configure the _payment option_ to set the payment schedule. One payment option per billing term is supported for the same plan.
+1. Enter the price for each payment occurrence.
+
+> [!NOTE]
+> To add another payment option for the same term, create a new plan.
### Add a custom meter dimension
This option is available only if you selected flat rate pricing. For more inform
1. In the **Display Name** box, enter the display name associated with the dimension. For example, "text messages sent". 1. In the **Unit of Measure** box, enter the description of the billing unit. For example, "per text message" or "per 100 emails". 1. In the **Price per unit in USD** box, enter the price for one unit of the dimension.
-1. In the **Monthly quantity included in base** box, enter the quantity (as an integer) of the dimension that's included each month for customers who pay the recurring monthly fee. To set an unlimited quantity, select the check box instead.
-1. In the **Annual quantity included in base** box, enter the quantity of the dimension (as an integer) that's included each month for customers who pay the recurring annual fee. To set an unlimited quantity, select the check box instead.
-1. To add another custom meter dimension, select the **Add another Dimension** link, and then repeat steps 1 through 7.
+1. For each billing term you enable on the plan, in the corresponding **quantity included in base** box, enter the quantity (as an integer) of the dimension that's included for the entire billing term. To set an unlimited quantity, select the check box instead.
+1. To add another custom meter dimension, select the **Add another Dimension** link, and then repeat steps 1 through 6.
> [!IMPORTANT] > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
This option is available only if you selected flat rate pricing. For more inform
### Configure per user pricing 1. On the **Pricing and availability** tab, under **Pricing**, select **Per User**.
-2. If applicable, under **User limits**, specify the minimum and maximum number of users for this plan.
-3. Under **Billing term**, specify a monthly price, annual price, or both.
+1. If applicable, under **User limits**, specify the minimum and maximum number of users for this plan.
+1. Add the _billing terms_ you want: 1-month, 1-year, 2-year, and 3-year billing terms can be added.
+1. For each billing term, select the _payment option_ to set the payment schedule. Only one payment option per term can be configured on the same plan.
+1. Enter the price for each payment occurrence.
+
+> [!NOTE]
+> To add another payment option for the same billing term, create a new plan.
### Validate custom prices
To set custom prices in an individual market, export, modify, and then import th
3. In the dialog box that appears, click **Yes**. 4. Select the exportedPrice.xlsx file you updated, and then click **Open**.
+> [!NOTE]
+> For billing terms that support multiple payment options, only one payment option is allowed per billing term. The prices for a given billing term must be defined for the same payment option across all markets that are selected for the plan.
+ ### Enable a free trial You can configure a free trial for each plan in your offer. Select the check box to allow a one-month free trial. This check box isn't available for plans that use the marketplace metering service. For more information, see [Free trials](plans-pricing.md#free-trials).
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
Previously updated : 05/25/2022 Last updated : 08/01/2022 # Review and publish a Dynamics 365 offer
Your offer's publish status will change as it moves through the publication proc
| Pending stop distribution | Publisher selected "stop distribution" on an offer or plan, but the action has not yet been completed. | | Not available in the marketplace | A previously published offer in the marketplace has been removed. |
+> [!TIP]
+> After publishing an offer, the [owner](user-roles.md) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
+ ## Preview and subscribe to the offer When the offer is ready for you to test in the preview environment, weΓÇÖll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see if your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview link will be available. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
Title: Microsoft commercial marketplace transact capabilities
+ Title: Microsoft commercial marketplace transact capabilities | Azure Marketplace
description: This article describes pricing, billing, invoicing, and payout considerations for the commercial marketplace transact option. Previously updated : 07/18/2022 Last updated : 07/27/2022
The transact publishing option is currently supported for the following offer ty
| | - | - | - | | Azure Application <br>(Managed application) | Monthly | Yes | Usage-based | | Azure Virtual Machine | Monthly [1] | No | Usage-based, BYOL |
-| Software as a service (SaaS) | Monthly and annual | Yes | Flat rate, per user, usage-based. |
-| Dynamics 365 apps on Dataverse and Power Apps [2] | Monthly and annual | No | Per user |
-| Power BI visual [3] | Monthly and annual | No | Per user |
+| Software as a service (SaaS) | One-time upfront, monthly, annual [2,3] | Yes | Flat rate, per user, usage-based. |
+| Dynamics 365 apps on Dataverse and Power Apps [4] | Monthly and annual | No | Per user |
+| Power BI visual [5] | Monthly and annual | No | Per user |
[1] Azure Virtual Machine offers support usage-based billing plans. These plans are billed monthly for hourly use of the subscription based on per core, per core size, or per market and core size usage.
-[2] Dynamics 365 apps on Dataverse and Power Apps offers that you transact through Microsoft are automatically enabled for license management. See [ISV app license management for Dynamics 365 apps on Dataverse and Power Apps](isv-app-license.md).
+[2] SaaS plans support monthly, 1-year, 2-year, and 3-year terms that can be billed either monthly or for the entire term upfront. See [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#plans).
-[3] Power BI visual offers that you transact through Microsoft are automatically enabled for license management. See [ISV app license management for Power BI visual offers](isv-app-license-power-bi-visual.md).
+[3] If you choose to configure a 2-year or 3-year billing term, or a 1-year billing term with a monthly payment option, your offer will be published to Azure Marketplace only. If you update an offer that is currently published live on AppSource with a multi-year billing term, the offer will be delisted from AppSource and published to Azure Marketplace only.
+
+[4] Dynamics 365 apps on Dataverse and Power Apps offers that you transact through Microsoft are automatically enabled for license management. See [ISV app license management for Dynamics 365 apps on Dataverse and Power Apps](isv-app-license.md).
+
+[5] Power BI visual offers that you transact through Microsoft are automatically enabled for license management. See [ISV app license management for Power BI visual offers](isv-app-license-power-bi-visual.md).
### Metered billing
For **SaaS Apps**, the publisher must account for Azure infrastructure usage fee
Depending on the transaction option used, subscription charges are as follows: -- **Subscription pricing**: Software license fees are presented as a monthly or annual, recurring subscription fee billed as a flat rate or per-seat. Recurrent subscription fees are not prorated for mid-term customer cancellations, or unused services. Recurrent subscription fees may be prorated if the customer upgrades or downgrades their subscription in the middle of the subscription term.
+- **Subscription pricing**: Software license fees are presented as a recurring subscription fee billed as a flat rate or per-seat:
+ - SaaS plans support monthly, 1-year, 2-year, and 3-year terms that can be billed either monthly or for the entire term upfront. See [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#plans).
+ - Azure virtual machine plans support monthly, 1-year, and 3-year plans that are billed monthly. See [Plan a virtual machine offer](marketplace-virtual-machines.md#plans-pricing-and-trials).
+ - Azure application (Managed application) plans are billed monthly. See [Plan a managed application](plan-azure-app-managed-app.md#define-pricing).
- **Usage-based pricing**: For Azure Virtual Machine offers, customers are charged based on the extent of their use of the offer. For Virtual Machine images, customers are charged an hourly Azure Marketplace fee, as set by the publisher, for use of virtual machines deployed from the VM images. The hourly fee may be uniform or varied across virtual machine sizes. Partial hours are charged by the minute. Plans are billed monthly.-- **Metered pricing**: For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](marketplace-metering-service-apis.md) to bill for consumption based on the custom meter dimensions they configure. These changes are in addition to monthly or annual charges included in the contract (entitlement). Examples of custom meter dimensions are bandwidth, tickets, or emails processed. Publishers can define one or more metered dimensions for each plan but a maximum of 30 per offer. Publishers are responsible for tracking individual customer usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour of occurrence. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.
+- **Metered pricing**: For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](marketplace-metering-service-apis.md) to bill for consumption based on the custom meter dimensions they configure. These changes are in addition to the flat rate charges included in the contract (entitlement). Examples of custom meter dimensions are bandwidth, tickets, or emails processed. Publishers can define one or more metered dimensions for each plan but a maximum of 30 per offer. Publishers are responsible for tracking individual customer usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour of occurrence. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.
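+
+As a hedged sketch of what reporting a usage event can look like with the metering service REST API, where the resource ID, dimension name, plan ID, and token are placeholders:
+
+```bash
+# Report 5 units of a custom meter dimension for a customer's purchase.
+# All identifiers below are illustrative placeholders.
+curl -X POST "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" \
+  -H "Authorization: Bearer <access-token>" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "resourceId": "<purchased-resource-id>",
+        "quantity": 5.0,
+        "dimension": "emails_processed",
+        "effectiveStartTime": "2022-08-01T00:00:00Z",
+        "planId": "<plan-id>"
+      }'
+```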
> [!NOTE]
> Offers that are billed according to consumption after a solution has been used are not eligible for refunds.
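To make the reporting flow concrete, here's a minimal sketch of a usage-event call against the metering service APIs linked above. The endpoint and payload shape follow the published API reference, but treat the resource ID, plan ID, dimension name, and token handling as placeholders to adapt:

```python
import datetime
import requests

# Marketplace metering service endpoint; verify the API version against the
# linked reference before use.
METERING_ENDPOINT = "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"

def report_usage(access_token: str, resource_id: str, plan_id: str,
                 dimension: str, quantity: float) -> dict:
    """Report one custom-meter usage event; call within an hour of the usage occurring."""
    event = {
        "resourceId": resource_id,          # placeholder: SaaS subscription / managed app resource ID
        "planId": plan_id,                  # placeholder: plan the customer purchased
        "dimension": dimension,             # placeholder: custom meter dimension, e.g. "emails-processed"
        "quantity": quantity,               # units consumed since the last report
        "effectiveStartTime": datetime.datetime.utcnow().isoformat() + "Z",
    }
    response = requests.post(
        METERING_ENDPOINT,
        json=event,
        headers={"Authorization": f"Bearer {access_token}"},
    )
    response.raise_for_status()
    return response.json()
```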
In this scenario, Microsoft bills $0.14 per hour for use of your published VM im
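As a worked example of the per-minute charging described above, assuming straightforward per-minute proration of the hourly fee:

```python
# Worked example: per-minute proration of an hourly Azure Marketplace fee.
# $0.14/hour for 3 hours 25 minutes of VM usage.
hourly_fee = 0.14
minutes_used = 3 * 60 + 25

charge = hourly_fee * (minutes_used / 60)
print(f"${charge:.4f}")  # $0.4783 for 205 minutes at $0.14/hour
```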
**SaaS app subscription**
-SaaS subscriptions can be priced at a flat rate or per user on a monthly or annual basis. If you enable the **Sell through Microsoft** option for a SaaS offer, you have the following cost structure:
+SaaS subscriptions can be priced at a flat rate or per user. If you enable the **Sell through Microsoft** option for a SaaS offer, you have the following cost structure:
| **Your license cost** | **$100.00 per month** |
|--||
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
Title: Plan a SaaS offer for the Microsoft commercial marketplace - Azure Marketplace
+ Title: Plan a SaaS offer for the Microsoft commercial marketplace | Azure Marketplace
description: Plan a new software as a service (SaaS) offer for selling in Microsoft AppSource, Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace program in Microsoft Partner Center. Previously updated : 06/29/2022 Last updated : 07/15/2022
# Plan a SaaS offer for the commercial marketplace
If you're creating a transactable offer, you'll need to gather the following i
> [!NOTE]
> Inside the Azure portal, we require that you create a single-tenant [Azure Active Directory (Azure AD) app registration](../active-directory/develop/howto-create-service-principal-portal.md). Use the app registration details to authenticate your solution when calling the marketplace APIs. To find the [tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in), go to your Azure Active Directory and select **Properties**, then look for the Directory ID number that's listed. For example, `50c464d3-4930-494c-963c-1e951d15360e`.
-- **Azure Active Directory tenant ID**: (also known as directory ID). Inside the Azure portal, we require you to [register an Azure Active Directory (AD) app](../active-directory/develop/howto-create-service-principal-portal.md) so we can add it to the access control list (ACL) of the API to make sure you are authorized to call it. To find the tenant ID for your Azure Active Directory (AD) app, go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) blade in Azure Active Directory. In the **Display name** column, select the app. Then look for the **Directory (tenant) ID** number listed (for example, `50c464d3-4930-494c-963c-1e951d15360e`).
+- **Azure Active Directory tenant ID**: (also known as directory ID). Inside the Azure portal, we require you to [register an Azure Active Directory (Azure AD) app](../active-directory/develop/howto-create-service-principal-portal.md) so we can add it to the access control list (ACL) of the API to make sure you are authorized to call it. To find the tenant ID for your Azure Active Directory app, go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) blade in Azure Active Directory. In the **Display name** column, select the app. Then look for the **Directory (tenant) ID** number listed (for example, `50c464d3-4930-494c-963c-1e951d15360e`).
- **Azure Active Directory application ID**: You also need your [application ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in). To get its value, go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) blade in Azure Active Directory. In the **Display name** column, select the app. Then look for the Application (client) ID number listed (for example, `50c464d3-4930-494c-963c-1e951d15360e`).
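Once collected, the tenant ID and application ID are the inputs to token acquisition when you call the marketplace APIs. The following is a minimal sketch using the MSAL Python library; the client secret and the marketplace API scope shown are assumptions to verify against the marketplace API documentation for your scenario:

```python
import msal

# Values collected above (placeholders shown)
TENANT_ID = "50c464d3-4930-494c-963c-1e951d15360e"   # Directory (tenant) ID
CLIENT_ID = "50c464d3-4930-494c-963c-1e951d15360e"   # Application (client) ID
CLIENT_SECRET = "<app-registration-secret>"           # placeholder: secret from your app registration

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Request a token for the commercial marketplace APIs; the resource GUID below
# is the marketplace API application ID -- confirm it in the API docs.
result = app.acquire_token_for_client(
    scopes=["20e940b3-4c77-4b0b-9a53-9e16a1b010a7/.default"]
)
access_token = result.get("access_token")  # check result for an "error" key in production
```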
If you choose to use the standard contract, you have the option to add universal
## Microsoft 365 integration
Integration with Microsoft 365 allows your SaaS offer to provide a connected experience across multiple Microsoft 365 App surfaces through related free add-ins like Teams apps, Office add-ins, and SharePoint Framework solutions. You can help your customers easily discover all facets of your E2E solution (web service + related add-ins) and deploy them within one process by providing the following information.
- - If your SaaS offer integrates with Microsoft Graph, then provide the Azure Active Directory (AAD) App ID used by your SaaS offer for the integration. Administrators can review access permissions required for the proper functioning of your SaaS offer as set on the AAD App ID and grant access if advanced admin permission is needed at deployment time.
+ - If your SaaS offer integrates with Microsoft Graph, then provide the Azure Active Directory (Azure AD) App ID used by your SaaS offer for the integration. Administrators can review access permissions required for the proper functioning of your SaaS offer as set on the Azure AD App ID and grant access if advanced admin permission is needed at deployment time.
- If you choose to sell your offer through Microsoft, then this is the same AAD App ID that you have registered to use on your landing page to get basic user information needed to complete customer subscription activation. For detailed guidance, see [Build the landing page for your transactable SaaS offer in the commercial marketplace](azure-ad-transactable-saas-landing-page.md).
+ If you choose to sell your offer through Microsoft, then this is the same Azure AD App ID that you have registered to use on your landing page to get basic user information needed to complete customer subscription activation. For detailed guidance, see [Build the landing page for your transactable SaaS offer in the commercial marketplace](azure-ad-transactable-saas-landing-page.md).
- Provide a list of the related add-ins that work with your SaaS offer and that you want to link. Customers will be able to discover your E2E solution on Microsoft AppSource and administrators can deploy both the SaaS and all the related add-ins you have linked in the same process via Microsoft 365 admin center.
IT admins can review and deploy both the SaaS and linked add-ins within the same
Discovery as a single E2E solution is supported on AppSource for all cases; however, simplified deployment of the E2E solution as described above via the Microsoft 365 admin center is not supported for the following scenarios:
 - "Contact me" list-only offers.
+ - "Contact me" list-only offers.
- The same add-in is linked to more than one SaaS offer.
- - The SaaS offer is linked to add-ins, but it does not integrate with Microsoft Graph and no AAD App ID is provided.
- - The SaaS offer is linked to add-ins, but AAD App ID provided for Microsoft Graph integration is shared across multiple SaaS offers.
-
+ - The SaaS offer is linked to add-ins, but it does not integrate with Microsoft Graph and no Azure AD App ID is provided.
+ - The SaaS offer is linked to add-ins, but Azure AD App ID provided for Microsoft Graph integration is shared across multiple SaaS offers.
+
## Offer listing details
When you [create a new SaaS offer](create-new-saas-offer.md) in Partner Center, you will enter text, images, optional videos, and other details on the **Offer listing** page. This is the information that customers will see when they discover your offer listing in the commercial marketplace, as shown in the following example.
SaaS offers can use one of two pricing models with each plan: either _flat rate_
> [!IMPORTANT]
> After your offer is published, you cannot change the pricing model. In addition, all plans for the same offer must share the same pricing model.
+### SaaS billing terms and payment options
+
+The _billing term_ is the plan duration the customer commits to, and the _payment option_ is the payment schedule the customer follows to pay for the entire term. SaaS apps support 1-month, 1-year, 2-year, and 3-year billing terms with options to pay one time upfront or in equal payments (where applicable).
+
+This table shows the payment options for SaaS offers in the commercial marketplace.
+
+| Billing term | One-time upfront payment | Monthly equal payments | Annual equal payments |
+| --- | --- | --- | --- |
+| 1-month | Yes | NA | NA |
+| 1-year | Yes | Yes | NA |
+| 2-year | Yes | Yes | Yes |
+| 3-year | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> If you choose to configure a 2-year or 3-year billing term, or a 1-year billing term with a monthly payment option, your offer will be published to Azure Marketplace only. If you update an offer that is currently published live on AppSource with a multi-year billing term, the offer will be delisted from AppSource and published to Azure Marketplace only.
+
+You can choose to configure one or more billing terms on a plan. For each billing term you define, you can select one payment option (monthly or annual payments) and set the price for each payment option. For example, to encourage a potential customer to subscribe to a longer billing term, you could offer a 2-year billing term for $100.00 and a 3-year billing term for $90.00.
+
+> [!NOTE]
+> Only one payment option is supported for a billing term on a given plan. To offer an additional payment option for the same term, you can create another plan.
+
+For billing terms with equal payments, payment collection will be enforced for the entire term and the [standard refund policy](/marketplace/refund-policies) applies. For more information about SaaS subscription management, see [SaaS subscription lifecycle management](/marketplace/saas-subscription-lifecycle-management).
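To make the relationship between billing terms and payment options concrete, here's an illustrative sketch. The payment counts come from the table above; the prices are placeholders, and whether a configured price applies per payment or per term should be confirmed in Partner Center:

```python
# Number of equal payments for each supported billing term / payment option
# combination, per the table above.
PAYMENTS = {
    ("1-month", "upfront"): 1,
    ("1-year", "upfront"): 1, ("1-year", "monthly"): 12,
    ("2-year", "upfront"): 1, ("2-year", "monthly"): 24, ("2-year", "annual"): 2,
    ("3-year", "upfront"): 1, ("3-year", "monthly"): 36, ("3-year", "annual"): 3,
}

def total_collected(term: str, option: str, price_per_payment: float) -> float:
    """Total amount collected over the term; payment collection is enforced
    for the whole term when equal payments are chosen."""
    return PAYMENTS[(term, option)] * price_per_payment

# Example: a 3-year term billed in annual payments of $90.00 each.
print(total_collected("3-year", "annual", 90.00))  # 270.0
```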
### SaaS billing
For SaaS apps that run in your (the publisher's) Azure subscription, infrastructure usage is billed to you directly; customers do not see actual infrastructure usage fees. You should bundle Azure infrastructure usage fees into your software license pricing to compensate for the cost of the infrastructure you deployed to run the solution.
-SaaS app offers that are sold through Microsoft support monthly or annual billing based on a flat fee, per user, or consumption charges using the [metered billing service](./partner-center-portal/saas-metered-billing.md). The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee.
+SaaS app offers that are sold through Microsoft support one-time upfront, monthly, or annual billing (payment options) based on a flat fee, per user, or consumption charges using the [metered billing service](./partner-center-portal/saas-metered-billing.md). The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee.
The following example shows a sample breakdown of costs and payouts to demonstrate the agency model. In this example, Microsoft bills $100.00 to the customer for your software license and pays out $97.00 to the publisher.
The following example shows a sample breakdown of costs and payouts to demonstra
| **Microsoft bills** | **$100 per month** |
| Microsoft charges a 3% Marketplace Service Fee and pays you 97% of your license cost | $97.00 per month |
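The arithmetic behind this table is a straight percentage split (the 3% rate is the example service fee shown here, not a guaranteed rate):

```python
license_price = 100.00
service_fee_rate = 0.03  # example Marketplace Service Fee from the table above

payout = license_price * (1 - service_fee_rate)
print(f"${payout:.2f} per month")  # $97.00 per month
```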
-A preview audience can access your offer prior to being published live in the online stores. They can see how your offer will look in the commercial marketplace and test the end-to-end functionality before you publish it live.
+## Preview audience
+
+A preview audience can access your offer before it's published live in the online stores. They can see how your offer will look in the commercial marketplace and test the end-to-end functionality before you publish it live.
On the **Preview audience** page, you can define a limited preview audience. This setting is not available if you choose to process transactions independently instead of selling your offer through Microsoft. If so, you can skip this section and go to [Additional sales opportunities](#additional-sales-opportunities).
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Title: Plans and pricing for commercial marketplace offers
+ Title: Plans and pricing for commercial marketplace offers | Azure Marketplace
description: Learn about plans for Microsoft commercial marketplace offers in Partner Center. Previously updated : 04/01/2022 Last updated : 07/27/2022
# Plans and pricing for commercial marketplace offers
The commercial marketplace operates on an agency model, whereby publishers set p
### Pricing models
-You must associate a pricing model with each plan for the following offer types. Each of these offer types have different available pricing models:
+You must associate a pricing model with each plan for the following offer types. Each of these offer types has different available pricing models:
- **Azure managed application**: flat rate (monthly) and usage-based pricing (metering service dimensions).
-- **Software as a service**: flat rate (monthly or annual), per user, and usage-based pricing (metering service dimensions).
+- **Software as a service**: flat rate (1-month to 3-year terms), per user (1-month to 3-year terms), and usage-based pricing (metering service dimensions).
+
+ > [!NOTE]
+ > For flat rate and per user pricing models, customers can pay either monthly, annually, or one-time upfront for the entire 1-year, 2-year, or 3-year term.
- **Azure virtual machine**: Bring your own license (BYOL) and usage-based pricing. For a usage-based pricing model, you can charge per core, per core size, or per market and core size. A BYOL license model does not allow for additional, usage-based charges. (BYOL virtual machine offers do not require a pricing model.)
An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that's flat rate and another plan that's per user. However, a SaaS offer can have some plans with flat rate with metered billing and other flat rate plans without metered billing. See specific offer documentation for detailed information.
marketplace Review Publish Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/review-publish-offer.md
Previously updated : 06/29/2022 Last updated : 08/01/2022
# Review and publish an offer to the commercial marketplace
You can review your offer status on the **Overview** tab of the commercial marke
| Pending Stop distribution | Publisher selected "Stop distribution" on an offer or plan, but the action has not yet been completed. |
| Not available in the marketplace | A previously published offer in the marketplace has been removed. |
+> [!TIP]
+> After publishing an offer, the [owner](user-roles.md) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
+
## Validation and publishing steps
When you are ready to submit an offer for publishing, select **Review and publish** at the upper-right corner of the portal. The **Review and publish** page shows the status of each page for your offer, which can be one of the following:
The offer is tested across various platforms and versions to ensure it's robus
### Certification failure report
-If your offer fails any of the listing, technical, or policy checks, or if you are not eligible to submit an offer of that type, we email a certification failure report to you.
+If your offer fails any of the listing, technical, or policy checks, or if you are not eligible to submit an offer of that type, we provide a certification failure report to you through email and the Action Center in Partner Center.
This report contains descriptions of any policies that failed, along with review notes. Review this email report, address any issues, make updates to your offer where needed, and resubmit the offer using the [commercial marketplace portal](https://go.microsoft.com/fwlink/?linkid=2165935) in Partner Center. You can resubmit the offer as many times as needed until passing certification.
marketplace Test Publish Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-publish-saas-offer.md
This article explains how to use Partner Center to submit your SaaS offer for pu
Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+> [!TIP]
+> After publishing an offer, the [owner](user-roles.md) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
+ ## Preview and test your offer When the offer is ready for your sign off, weΓÇÖll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see if your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview links will be available. There will be a link for either Microsoft AppSource preview, Azure Marketplace preview, or both depending on the options you chose when creating your offer. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/update-existing-offer.md
Previously updated : 04/21/2022 Last updated : 06/01/2022
This article explains how to make updates to existing offers and plans, and also how to remove an offer from the commercial marketplace. You can view your offers in the [Commercial Marketplace portal](https://go.microsoft.com/fwlink/?linkid=2165935) in Partner Center.
+> [!TIP]
+> After publishing an offer, the [owner](user-roles.md) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
+ ## Request access to manage an offer If you see an offer you need to update but donΓÇÖt have access, contact the publisher owner(s) associated with the offer. On the [**Marketplace offers**](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, the owner list for an inaccessible offer is available by selecting **Request access** in the **Status** column of the table. A publisher owner can grant you the _developer_ or _manager_ role for the offer by following the instructions to [add existing users](add-manage-users.md#add-existing-users) to their account.
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 07/26/2022 Last updated : 08/01/2022 # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
na Previously updated : 11/06/2020 Last updated : 07/31/2022
Microsoft has partnered with internet service providers (ISPs), internet exchang
This article provides information on the connectivity providers that are partnered with Microsoft to offer Azure Peering Service connection to customers.
-
## Peering Service partners list
The table in this article provides information on the Peering Service connectivity partners and their associated markets.
| **Partners** | **Market**|
|--||
-| [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) |North America, Europe, Asia|
+| [Atman](https://www.atman.pl/en/atman-internet-maps/) |Europe|
| [BBIX](https://www.bbix.net/en/service/) |Japan |
+| [BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/microsoft-azure-cloud-connect/) |Europe|
| [CCL](https://concepts.co.nz/news/general-news/) |Oceania |
+| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa|
| [Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/)|Europe, Asia|
+| [Converge ICT](https://www.convergeict.com/enterprise/microsoft-azure-peering-service-maps/) |Asia|
| [DE-CIX](https://www.de-cix.net/)|Europe, North America |
| [IIJ](https://www.iij.ad.jp/en/) | Japan |
| [Intercloud](https://intercloud.com/microsoft-saas-applications/)|Europe |
| [Kordia](https://www.kordia.co.nz/cloudconnect) |Oceania |
+| [LINX](https://www.linx.net/services/microsoft-azure-peering/) |Europe|
| [Liquid Telecom](https://liquidcloud.africa/keep-expanding-365-direct/) | Africa |
+| [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) |North America, Europe, Asia|
+| [MainOne](https://www.mainone.net/connectivity-services/) |Africa|
+| [NAP Africa](https://www.napafrica.net/technical/microsoft-azure-peering-service/) |Africa|
| [NTT Communications](https://www.ntt.com/en/services/network/software-defined-network.html) | Japan, Indonesia |
| [PCCW](https://www.pccwglobal.com/en/enterprise/products/network/ep-global-internet-access) |Asia |
| [Singtel](https://www.singtel.com/business/campaign/singnet-cloud-connect-microsoft-direct) |Asia |
-| [NAP Africa](https://www.napafrica.net/technical/microsoft-azure-peering-service/) |Africa|
| [Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/wireline/ip-plus.html) |Europe|
-| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa|
-| [MainOne](https://www.mainone.net/connectivity-services/) |Africa|
-| [BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/microsoft-azure-cloud-connect/) |Europe|
| [Telstra International](https://www.telstra.com.sg/en/products/global-networks/global-internet/global-internet-direct) |Asia, Europe |
-| [Atman](https://www.atman.pl/en/atman-internet-maps/) |Europe|
-| [LINX](https://www.linx.net/services/microsoft-azure-peering/) |Europe|
-| [Converge ICT](https://www.convergeict.com/enterprise/microsoft-azure-peering-service-maps/) |Asia|
-
-
> [!NOTE]
> For more information about enlisting with the Peering Service Partner program, reach out to peeringservice@microsoft.com.
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
PostgreSQL - Hyperscale (Citus).
Depending on which version of PostgreSQL is running in a server group, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL versions 12-14 come with
+will be installed as well. In particular, PostgreSQL 14 comes with Citus 11, PostgreSQL versions 12 and 13 come with
Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
## Next steps
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 07/26/2022 Last updated : 08/01/2022 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table for the packet core instance that
|The core technology type the packet core instance should support (5G or 4G). |**Technology type**|
|The custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).</br></br> If you're going to create your site using the Azure portal, collect the name of the custom location.</br></br> If you're going to create your site using an ARM template, collect the full resource ID of the custom location.|**Custom location**|
-
-
## Collect access network values
Collect all the values in the following table to define the packet core instance's connection to the access network over the control plane and user plane interfaces. The field name displayed in the Azure portal will depend on the value you have chosen for **Technology type**, as described in [Collect packet core configuration values](#collect-packet-core-configuration-values).
> [!IMPORTANT]
-> For all values in this table, you must use the same values you used when deploying the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device for this site. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+> Where noted, you must use the same values you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
|Value |Field name in Azure portal |
|||
- | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface. |**N2 address (signaling)** (for 5G) or **S1-MME address** (for 4G).|
- | The IP address for the user plane interface on the access network. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |**N2 subnet** and **N3 subnet** (for 5G), or **S1-MME subnet** and **S1-U subnet** (for 4G).|
- | The access subnet default gateway. |**N2 gateway** and **N3 gateway** (for 5G), or **S1-MME gateway** and **S1-U gateway** (for 4G).|
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
+ | The name for the control plane interface on the access network. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface. The name must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. | **N2 interface name** (for 5G) or **S1-MME interface name** (for 4G). |
+ | The name for the user plane interface on the access network. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface. The name must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. | **N3 interface name** (for 5G) or **S1-U interface name** (for 4G). |
+ | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 subnet** and **N3 subnet** (for 5G), or **S1-MME subnet** and **S1-U subnet** (for 4G).|
+ | The access subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 gateway** and **N3 gateway** (for 5G), or **S1-MME gateway** and **S1-U gateway** (for 4G).|
## Collect data network values
Collect all the values in the following table to define the packet core instance
|Value |Field name in Azure portal |
|||
- |The name of the data network. |**Data network**|
- | The IP address for the user plane interface on the data network. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface. You identified the IP address for this interface in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- |The network address of the data subnet in CIDR notation. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi subnet**|
- |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi gateway**|
+ | The name of the data network. |**Data network name**|
+ | The name for the user plane interface on the data network. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface. The name must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device. | **N6 interface name** (for 5G) or **SGi interface name** (for 4G). |
+ | The network address of the data subnet in CIDR notation. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 subnet** (for 5G) or **SGi subnet** (for 4G). |
+ |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 gateway** (for 5G) or **SGi gateway** (for 4G). |
| The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
| The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**|
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
This how-to guide takes you through the process of collecting the information you'll need to deploy a private mobile network through Azure Private 5G Core Preview.
- You can use this information to deploy a private mobile network through the [Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
-- Alternatively, you can use the information to quickly deploy a private mobile network with a single site using an [Azure Resource Manager template (ARM template)](deploy-private-mobile-network-with-site-arm-template.md) In this case, you'll also need to [collect information for the site](collect-required-information-for-a-site.md).
+- Alternatively, you can use the information to quickly deploy a private mobile network with a single site using an [Azure Resource Manager template (ARM template)](deploy-private-mobile-network-with-site-arm-template.md). In this case, you'll also need to [collect information for the site](collect-required-information-for-a-site.md).
## Prerequisites
Each SIM resource represents a physical SIM or eSIM that will be served by the p
As part of creating your private mobile network, you can provision one or more SIMs that will use it. If you decide not to provision SIMs at this point, you can do so after deploying your private mobile network using the instructions in [Provision SIMs](provision-sims-azure-portal.md).
-If you want to provision SIMs as part of deploying your private mobile network, you must choose one of the following provisioning methods:
+If you want to provision SIMs as part of deploying your private mobile network, take the following steps.
-- Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a few SIMs.
-- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims). You'll need to use this option if you're deploying your private mobile network with an ARM template.
+1. Choose a name for a new SIM group to which all of the SIMs you provision will be added. If you need more than one SIM group, you can create additional SIM groups after you've deployed your private mobile network using the instructions in [Manage SIM groups](manage-sim-groups.md).
+
+1. Choose one of the following methods for provisioning your SIMs:
-You must then collect each of the values given in the following table for each SIM resource you want to provision.
+ - Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a few SIMs.
+ - Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims). You'll need to use this option if you're deploying your private mobile network with an ARM template.
- |Value |Field name in Azure portal | JSON file parameter name |
+1. Collect each of the values given in the following table for each SIM resource you want to provision.
+
+ |Value |Field name in Azure portal | JSON file parameter name |
 ||||
 |The name for the SIM resource. The name must only contain alphanumeric characters, dashes, and underscores. |**SIM name**|`simName`|
 |The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. It's a unique numerical value between 19 and 20 digits in length, beginning with 89. |**ICCID**|`integratedCircuitCardIdentifier`|
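Because the table above gives the JSON parameter names, a provisioning file entry can be sketched as follows. This assumes the file is a JSON array of SIM objects and shows only the two parameters visible in this excerpt; the full set of required parameters is defined in the JSON file format section referenced above:

```python
import json

# Sketch of one SIM entry using the JSON parameter names from the table above.
# Both values are placeholders; additional required parameters are defined in
# the linked file-format section.
sim = {
    "simName": "factory-camera-01",  # alphanumerics, dashes, and underscores only
    "integratedCircuitCardIdentifier": "8912345678901234567",  # 19-20 digits, starts with 89
}

assert sim["integratedCircuitCardIdentifier"].startswith("89")
print(json.dumps([sim], indent=2))  # assumed: the file contains a list of SIM entries
```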
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
# Collect the required information for a SIM policy for Azure Private 5G Core Preview
-SIM policies allow you to define different sets of policies and interoperability settings. Each SIM policy can be assigned to a different group of SIMs. This allows you to offer different quality of service (QoS) policy settings to different groups of SIMs on the same data network.
+SIM policies allow you to define different sets of policies and interoperability settings. Each SIM policy can be assigned to a different set of SIMs. This allows you to offer different quality of service (QoS) policy settings to different SIMs on the same data network.
In this how-to guide, we'll collect all the required information to configure a SIM policy.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you do not already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
Once your trials engineer has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
## Choose the core technology type (5G or 4G)
-Choose whether each site in the private mobile network should provide coverage for 5G or 4G user equipment (UEs). A single site cannot support 5G and 4G UEs simultaneously. If you're deploying multiple sites, you can choose to have some sites support 5G UEs and others support 4G UEs.
+Choose whether each site in the private mobile network should provide coverage for 5G or 4G user equipment (UEs). A single site can't support 5G and 4G UEs simultaneously. If you're deploying multiple sites, you can choose to have some sites support 5G UEs and others support 4G UEs.
## Allocate subnets and IP addresses
Azure Private 5G Core supports the following IP address allocation methods for U
- Dynamic. Dynamic IP address allocation automatically assigns a new IP address to a UE each time it connects to the private mobile network.
-- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you will not need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
+- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
You can choose to support one or both of these methods for each site in your private mobile network.
For each site you're deploying, do the following:
- Decide which IP address allocation methods you want to support.
- For each method you want to support, identify an IP address pool from which IP addresses can be allocated to UEs. You'll need to provide each IP address pool in CIDR notation.
- If you decide to support both methods for a particular site, ensure that the IP address pools are of the same size and do not overlap.
+  If you decide to support both methods for a particular site, ensure that the IP address pools are of the same size and don't overlap (a quick validation sketch follows this list).
- Decide whether you want to enable Network Address and Port Translation (NAPT) for the data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.
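Here's a quick way to sanity-check the same-size and no-overlap requirements before deployment, using Python's standard ipaddress module. The example pools are placeholders:

```python
import ipaddress

# Placeholder pools for one site; replace with your chosen CIDR ranges.
dynamic_pool = ipaddress.ip_network("10.45.0.0/24")
static_pool = ipaddress.ip_network("10.45.1.0/24")

# The pools must be the same size...
assert dynamic_pool.prefixlen == static_pool.prefixlen, "pools differ in size"
# ...and must not overlap.
assert not dynamic_pool.overlaps(static_pool), "pools overlap"

print("UE IP address pools are valid for this site")
```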
For each site you're deploying, do the following.
- Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).
- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
+### Ports required for local access
+
+The following table contains the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+
+You should set these up in addition to [the ports required for Azure Stack Edge (ASE)](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements).
+
+| Port | ASE interface | Description|
+|--|--|--|
+| TCP 443 Inbound | Management (LAN) | Access to local monitoring tools (packet core dashboards and distributed tracing). |
+| SCTP 38412 Inbound | Port 5 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
+| SCTP 36412 Inbound | Port 5 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
+| UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
+| All IP traffic | Port 6 (Data network) | Data network user plane data (N6 interface for 5G, SGi for 4G). |
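After opening these ports, you can spot-check the TCP row in this table from a machine on the management network. This rough sketch covers only TCP 443; the SCTP and UDP rows need protocol-specific tooling, and the management IP shown is a placeholder:

```python
import socket

# Placeholder management IP for the ASE device; replace with your own.
ASE_MANAGEMENT_IP = "192.168.1.10"

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# TCP 443 inbound on the management interface (local monitoring tools).
print("443 open:", tcp_port_open(ASE_MANAGEMENT_IP, 443))
```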
## Order and set up your Azure Stack Edge Pro device(s)
Do the following for each site you want to add to your private mobile network. Detailed instructions for how to carry out each step are included in the **Detailed instructions** column where applicable.
private-5g-core Configure Service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-azure-portal.md
In this how-to guide, we'll configure a service using the Azure portal.
In this step, you'll configure basic settings for your new service using the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to configure a service.
    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
1. In the **Resource** menu, select **Services**.
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
Two Azure resources are defined in the template.
- **Subscription:** select the Azure subscription you used to create your private mobile network.
- **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
- **Region:** select **East US**.
- - **Location:** leave this field unchanged.
+ - **Location:** enter *eastus*.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network.
- **Existing Slice Name:** enter **slice-1**.
- **Existing Data Network Name:** enter the name of the data network to which your private mobile network connects.
Two Azure resources are defined in the template.
You can now assign the SIM policy to your SIMs to bring them into service.
-- [Assign a SIM policy to a SIM](provision-sims-azure-portal.md#assign-sim-policies)
+- [Assign a SIM policy to a SIM](manage-existing-sims.md#assign-sim-policies)
private-5g-core Configure Sim Policy Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-sim-policy-azure-portal.md
# Configure a SIM policy for Azure Private 5G Core Preview - Azure portal
-*SIM policies* allow you to define different sets of policies and interoperability settings that can each be assigned to a group of SIMs. You'll need to assign a SIM policy to a SIM before the user equipment (UE) using that SIM can access the private mobile network. In this how-to-guide, you'll learn how to configure a SIM policy.
+*SIM policies* allow you to define different sets of policies and interoperability settings that can each be assigned to one or more SIMs. You'll need to assign a SIM policy to a SIM before the user equipment (UE) using that SIM can access the private mobile network. In this how-to guide, you'll learn how to configure a SIM policy.
## Prerequisites
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
- Identify the name of the Mobile Network resource corresponding to your private mobile network.
- Collect all the configuration values in [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md) for your chosen SIM policy.
-- Decide whether you want to assign this SIM policy to any SIMs as part of configuring it. If you do, you must have provisioned these SIMs following the instructions in [Provision SIMs - Azure portal](provision-sims-azure-portal.md).
+- Decide whether you want to assign this SIM policy to any SIMs as part of configuring it. If you do, you must have already provisioned the SIMs (as described in [Provision SIMs - Azure portal](provision-sims-azure-portal.md)).
## Configure the SIM policy
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to configure a SIM policy.
    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
:::image type="content" source="media/configure-sim-policy-azure-portal/network-scope-configuration.png" alt-text="Screenshot of the Azure portal. It shows the Create a SIM policy screen. The Network scope section is highlighted.":::
-1. If you want to assign this SIM policy to one or more existing provisioned SIMs, select **Next : Assign to SIMs**, and then select your chosen SIMs from the list that appears.
+1. If you want to assign this SIM policy to one or more existing provisioned SIMs, select **Next : Assign to SIMs**, and then select your chosen SIMs from the list that appears. You can choose to search this list based on any field, including SIM name, SIM group, and device type.
:::image type="content" source="media/configure-sim-policy-azure-portal/assign-to-sims-tab.png" alt-text="Screenshot of the Azure portal. It shows the Assign to SIMs tab for a SIM policy.":::
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more *sites
In this step, you'll create the mobile network site resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a site.
    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a mobile network resource.":::
In this step, you'll create the mobile network site resource representing the ph
1. Use the information you collected in [Collect site resource values](collect-required-information-for-a-site.md#collect-mobile-network-site-resource-values) to fill out the fields on the **Basics** configuration tab, and then select **Next : Packet core >**.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-site-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab for a site resource.":::
+ :::image type="content" source="media/create-a-site/create-site-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab for a site resource.":::
1. You'll now see the **Packet core** configuration tab.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-site-packet-core-tab.png" alt-text="Screenshot of the Azure portal showing the Packet core configuration tab for a site resource.":::
+ :::image type="content" source="media/create-a-site/create-site-packet-core-tab.png" alt-text="Screenshot of the Azure portal showing the Packet core configuration tab for a site resource.":::
1. In the **Packet core** section, set the fields as follows:
    - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type** and **Custom location** fields.
- - Leave the **Version** field blank unless you've been instructed to do otherwise by your support representative.
+ - Select the recommended packet core version in the **Version** field.
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
    - Use the same value for both the **N2 subnet** and **N3 subnet** fields (if this site will support 5G user equipment (UEs)).
    - Use the same value for both the **N2 gateway** and **N3 gateway** fields (if this site will support 5G UEs).
    - Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
- - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
+ - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
+
+1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields and select **Submit**. Note that you can only connect the packet core instance to a single data network.
+
+ :::image type="content" source="media/create-a-site/create-site-add-data-network.png" alt-text="Screenshot of the Azure portal showing the Add data network screen.":::
-1. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields in the **Attached data networks** section. Note that you can only connect the packet core instance to a single data network.
1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-site-validation.png" alt-text="Screenshot of the Azure portal showing successful validation of configuration values for a site resource.":::
+ :::image type="content" source="media/create-a-site/create-site-validation.png" alt-text="Screenshot of the Azure portal showing successful validation of configuration values for a site resource.":::
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
In this step, you'll create the mobile network site resource representing the ph
- A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site.
- An **Attached Data Network** resource representing the site's view of the data network.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png":::
+ :::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
## Next steps
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
| **Subscription** | Select the Azure subscription you used to create your private mobile network. |
| **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. |
| **Region** | Select **East US**. |
- | **Location** | Leave this field unchanged. |
+ | **Location** | Enter *eastus*. |
| **Existing Mobile Network Name** | Enter the name of the mobile network resource representing your private mobile network. | | **Existing Data Network Name** | Enter the name of the data network to which your private mobile network connects. | | **Site Name** | Enter a name for your site. |
- | **Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ | **Platform Type** | Ensure **AKS-HCI** is selected. |
+ | **Control Plane Access Interface Name** | Enter the name of the control plane interface on the access network. This must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. |
| **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
- | **User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- | **User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
+ | **User Plane Access Interface Name** | Enter the name of the user plane interface on the access network. This must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. |
+ | **User Plane Access Interface Ip Address** | Leave this field blank. |
| **Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |
| **Access Gateway** | Enter the access subnet default gateway. |
- | **User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- | **User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
+ | **User Plane Data Interface Name** | Enter the name of the user plane interface on the data network. This must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device. |
+ | **User Plane Data Interface Ip Address** | Leave this field blank. |
| **User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. |
| **User Plane Data Interface Gateway** | Enter the data subnet default gateway. |
| **User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
| **User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- | **Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
+ | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
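For reference, the following sketch shows what a completed parameters file for this template might look like. The parameter names are inferred from the field labels above, and all values (interface names, IP addresses, subnets, the NAPT setting, and the custom location ID) are illustrative placeholders; check the template itself for the authoritative parameter definitions.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": { "value": "eastus" },
    "existingMobileNetworkName": { "value": "contoso-network" },
    "existingDataNetworkName": { "value": "internet" },
    "siteName": { "value": "contoso-site-1" },
    "platformType": { "value": "AKS-HCI" },
    "controlPlaneAccessInterfaceName": { "value": "N2" },
    "controlPlaneAccessIpAddress": { "value": "192.168.1.10" },
    "userPlaneAccessInterfaceName": { "value": "N3" },
    "userPlaneAccessInterfaceIpAddress": { "value": "" },
    "accessSubnet": { "value": "192.168.1.0/24" },
    "accessGateway": { "value": "192.168.1.1" },
    "userPlaneDataInterfaceName": { "value": "N6" },
    "userPlaneDataInterfaceIpAddress": { "value": "" },
    "userPlaneDataInterfaceSubnet": { "value": "192.168.2.0/24" },
    "userPlaneDataInterfaceGateway": { "value": "192.168.2.1" },
    "userEquipmentAddressPoolPrefix": { "value": "10.45.0.0/16" },
    "userEquipmentStaticAddressPoolPrefix": { "value": "10.46.0.0/16" },
    "coreNetworkTechnology": { "value": "5GC" },
    "naptEnabled": { "value": "Enabled" },
    "customLocation": { "value": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso-site-1-location" }
  }
}
```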
Four Azure resources are defined in the template.
- A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site.
- An **Attached Data Network** resource representing the site's view of the data network.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png":::
+ :::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
## Next steps
private-5g-core Default Service Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/default-service-sim-policy.md
The following tables provide the settings for the default service and its associ
|Setting |Value |
|--|--|
-|The service name. |*Allow-all-traffic* |
+|The service name. |*Allow_all_traffic* |
|A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. |*253* |
|The Maximum Bit Rate (MBR) for uploads across all service data flows that will be included in data flow policy rules configured on this service. |*2 Gbps* |
|The Maximum Bit Rate (MBR) for downloads across all service data flows that will be included in data flow policy rules configured on this service. |*2 Gbps* |
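For a rough sense of how these settings map onto a resource definition, the snippet below sketches what the default service could look like as a Microsoft.MobileNetwork service resource. The property names and rule contents are assumptions based on the Microsoft.MobileNetwork template schema, not a verbatim copy of what the deployment creates.

```json
{
  "type": "Microsoft.MobileNetwork/mobileNetworks/services",
  "apiVersion": "2022-04-01-preview",
  "name": "contoso-network/Allow_all_traffic",
  "location": "eastus",
  "properties": {
    "servicePrecedence": 253,
    "serviceQosPolicy": {
      "maximumBitRate": {
        "uplink": "2 Gbps",
        "downlink": "2 Gbps"
      }
    },
    "pccRules": [
      {
        "ruleName": "All-traffic",
        "rulePrecedence": 253,
        "serviceDataFlowTemplates": [
          {
            "templateName": "Any-traffic",
            "direction": "Bidirectional",
            "protocol": [ "ip" ],
            "remoteIpList": [ "any" ]
          }
        ]
      }
    ]
  }
}
```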
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- A private mobile network.
- A site.
- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
-- Optionally, one or more SIMs.
+- Optionally, one or more SIMs, and a SIM group.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
The following Azure resources are defined in the template.
- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the user plane interface on the access network.
- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the control plane interface on the access network.
- [**Microsoft.MobileNetwork/mobileNetworks**](/azure/templates/microsoft.mobilenetwork/mobilenetworks): a resource representing the private mobile network as a whole.
-- [**Microsoft.MobileNetwork/sims**](/azure/templates/microsoft.mobilenetwork/sims): a resource representing a physical SIM or eSIM.
+- [**Microsoft.MobileNetwork/simGroups**](/azure/templates/microsoft.mobilenetwork/simGroups): a resource representing a SIM group.
+- [**Microsoft.MobileNetwork/simGroups/sims**](/azure/templates/microsoft.mobilenetwork/simGroups/sims): a resource representing a physical SIM or eSIM.
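To make the parent/child relationship between these two resources concrete, here's a minimal sketch of how a SIM group and one SIM inside it might be declared in a template. The API version, names, and SIM values are illustrative assumptions; see the linked template reference pages for the authoritative schemas.

```json
[
  {
    "type": "Microsoft.MobileNetwork/simGroups",
    "apiVersion": "2022-04-01-preview",
    "name": "contoso-sim-group",
    "location": "eastus",
    "properties": {
      "mobileNetwork": {
        "id": "[resourceId('Microsoft.MobileNetwork/mobileNetworks', 'contoso-network')]"
      }
    }
  },
  {
    "type": "Microsoft.MobileNetwork/simGroups/sims",
    "apiVersion": "2022-04-01-preview",
    "name": "contoso-sim-group/SIM1",
    "dependsOn": [
      "[resourceId('Microsoft.MobileNetwork/simGroups', 'contoso-sim-group')]"
    ],
    "properties": {
      "integratedCircuitCardIdentifier": "8912345678901234566",
      "internationalMobileSubscriberIdentity": "001019990010001",
      "authenticationKey": "00112233445566778899AABBCCDDEEFF",
      "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
      "deviceType": "Cellphone"
    }
  }
]
```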
## Deploy the template
The following Azure resources are defined in the template.
|**Mobile Network Code** | Enter the mobile network code for the private mobile network. |
|**Site Name** | Enter a name for your site. |
|**Service Name** | Leave this field unchanged. |
- |**SIM Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
|**Sim Policy Name** | Leave this field unchanged. |
|**Slice Name** | Leave this field unchanged. |
- |**Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ |**Sim Group Name** | If you want to provision SIMs, enter the name of the SIM group to which the SIMs will be added. Otherwise, leave this field blank. |
+ |**Sim Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
+ | **Platform Type** | Ensure **AKS-HCI** is selected. |
+ |**Control Plane Access Interface Name** | Enter the name of the control plane interface on the access network. This must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. |
|**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
- |**User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- |**User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
+ |**User Plane Access Interface Name** | Enter the name of the user plane interface on the access network. This must match the corresponding virtual network name on port 5 on your Azure Stack Edge Pro device. |
+ | **User Plane Access Interface Ip Address** | Leave this field blank. |
|**Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |
|**Access Gateway** | Enter the access subnet default gateway. |
- |**User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- |**User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
+ |**User Plane Data Interface Name** | Enter the name of the user plane interface on the data network. This must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device. |
+ | **User Plane Data Interface Ip Address** | Leave this field blank. |
|**User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. |
|**User Plane Data Interface Gateway** | Enter the data subnet default gateway. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- |**Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
+ |**Data Network Name** | Enter the name of the data network. |
+ |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
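For the **Sim Resources** field, you'd paste in a JSON array in the same format used by the SIM provisioning articles. A minimal two-SIM sketch (all values are placeholders) might look like this:

```json
[
  {
    "simName": "SIM1",
    "integratedCircuitCardIdentifier": "8912345678901234566",
    "internationalMobileSubscriberIdentity": "001019990010001",
    "authenticationKey": "00112233445566778899AABBCCDDEEFF",
    "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
    "deviceType": "Cellphone"
  },
  {
    "simName": "SIM2",
    "integratedCircuitCardIdentifier": "8922345678901234567",
    "internationalMobileSubscriberIdentity": "001019990010002",
    "authenticationKey": "11112233445566778899AABBCCDDEEFF",
    "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
    "deviceType": "Sensor"
  }
]
```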
The following Azure resources are defined in the template.
- An **Attached Data Network** resource representing the site's view of the data network.
- A **Service** resource representing the default service.
- A **SIM Policy** resource representing the default SIM policy.
- - One or more **SIM** resources representing physical SIMs or eSIMs (if you provisioned any).
+ - A **SIM Group** resource (if you provisioned any SIMs).
:::image type="content" source="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing the resources for a full Azure Private 5G Core deployment." lightbox="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png":::
private-5g-core Distributed Tracing Share Traces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing-share-traces.md
In this step, you'll export the trace from the distributed tracing web GUI and s
You can now upload the trace to the container you created in [Create a storage account and blob container in Azure](#create-a-storage-account-and-blob-container-in-azure).
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Navigate to your Storage account resource.
1. In the **Resource** menu, select **Containers**.
private-5g-core Enable Log Analytics For Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-log-analytics-for-private-5g-core.md
Log Analytics is a tool in the Azure portal used to edit and run log queries wit
- Identify the Kubernetes - Azure Arc resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.
- Ensure you have [Contributor](../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Kubernetes - Azure Arc resource.
-- Ensure your local machine has kubectl access to the Azure Arc-enabled Kubernetes cluster.
+- Ensure your local machine has admin kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires an admin kubeconfig file. Contact your trials engineer for instructions on how to obtain this.
## Create an Azure Monitor extension
In this step, you'll configure and deploy a ConfigMap which will allow Container
`kubectl apply -f 99-azure-monitoring-configmap.yml`
- The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+ The command will return quickly with a message similar to the following: `configmap "container-azm-ms-agentconfig" created`. However, the configuration change can take a few minutes to take effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart, so not all omsagent pods restart at the same time.
## Run a query
In this step, you'll run a query in the Log Analytics workspace to confirm that you can retrieve logs for the packet core instance.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the Log Analytics workspace you used when creating the Azure Monitor extension in [Create an Azure Monitor extension](#create-an-azure-monitor-extension).
1. Select **Logs** from the resource menu.
   :::image type="content" source="media/log-analytics-workspace.png" alt-text="Screenshot of the Azure portal showing a Log Analytics workspace resource. The Logs option is highlighted.":::
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
Private mobile networks provide high performance, low latency, and secure connec
- Complete all of the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.
- Collect all of the information listed in [Collect the required information to deploy a private mobile network - Azure portal](collect-required-information-for-private-mobile-network.md). You may also need to take the following steps based on the decisions you made when collecting this information.
-
+ - If you decided you wanted to provision SIMs using a JSON file, ensure you've prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
  - If you decided you want to use the default service and SIM policy, identify the name of the data network to which your private mobile network will connect.
## Deploy your private mobile network
In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs and/or create the default service and SIM policy.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. In the **Search** bar, type *mobile networks* and then select the **Mobile Networks** service from the results that appear.
   :::image type="content" source="media/mobile-networks-search.png" alt-text="Screenshot of the Azure portal showing a search for the Mobile Networks service.":::
In this step, you'll create the Mobile Network resource representing your privat
1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected in [Collect SIM values](collect-required-information-for-private-mobile-network.md#collect-sim-values).
- - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file.
- - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row.
- If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
+ - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row.
+ - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file.
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
-1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
+1. If you're provisioning SIMs at this point, you'll need to take the following additional steps.
+ 1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
+ 1. Under **Enter SIM group information**, set **SIM group name** to your chosen name for the SIM group to which your SIMs will be added.
1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
In this step, you'll create the Mobile Network resource representing your privat
Select **Go to resource group**, and then check that your new resource group contains the correct **Mobile Network** resource. It may also contain the following, depending on the choices you made during the procedure.
- - One or more **SIM** resources (if you provisioned any).
+ - A **SIM group** resource (if you provisioned SIMs).
- **Service**, **SIM Policy**, **Data Network**, and **Slice** resources (if you decided to use the default service and SIM policy).
- :::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network, SIM, Service, SIM policy, Data Network, and Slice resources.":::
+ :::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network, SIM, SIM group, Service, SIM policy, Data Network, and Slice resources.":::
## Next steps
private-5g-core Key Components Of A Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/key-components-of-a-private-mobile-network.md
The following diagram shows the key resources you'll use to manage your private
- The *mobile network* resource represents the private mobile network as a whole.
- Each *SIM* resource represents a physical SIM or eSIM. The physical SIMs and eSIMs are used by UEs that will be served by the private mobile network.
+- *SIM group* resources serve as containers for SIM resources and allow you to sort SIMs into categories for easier management. Each SIM must be a member of exactly one SIM group; it can't belong to multiple SIM groups. If you only have a small number of SIMs, you may want to add them all to the same SIM group. Alternatively, you can create multiple SIM groups to sort your SIMs. For example, you could categorize your SIMs by their purpose (such as SIMs used by specific UE types like cameras or cellphones), or by their on-site location.
- *SIM policy* resources are a key component of Azure Private 5G Core's customizable policy control, which allows you to provide flexible traffic handling. You can determine exactly how your packet core instance applies quality of service (QoS) characteristics to service data flows (SDFs) to meet your deployment's needs. You can also use policy control to block or limit certain flows.
- Each SIM policy defines a set of policies and interoperability settings, which can each be assigned to a group of SIMs. You'll need to assign a SIM policy to a SIM before the UE using that SIM can access the private mobile network.
+ Each SIM policy defines a set of policies and interoperability settings. You'll need to assign a SIM policy to a SIM before the UE using that SIM can access the private mobile network.
A SIM policy will also reference one or more *services*. Each service is a representation of a set of QoS characteristics that you want to offer to UEs on SDFs that match particular properties, such as their destination, or the protocol used. You can also use services to limit or block particular SDFs based on these properties.
private-5g-core Manage Existing Sims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-existing-sims.md
+
+ Title: Manage existing SIMs - Azure portal
+description: In this how-to guide, learn how to manage existing SIMs in your private mobile network using the Azure portal.
++++ Last updated : 06/16/2022+++
+# Manage existing SIMs for Azure Private 5G Core Preview - Azure portal
+
+*SIM* resources represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, you'll learn how to manage existing SIMs, including how to assign static IP addresses and SIM policies.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
+- Identify the name of the Mobile Network resource corresponding to your private mobile network.
+
+## View existing SIMs
+
+You can view your existing SIMs in the Azure portal.
+
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Search for and select the **Mobile Network** resource representing the private mobile network.
+
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+1. To see a list of all existing SIMs in the private mobile network, select **SIMs** from the **Resource** menu.
+
+ :::image type="content" source="media/manage-existing-sims/sims-list-inline.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/manage-existing-sims/sims-list-enlarged.png":::
+
+1. To see a list of existing SIMs in a particular SIM group, select **SIM groups** from the resource menu, and then select your chosen SIM group from the list.
+
+ :::image type="content" source="media/sim-group-resource.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs in a SIM group." lightbox="media/sim-group-resource-enlarged.png":::
+
+## Assign SIM policies
+
+SIMs need an assigned SIM policy before they can use your private mobile network. You may want to assign a SIM policy to an existing SIM that doesn't already have one, or you may want to change the assigned SIM policy for an existing SIM. For information on configuring SIM policies, see [Configure a SIM policy](configure-sim-policy-azure-portal.md).
+
+To assign a SIM policy to one or more SIMs:
+
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
+1. In the resource menu, select **SIMs**.
+1. You'll see a list of provisioned SIMs in the private mobile network. For each SIM policy you want to assign to one or more SIMs, do the following:
+
+ 1. Tick the checkbox next to the name of each SIM to which you want to assign the SIM policy.
+ 1. Select **Assign SIM policy**.
+ 1. In **Assign SIM policy** on the right, select your chosen SIM policy from the **SIM policy** drop-down menu.
+ 1. Select **Assign SIM policy**.
+
+ :::image type="content" source="media/manage-existing-sims/assign-sim-policy-inline.png" alt-text="Screenshot of the Azure portal. It shows a list of provisioned SIMs and fields to assign a SIM policy." lightbox="media/manage-existing-sims/assign-sim-policy-enlarged.png":::
+
+1. The Azure portal will now begin deploying the configuration change. When the deployment is complete, select **Go to resource** (if you have assigned a SIM policy to a single SIM) or **Go to resource group** (if you have assigned a SIM policy to multiple SIMs).
+
+ - If you assigned a SIM policy to a single SIM, you'll be taken to that SIM resource. Check the **SIM policy** field in the **Management** section to confirm that the correct SIM policy has been assigned successfully.
+ - If you assigned a SIM policy to multiple SIMs, you'll be taken to the resource group containing your private mobile network. Select the **Mobile Network** resource, and then select **SIMs** in the resource menu. Check the **SIM policy** column in the SIMs list to confirm the correct SIM policy has been assigned to your chosen SIMs.
+
+1. Repeat this step for any other SIM policies you want to assign to SIMs.
+
+## Assign static IP addresses
+
+Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart.
+
+If you've configured static IP address allocation for your packet core instance(s), you can assign static IP addresses to the SIMs you've provisioned. If you have multiple sites in your private mobile network, you can assign a different static IP address for each site to the same SIM.
+
+Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant site, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
+
+If you're assigning a static IP address to a SIM, you'll also need the following information.
+
+- The SIM policy to assign to the SIM. You won't be able to set a static IP address for a SIM without also assigning a SIM policy.
+- The name of the data network the SIM will use.
+- The site at which the SIM will use this static IP address.
+
+To assign static IP addresses to SIMs:
+
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
+1. In the resource menu, select **SIMs**.
+1. You'll see a list of provisioned SIMs in the private mobile network. Select each SIM to which you want to assign a static IP address, and then select **Assign Static IPs**.
+
+ :::image type="content" source="media/manage-existing-sims/assign-static-ips.png" alt-text="Screenshot of the Azure portal showing a list of provisioned SIMs. Selected SIMs and the Assign Static IPs button are highlighted.":::
+
+1. In **Assign static IP configurations** on the right, run the following steps for each SIM in turn. If your private mobile network has multiple sites and you want to assign a different static IP address for each site to the same SIM, you'll need to repeat these steps on the same SIM for each IP address.
+
+ 1. Set **SIM name** to your chosen SIM.
+ 1. Set **SIM policy** to the SIM policy you want to assign to this SIM.
+ 1. Set **Slice** to **slice-1**.
+ 1. Set **Data network name** to the name of the data network this SIM will use.
+ 1. Set **Site** to the site at which the SIM will use this static IP address.
+ 1. Set **Static IP** to your chosen IP address.
+ 1. Select **Save static IP configuration**. The SIM will then appear in the list under **Number of pending changes**.
+
+ :::image type="content" source="media/manage-existing-sims/assign-static-ip-configurations.png" alt-text="Screenshot of the Azure portal showing the Assign static IP configurations screen.":::
+
+1. Once you have assigned static IP addresses to all of your chosen SIMs, select **Assign static IP configurations**.
+1. The Azure portal will now begin deploying the configuration change. When the deployment is complete, select **Go to resource** (if you have assigned a static IP address to a single SIM) or **Go to resource group** (if you have assigned static IP addresses to multiple SIMs).
+
+ - If you assigned a static IP address to a single SIM, you'll be taken to that SIM resource. Check the **SIM policy** field in the **Management** section and the list under the **Static IP Configuration** section to confirm that the correct SIM policy and static IP address have been assigned successfully.
+ - If you assigned static IP addresses to multiple SIMs, you'll be taken to the resource group containing your private mobile network. Select the **Mobile Network** resource, and then select **SIMs** in the resource menu. Check the **SIM policy** column in the SIMs list to confirm the correct SIM policy has been assigned to your chosen SIMs. You can then select an individual SIM and check the **Static IP Configuration** section to confirm that the correct static IP address has been assigned to that SIM.
+
+## Delete SIMs
+
+Deleting a SIM will remove it from your private mobile network.
+
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
+1. In the resource menu, select **SIMs**.
+1. Tick the checkbox next to each SIM you want to delete.
+1. Select **Delete**.
+1. Select **Delete** to confirm you want to delete the SIM(s).
+
+## Next steps
+If you need to add more SIMs, you can provision them using the Azure portal or an Azure Resource Manager template (ARM template).
+- [Provision new SIMs - Azure portal](provision-sims-azure-portal.md)
+- [Provision new SIMs - ARM template](provision-sims-arm-template.md)
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
+
+ Title: Manage SIM groups - Azure portal
+
+description: With this how-to guide, learn how to manage SIM groups for Azure Private 5G Core Preview through the Azure portal.
++++ Last updated : 06/16/2022+++
+# Manage SIM groups - Azure portal
+
+*SIM groups* allow you to sort SIMs into categories for easier management. Each SIM must be a member of exactly one SIM group; it can't belong to multiple SIM groups. If you only have a small number of SIMs, you may want to add them all to the same SIM group. Alternatively, you can create multiple SIM groups to sort your SIMs. For example, you could categorize your SIMs by their purpose (such as SIMs used by specific UE types like cameras or cellphones), or by their on-site location. In this how-to guide, you'll learn how to create, delete, and view SIM groups using the Azure portal.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
+- Identify the name of the Mobile Network resource corresponding to your private mobile network.
+
+## View existing SIM groups
+
+You can view your existing SIM groups in the Azure portal.
+
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a SIM group.
+
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+1. In the **Resource** menu, select **SIM groups** to see a list of existing SIM groups.
+
+ :::image type="content" source="media/manage-sim-groups/sim-groups-list.png" alt-text="Screenshot of the Azure portal showing a list of SIM groups. The SIM groups resource menu option is highlighted." :::
+
+## Create a SIM group
+
+You can create new SIM groups in the Azure portal. As part of creating a SIM group, you'll be given the option of provisioning new SIMs to add to your new SIM group. If you want to provision new SIMs, you'll need to [collect values for your SIMs](collect-required-information-for-private-mobile-network.md#collect-sim-values) before you start.
+
+To create a new SIM group:
+
+1. Navigate to the list of SIM groups in your private mobile network, as described in [View existing SIM groups](#view-existing-sim-groups).
+1. Select **Create**.
+1. Do the following on the **Basics** configuration tab.
+
+ - Enter a name for the new SIM group into the **SIM group name** field.
+ - Set **Region** to **East US**.
+ - Select your private mobile network from the **Mobile network** drop-down menu.
+
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab.":::
+
+1. Select **Next: SIMs**.
+1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected for your SIMs.
+
+ - If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
+ - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row.
+ - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file.
+
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-review-create-tab.png" alt-text="Screenshot of the Azure portal showing validated configuration for a SIM group.":::
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to create the SIM group. The Azure portal will display the following confirmation screen when the SIM group has been created.
+
+ :::image type="content" source="media/manage-sim-groups/sim-group-deployment-complete.png" alt-text="Screenshot of the Azure portal. It shows confirmation of the successful creation of a SIM group.":::
+
+1. Select **Go to resource group**, and then select your new SIM group from the list of resources. You'll be shown your new SIM group and any SIMs you've provisioned.
+
+ :::image type="content" source="media/sim-group-resource.png" alt-text="Screenshot of the Azure portal showing a SIM group containing SIMs." lightbox="media/sim-group-resource-enlarged.png" :::
+
+1. At this point, your SIMs will not have any assigned SIM policies and so will not be brought into service. If you want to begin using the SIMs, [assign a SIM policy to them](manage-existing-sims.md#assign-sim-policies). If you've configured static IP address allocation for your packet core instance(s), you may also want to [assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses) to the SIMs you've provisioned.
+
+## Delete a SIM group
+
+You can delete SIM groups through the Azure portal.
+
+1. Navigate to the list of SIM groups in your private mobile network, as described in [View existing SIM groups](#view-existing-sim-groups).
+1. Check the **Number of SIMs** column for the SIM group you want to delete. If there are any SIMs in the SIM group, you'll need to delete the SIMs first. To delete the SIMs:
+
+ 1. Select the relevant SIM group.
+ 1. Tick the checkboxes next to all of the SIMs in the SIM group.
+ 1. Select **Delete** from the **Command** bar.
+ 1. In the pop-up that appears, select **Delete** to confirm you want to delete the SIMs.
+
+1. Once you've confirmed the SIM group is empty, tick the checkbox next to it in the list of SIM groups.
+1. Select **Delete** from the **Command** bar.
+1. In the pop-up that appears, select **Delete** to confirm you want to delete the SIM group.
+
+## Next steps
+Learn more about how to manage the SIMs in your SIM groups.
+- [Manage existing SIMs - Azure portal](manage-existing-sims.md)
+
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
Each service includes:
### SIM policies
-*SIM policies* let you define different sets of policies and interoperability settings that can each be assigned to a group of SIMs. You'll need to assign a SIM to a SIM policy before the SIM can use the private mobile network.
+*SIM policies* let you define different sets of policies and interoperability settings that can each be assigned to one or more SIMs. You'll need to assign a SIM policy to a SIM before the UE using that SIM can access the private mobile network.
Each SIM policy includes:
-- Top-level settings that are applied to every SIM assigned to the SIM policy. These settings include the UE aggregated maximum bit rate (UE-AMBR) for downloads and uploads, and the RAT/Frequency Priority ID (RFSP ID).
-- A *network scope*, which defines how SIMs assigned to this SIM policy will connect to the data network. You can use the network scope to determine the following settings:
+- Top-level settings that are applied to every SIM using the SIM policy. These settings include the UE aggregated maximum bit rate (UE-AMBR) for downloads and uploads, and the RAT/Frequency Priority ID (RFSP ID).
+- A *network scope*, which defines how SIMs using this SIM policy will connect to the data network. You can use the network scope to determine the following settings:
  - The services (as described in [Services](#services)) offered to SIMs on this data network.
  - A set of QoS characteristics that will be used to form the default QoS flow for PDU sessions (or EPS bearer for PDN connections in 4G networks).
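To illustrate how the top-level settings and network scope fit together, the sketch below shows roughly how a SIM policy could be expressed as a Microsoft.MobileNetwork resource. The property names and resource IDs are assumptions based on the Microsoft.MobileNetwork template schema and are illustrative only.

```json
{
  "type": "Microsoft.MobileNetwork/mobileNetworks/simPolicies",
  "apiVersion": "2022-04-01-preview",
  "name": "contoso-network/SimPolicy1",
  "location": "eastus",
  "properties": {
    "ueAmbr": {
      "uplink": "500 Mbps",
      "downlink": "1 Gbps"
    },
    "defaultSlice": {
      "id": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1"
    },
    "sliceConfigurations": [
      {
        "slice": {
          "id": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1"
        },
        "defaultDataNetwork": {
          "id": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/dataNetworks/internet"
        },
        "dataNetworkConfigurations": [
          {
            "dataNetwork": {
              "id": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/dataNetworks/internet"
            },
            "sessionAmbr": {
              "uplink": "500 Mbps",
              "downlink": "1 Gbps"
            },
            "allowedServices": [
              {
                "id": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/services/Allow_all_traffic"
              }
            ]
          }
        ]
      }
    ]
  }
}
```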
When you first come to design the policy control configuration for your own priv
1. Learn about each of the available options for a service in [Collect the required information for a service](collect-required-information-for-service.md). Compare these options with the requirements of the SDFs to decide on the services you'll need.
1. Collect the appropriate policy configuration values you'll need for each service, using the information in [Collect the required information for a service](collect-required-information-for-service.md).
1. Configure each of your services as described in [Configure a service - Azure portal](configure-service-azure-portal.md).
-1. Group your SIMs according to the services they'll require. For each group, configure a SIM policy and assign it to the correct SIMs by carrying out the following procedures:
+1. Categorize your SIMs according to the services they'll require. For each category, configure a SIM policy and assign it to the correct SIMs by carrying out the following procedures:
   1. [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md)
   1. [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core is able to leverage this low latency with the security and
Azure Private 5G Core instantiates a single private mobile network distributed across one or more enterprise sites around the world. Each site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). A packet core instance authenticates end devices and aggregates their data traffic over 5G Standalone wireless and access technologies. Each packet core instance includes the following components:
-- A high performance and highly programmable 5G User Plane Function (UPF).
+- A high performance (25 Gbps rated load) and highly programmable 5G User Plane Function (UPF).
- Core control plane functions including policy and subscriber management.
- A portfolio of service-based architecture elements.
- Management components for network monitoring.
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
Title: Provision SIMs - ARM template
+ Title: Provision new SIMs - ARM template
-description: This how-to guide shows how to provision SIMs using an Azure Resource Manager (ARM) template.
+description: This how-to guide shows how to provision new SIMs using an Azure Resource Manager (ARM) template.
Last updated 03/21/2022
-# Provision SIMs for Azure Private 5G Core Preview - ARM template
+# Provision new SIMs for Azure Private 5G Core Preview - ARM template
*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, you'll learn how to provision new SIMs for an existing private mobile network using an Azure Resource Manager template (ARM template).
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-sims%2Fazuredeploy.json)
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-proxy-sims%2Fazuredeploy.json)
## Prerequisites
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
- Identify the name of the Mobile Network resource corresponding to your private mobile network and the resource group containing it.
+- Choose a name for the new SIM group to which your SIMs will be added.
+- Identify the SIM policy you want to assign to the SIMs you're provisioning. You must have already created this SIM policy using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md).
## Collect the required information for your SIMs
To begin, collect the values in the following table for each SIM you want to pro
## Prepare an array for your SIMs
-Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create an array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs.
+Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create an array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs. If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
```json
[
Use the information you collected in [Collect the required information for your
  },
  {
    "simName": "SIM2",
- "simProfileName": "profile2",
"integratedCircuitCardIdentifier": "8922345678901234567", "internationalMobileSubscriberIdentity": "001019990010002", "authenticationKey": "11112233445566778899AABBCCDDEEFF",
Use the information you collected in [Collect the required information for your
## Review the template
-<!--
-Need to confirm whether the following link is correct.
>
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-provision-proxy-sims).
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-provision-sims).
-The template defines one or more [**Microsoft.MobileNetwork/sims**](/azure/templates/microsoft.mobilenetwork/sims) resources, each of which represents a physical SIM or eSIM.
+The following Azure resources are defined in the template.
+
+- [**Microsoft.MobileNetwork/simGroups**](/azure/templates/microsoft.mobilenetwork/simGroups): a resource representing a SIM group.
+- [**Microsoft.MobileNetwork/simGroups/sims**](/azure/templates/microsoft.mobilenetwork/simGroups/sims): a resource representing a physical SIM or eSIM.
## Deploy the template
1. Select the following link to sign in to Azure and open a template.
- [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-sims%2Fazuredeploy.json)
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-proxy-sims%2Fazuredeploy.json)
-1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites). <!-- We should also add a screenshot of a filled out set of parameters. -->
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
    - **Subscription:** select the Azure subscription you used to create your private mobile network.
    - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
    - **Region:** select **East US**.
- - **Location:** leave this field unchanged.
+ - **Location:** enter *eastus*.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network.
- - **SIM resources:** paste in the array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
+ - **Existing Sim Policy Name:** enter the name of the SIM policy you want to assign to the SIMs.
+ - **Sim Group Name:** enter the name for the new SIM group.
+ - **Sim Resources:** paste in the array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
:::image type="content" source="media/provision-sims-arm-template/sims-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the SIMs ARM template.":::
The template defines one or more [**Microsoft.MobileNetwork/sims**](/azure/templ
1. Once your configuration has been validated, you can select **Create** to provision your SIMs. The Azure portal will display a confirmation screen when the SIMs have been provisioned.
-
## Review deployed resources
1. Select **Go to resource group**.
   :::image type="content" source="media/template-deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
-1. Confirm that your SIMs have been created in the resource group.
+1. Confirm that the **SIM Group** resource has been created in the resource group.
- :::image type="content" source="media/provision-sims-arm-template/sims-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing newly provisioned SIMs.":::
+ :::image type="content" source="media/provision-sims-arm-template/sims-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing a newly created SIM group.":::
-## Next steps
+1. Select the **SIM Group** resource and confirm that all of your SIMs have been provisioned correctly.
-You'll need to assign a SIM policy to your SIMs to bring them into service.
-<!-- we may want to update the template to include SIM policies, or update the link below to reference the ARM template procedure rather than the portal -->
+ :::image type="content" source="media/provision-sims-arm-template/sim-group-resource-inline.png" alt-text="Screenshot of the Azure portal showing a SIM group resource containing SIMs." lightbox="media/provision-sims-arm-template/sim-group-resource-enlarged.png":::
+
+## Next steps
-- [Configure a SIM policy for Azure Private 5G Core Preview - Azure portal](configure-sim-policy-azure-portal.md)
-- [Assign a SIM policy to a SIM](provision-sims-azure-portal.md#assign-sim-policies)
+If you've configured static IP address allocation for your packet core instance(s), you may want to [assign static IP addresses to the SIMs you've provisioned](manage-existing-sims.md#assign-static-ip-addresses).
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
Title: Provision SIMs using Azure portal
+ Title: Provision new SIMs - Azure portal
description: In this how-to guide, learn how to provision new SIMs for an existing private mobile network using the Azure portal.
Last updated 01/16/2022
-# Provision SIMs for Azure Private 5G Core Preview - Azure portal
+# Provision new SIMs for Azure Private 5G Core Preview - Azure portal
-*SIM* resources represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network. You can also choose to assign static IP addresses and a SIM policy to the SIMs you provision.
+*SIM* resources represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network.
## Prerequisites
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
+
- Identify the name of the Mobile Network resource corresponding to your private mobile network.
+
- Decide on the method you'll use to provision SIMs. You can choose from the following:
+
  - Manually entering each provisioning value into fields in the Azure portal. This option is best if you're provisioning a few SIMs.
+
  - Importing a JSON file containing values for one or more SIM resources. This option is best if you're provisioning a large number of SIMs. You'll need a good JSON editor if you want to use this option.
-- For each SIM you want to provision, decide whether you want to assign a SIM policy to it. If you do, you must have already created the relevant SIM policies using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md). SIMs can't access your private mobile network unless they have an assigned SIM policy.
-- If you've configured static IP address allocation for your packet core instance(s), decide whether you want to assign a static IP address to any of the SIMs you're provisioning. If you have multiple sites in your private mobile network, you can assign a different static IP address for each site to the same SIM.
- Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant site, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
+- Decide on the SIM group to which you want to add your SIMs. You can create a new SIM group when provisioning your SIMs, or you can choose an existing SIM group. See [Manage SIM groups - Azure portal](manage-sim-groups.md) for information on viewing your existing SIM groups.
+
+ - If you're manually entering provisioning values, you'll add each SIM to a SIM group individually.
+
+ - If you're using a JSON file, all SIMs in the same JSON file will be added to the same SIM group.
+
+- For each SIM you want to provision, decide whether you want to assign a SIM policy to it. If you do, you must have already created the relevant SIM policies using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md). SIMs can't access your private mobile network unless they have an assigned SIM policy.
- If you're assigning a static IP address to a SIM, you'll also need the following information.
+ - If you're manually entering provisioning values, you'll need the name of the SIM policy.
- - The SIM policy to assign to the SIM. You won't be able to set a static IP address for a SIM without also assigning a SIM policy.
- - The name of the data network the SIM will use.
- - The site at which the SIM will use this static IP address.
+ - If you're using a JSON file, you'll need the full resource ID of the SIM policy.
## Collect the required information for your SIMs
To begin, collect the values in the following table for each SIM you want to pro
| The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | **Ki** | `authenticationKey` |
| The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | **Opc** | `operatorKeyCode` |
| The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | **Device type** | `deviceType` |
+| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. | **SIM policy** | `simPolicyId` |
## Create the JSON file

Only carry out this step if you decided in [Prerequisites](#prerequisites) to use a JSON file to provision your SIMs. Otherwise, you can skip to [Begin provisioning the SIMs in the Azure portal](#begin-provisioning-the-sims-in-the-azure-portal).
-Prepare the JSON file using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). This example file shows the required format. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`).
+Prepare the JSON file using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). This example file shows the required format. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`). If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
```json
[
Prepare the JSON file using the information you collected for your SIMs in [Coll
"internationalMobileSubscriberIdentity": "001019990010001", "authenticationKey": "00112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
- "deviceType": "Cellphone"
+ "deviceType": "Cellphone",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1"
},
{
"simName": "SIM2",
- "simProfileName": "profile2",
"integratedCircuitCardIdentifier": "8922345678901234567", "internationalMobileSubscriberIdentity": "001019990010002", "authenticationKey": "11112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
- "deviceType": "Sensor"
+ "deviceType": "Sensor",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2"
}
]
```
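Before uploading the file, it's worth confirming that it parses as valid JSON, because a malformed file will grey out the **Add** button later. Here's a minimal sketch using built-in PowerShell; the file name *sims.json* is an assumption:

```azurepowershell-interactive
# Throws a terminating error if sims.json isn't well-formed JSON.
$sims = Get-Content -Raw -Path 'sims.json' | ConvertFrom-Json
Write-Output "File defines $($sims.Count) SIM(s)."
```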
Prepare the JSON file using the information you collected for your SIMs in [Coll
You'll now begin the SIM provisioning process through the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to provision SIMs.

    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
-1. Select **Add SIMs**.
+1. Select **View SIMs**.
- :::image type="content" source="media/provision-sims-azure-portal/add-sims.png" alt-text="Screenshot of the Azure portal showing the Add SIMs button on a Mobile Network resource":::
+ :::image type="content" source="media/provision-sims-azure-portal/view-sims.png" alt-text="Screenshot of the Azure portal showing the View SIMs button on a Mobile Network resource.":::
1. Select **Create** and then select your chosen provisioning method from the options that appear.
You'll now begin the SIM provisioning process through the Azure portal.
In this step, you'll enter provisioning values for your SIMs directly into the Azure portal.
-1. In **Add SIMs** on the right, use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to fill out the fields for one of the SIMs you want to provision.
+1. In **Add SIMs** on the right, use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to fill out the fields for one of the SIMs you want to provision. You can set **SIM policy** to **None** if you don't want to assign a SIM policy to the SIM at this point.
+1. Set the **SIM group** field to an existing SIM group, or select **Create new** to create a new one.
1. Select **Add**.
1. The Azure portal will now begin deploying the SIM. When the deployment is complete, select **Go to resource**.
In this step, you'll enter provisioning values for your SIMs directly into the A
In this step, you'll provision SIMs using a JSON file.

1. In **Add SIMs** on the right, select **Browse** and then select the JSON file you created in [Create the JSON file](#create-the-json-file).
+1. Set the **SIM group** field to an existing SIM group, or select **Create new** to create a new one.
1. Select **Add**. If the **Add** button is greyed out, check your JSON file to confirm that it's correctly formatted.
1. The Azure portal will now begin deploying the SIMs. When the deployment is complete, select **Go to resource group**.

    :::image type="content" source="media/provision-sims-azure-portal/multiple-sim-resource-deployment.png" alt-text="Screenshot of the Azure portal. It shows a completed deployment of SIM resources through a J S O N file and the Go to resource group button.":::
-1. The Azure portal will display the resource group containing your private mobile network. Select the **Mobile Network** resource.
-1. In the resource menu, select **SIMs**.
+1. Select the **SIM Group** resource to which you added your SIMs.
1. Check the list of SIMs to ensure your new SIMs are present and provisioned correctly.

    :::image type="content" source="media/provision-sims-azure-portal/sims-list.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/provision-sims-azure-portal/sims-list.png":::
-## Assign static IP addresses
-
-In this step, you'll assign static IP addresses to your SIMs. You can skip this step if you don't want to assign any static IP addresses.
-
-1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
-
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
-
-1. In the resource menu, select **SIMs**.
-1. You'll see a list of provisioned SIMs in the private mobile network. Select each SIM to which you want to assign a static IP address, and then select **Assign Static IPs**.
-
- :::image type="content" source="media/provision-sims-azure-portal/assign-static-ips.png" alt-text="Screenshot of the Azure portal showing a list of provisioned SIMs. Selected SIMs and the Assign Static I Ps button are highlighted.":::
-
-1. In **Assign static IP configurations** on the right, run the following steps for each SIM in turn. If your private mobile network has multiple sites and you want to assign a different static IP address for each site to the same SIM, you'll need to repeat these steps on the same SIM for each IP address.
-
- 1. Set **SIM name** your chosen SIM.
- 1. Set **SIM policy** to the SIM policy you want to assign to this SIM.
- 1. Set **Slice** to **slice-1**.
- 1. Set **Data network name** to the name of the data network this SIM will use.
- 1. Set **Site** to the site at which the SIM will use this static IP address.
- 1. Set **Static IP** to your chosen IP address.
- 1. Select **Save static IP configuration**. The SIM will then appear in the list under **Number of pending changes**.
-
- :::image type="content" source="media/provision-sims-azure-portal/assign-static-ip-configurations.png" alt-text="Screenshot of the Azure portal showing the Assign static I P configurations screen.":::
-
-1. Once you have assigned static IP addresses to all of your chosen SIMs, select **Assign static IP configurations**.
-1. The Azure portal will now begin deploying the configuration change. When the deployment is complete, select **Go to resource** (if you have assigned a static IP address to a single SIM) or **Go to resource group** (if you have assigned static IP addresses to multiple SIMs).
-
- - If you assigned a static IP address to a single SIM, you'll be taken to that SIM resource. Check the **SIM policy** field in the **Management** section and the list under the **Static IP Configuration** section to confirm that the correct SIM policy and static IP address have been assigned successfully.
- - If you assigned a SIM policy to multiple SIMs, you'll be taken to the resource group containing your private mobile network. Select the **Mobile Network** resource, and then select **SIMs** in the resource menu. Check the **SIM policy** column in the SIMs list to confirm the correct SIM policy has been assigned to your chosen SIMs. You can then select an individual SIM and check the **Static IP Configuration** section to confirm that the correct static IP address has been assigned to that SIM.
-
-## Assign SIM policies
-
-In this step, you'll assign SIM policies to your SIMs. SIMs need an assigned SIM policy before they can use your private mobile network. You can skip this step and come back to it later if you don't want the SIMs to be able to access the private mobile network straight away. You can also skip this step for any SIMs to which you've assigned a static IP address, as these SIMs will already have an assigned SIM policy.
-
-1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
-
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
-
-1. In the resource menu, select **SIMs**.
-1. You'll see a list of provisioned SIMs in the private mobile network. For each SIM policy you want to assign to one or more SIMs, do the following:
-
- 1. Tick the checkbox next to the name of each SIM to which you want to assign the SIM policy.
- 1. Select **Assign SIM policy**.
- 1. In **Assign SIM policy** on the right, select your chosen SIM policy from the **SIM policy** drop-down menu.
- 1. Select **Assign SIM policy**.
-
- :::image type="content" source="media/provision-sims-azure-portal/assign-sim-policy.png" alt-text="Screenshot of the Azure portal. It shows a list of provisioned SIMs and fields to assign a SIM policy." lightbox="media/provision-sims-azure-portal/assign-sim-policy.png":::
-
-1. The Azure portal will now begin deploying the configuration change. When the deployment is complete, select **Go to resource** (if you have assigned a SIM policy to a single SIM) or **Go to resource group** (if you have assigned a SIM policy to multiple SIMs).
-
- - If you assigned a SIM policy to a single SIM, you'll be taken to that SIM resource. Check the **SIM policy** field in the **Management** section to confirm that the correct SIM policy has been assigned successfully.
- - If you assigned a SIM policy to multiple SIMs, you'll be taken to the resource group containing your private mobile network. Select the **Mobile Network** resource, and then select **SIMs** in the resource menu. Check the **SIM policy** column in the SIMs list to confirm the correct SIM policy has been assigned to your chosen SIMs.
-
-1. Repeat this step for any other SIM policies you want to assign to SIMs.
## Next steps

-- [Learn more about policy control](policy-control.md)
+If you've configured static IP address allocation for your packet core instance(s), you may want to [assign static IP addresses to the SIMs you've provisioned](manage-existing-sims.md#assign-static-ip-addresses).
private-5g-core Tutorial Create Example Set Of Policy Control Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/tutorial-create-example-set-of-policy-control-configuration.md
In this step, we'll create a service that filters packets based on their protoco
To create the service:
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the Mobile Network resource representing your private mobile network.

    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal showing the results for a search for a Mobile Network resource.":::
In this step, we will provision two SIMs and assign a SIM policy to each one. Th
:::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal showing the results for a search for a Mobile Network resource.":::
-1. In the **Resource** menu, select **Add SIMs**.
+1. Select **View SIMs**.
- :::image type="content" source="media/provision-sims-azure-portal/add-sims.png" alt-text="Screenshot of the Azure portal showing the Add SIMs button on a Mobile Network resource":::
+ :::image type="content" source="media/provision-sims-azure-portal/view-sims.png" alt-text="Screenshot of the Azure portal showing the View SIMs button on a Mobile Network resource.":::
1. Select **Create** and then **Upload JSON from file**.

    :::image type="content" source="media/provision-sims-azure-portal/create-new-sim.png" alt-text="Screenshot of the Azure portal showing the Create button and its options - Upload J S O N from file and Add manually.":::

1. Select **Browse** and then select the JSON file you created at the start of this step.
+1. Under **SIM group name**, select **Create new** and then enter **SIMGroup1** into the field that appears.
1. Select **Add**.
-1. The Azure portal will now begin deploying the SIMs. When the deployment is complete, select **Go to resource group**.
+1. The Azure portal will now begin deploying the SIM group and SIMs. When the deployment is complete, select **Go to resource group**.
- :::image type="content" source="media/provision-sims-azure-portal/multiple-sim-resource-deployment.png" alt-text="Screenshot of the Azure portal showing a completed deployment of SIM resources through a J S O N file and the Go to resource button.":::
+ :::image type="content" source="media/provision-sims-azure-portal/multiple-sim-resource-deployment.png" alt-text="Screenshot of the Azure portal showing a completed deployment of SIM group and SIM resources through a J S O N file. The Go to resource button is highlighted.":::
-1. In the **Resource group** that appears, select the **Mobile Network** resource representing your private mobile network.
-1. In the **Resource** menu, select **SIMs**.
-
- :::image type="content" source="media/tutorial-create-example-set-of-policy-control-configuration/sims-resource-menu-option.png" alt-text="Screenshot of the Azure portal. The SIMs option in the resource menu for a private mobile network is highlighted.":::
-
-1. Your new **SIM1** and **SIM2** SIM resources are shown in the list.
+1. In the **Resource group** that appears, select the **SIMGroup1** resource you've just created. You'll then see your new SIMs in the SIM group.
- :::image type="content" source="media/tutorial-create-example-set-of-policy-control-configuration/sims-list.png" alt-text="Screenshot of the Azure portal. It shows the SIMs currently provisioned for the private mobile network." lightbox="media/tutorial-create-example-set-of-policy-control-configuration/sims-list.png":::
+ :::image type="content" source="media/tutorial-create-example-set-of-policy-control-configuration/sims-list.png" alt-text="Screenshot of the Azure portal. It shows a SIM group containing two SIMs." lightbox="media/tutorial-create-example-set-of-policy-control-configuration/sims-list.png":::
1. Tick the checkbox next to **SIM1**.
1. In the **Command** bar, select **Assign SIM policy**.
In this step, we will provision two SIMs and assign a SIM policy to each one. Th
1. Once the deployment is complete, select **Go to resource**.
1. Check the **SIM policy** field in the **Management** section to confirm **sim-policy-1** has been successfully assigned.
- :::image type="content" source="media/tutorial-create-example-set-of-policy-control-configuration/sim-with-sim-policy.png" alt-text="Screenshot of the Azure portal showing a SIM resource. The SIM policy field is highlighted in the Management section." lightbox="media/tutorial-create-example-set-of-policy-control-configuration/sim-with-sim-policy.png":::
+ :::image type="content" source="media/tutorial-create-example-set-of-policy-control-configuration/sim-with-sim-policy.png" alt-text="Screenshot of the Azure portal showing a SIM resource. The SIM policy field is highlighted in the Management section." lightbox="media/tutorial-create-example-set-of-policy-control-configuration/sim-with-sim-policy-enlarged.png":::
-1. Search for and select the Mobile Network resource representing your private mobile network.
-1. In the **Resource** menu, select **SIMs**.
+1. In the **SIM group** field under **Essentials**, select **SIMGroup1** to return to the SIM group.
1. Tick the checkbox next to **SIM2**.
1. In the **Command** bar, select **Assign SIM policy**.
1. Under **Assign SIM policy** on the right, set the **SIM policy** field to **sim-policy-2**.
You have now provisioned two SIMs and assigned each of them a different SIM poli
You can now delete each of the resources we've created during this tutorial.

1. Search for and select the Mobile Network resource representing your private mobile network.
-1. In the **Resource** menu, select **SIMs**.
-1. Tick the checkboxes next to **SIM1** and **SIM2**, and then select **Delete** from the **Command** bar.
-1. Select **Delete** to confirm your choice.
-1. Once the SIMs have been deleted, select **SIM policies** from the **Resource** menu.
+1. In the **Resource** menu, select **SIM groups**.
+1. Select **SIMGroup1**.
+1. Tick the checkboxes next to **SIM1** and **SIM2**, and then select **Delete** from the **Command** bar.
+1. Select **Delete** to confirm your choice.
+1. Once the SIMs have been deleted, select the name of your private mobile network from the breadcrumbs in the top left corner to return to the list of SIM groups.
+1. Tick the checkbox next to **SIMGroup1**, and then select **Delete** from the **Command** bar.
+1. Select **Delete** to confirm your choice.
+1. Once the SIM group has been deleted, select **SIM policies** from the **Resource** menu.
1. Tick the checkboxes next to **sim-policy-1** and **sim-policy-2**, and then select **Delete** from the **Command** bar.
1. Select **Delete** to confirm your choice.
1. Once the SIM policies have been deleted, select **Services** from the **Resource** menu.
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
Each Azure Private 5G Core Preview site contains a packet core instance, which i
## Prerequisites

-- Contact your Microsoft assigned trials engineer. They'll guide you through the upgrade process and provide you with the required information, including the amount of time you'll need to allow for the upgrade to complete and the new software version number.
+- Contact your Microsoft assigned trials engineer. They'll guide you through the upgrade process and provide you with the required information, including the amount of time you'll need to allow for the upgrade to complete.
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-## Upgrade the packet core instance
+## View the current packet core version
-Carry out the following steps to upgrade the packet core instance.
+To check which version your packet core instance is currently running, and whether there is a newer version available:
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
1. Search for and select the **Mobile Network** resource representing the private mobile network.

    :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::

1. In the **Resource** menu, select **Sites**.
-1. Select the site containing the packet core instance you want to upgrade.
-1. Under the **Network function** heading, select the name of the packet core control plane resource shown next to **Packet Core**.
+1. Select the site containing the packet core instance you're interested in.
+1. Under the **Network function** heading, select the name of the **Packet Core Control Plane** resource shown next to **Packet Core**.
:::image type="content" source="media/upgrade-packet-core-azure-portal/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
+1. Check the **Version** field under the **Configuration** heading to view the current software version. If there's an attention icon next to this field, a new packet core version is available. If there's a warning that you're running an unsupported version, we advise that you upgrade your packet core instance to a version that Microsoft currently supports.
+
+ :::image type="content" source="media/upgrade-packet-core-azure-portal/packet-core-control-plane-overview.png" alt-text="Screenshot of the Azure portal showing the Packet Core Control Plane resource overview." lightbox="media/upgrade-packet-core-azure-portal/packet-core-control-plane-overview.png":::
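You can also read the version from a command line. Here's a minimal sketch that queries the resource directly through Azure Resource Manager; the subscription, resource group, resource name, and API version are assumptions you'll need to replace or confirm:

```azurepowershell-interactive
# Read the packet core control plane resource directly from ARM and print
# its reported version. All bracketed values and the api-version are placeholders.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/<name>" +
        "?api-version=2022-04-01-preview"
$response = Invoke-AzRestMethod -Path $path -Method GET
($response.Content | ConvertFrom-Json).properties.version
```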
+
+## Upgrade the packet core instance
+
+1. If you haven't already, navigate to the **Packet Core Control Plane** resource that you're interested in upgrading.
1. Select **Upgrade version**.

    :::image type="content" source="media/upgrade-packet-core-azure-portal/upgrade-version.png" alt-text="Screenshot of the Azure portal showing the Upgrade version option.":::
-1. Under **Upgrade packet core version**, fill out the **New version** field with the string for the new software version provided to you by your trials engineer.
+1. From the **New version** dropdown list, select the recommended packet core version.
+ > [!IMPORTANT]
+ > You can upgrade or downgrade to any version of packet core. However, distributed tracing and packet core data might be lost when you move in or out of unsupported packet core versions.
:::image type="content" source="media/upgrade-packet-core-azure-portal/upgrade-packet-core-version.png" alt-text="Screenshot of the Azure portal showing the New version field on the Upgrade packet core version screen.":::
Carry out the following steps to upgrade the packet core instance.
1. Check the **Version** field under the **Configuration** heading to confirm that it displays the new software version.

## Next steps
+
You may want to use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally after the upgrade.

- [Monitor Azure Private 5G Core with Log Analytics](monitor-private-5g-core-with-log-analytics.md)
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
For all [system supported file types](#file-types-supported-for-scanning), if th
Nested data, or nested schema parsing, isn't supported in SQL. A column with nested data will be reported and classified as is, and subdata won't be parsed.
-## Sampling within a file
+## Sampling data for classification
In Microsoft Purview Data Map terminology,

- L1 scan: Extracts basic information and metadata like file name, size, and fully qualified name
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
$remotepeer = @{
ResourceGroupName = 'myRouteServerRG'
PeerName = 'myNVA'
}
-Get-AzRouteServerPeerAdvertisedRoute @routeserver
+Get-AzRouteServerPeerAdvertisedRoute @remotepeer
```

Use the [Get-AzRouteServerPeerLearnedRoute](/powershell/module/az.network/get-azrouteserverpeerlearnedroute) cmdlet to view routes learned by the Azure Route Server.

```azurepowershell-interactive
-$routeserver = @{
+$remotepeer = @{
RouteServerName = 'myRouteServer'
ResourceGroupName = 'myRouteServerRG'
- AllowBranchToBranchTraffic
+ PeerName = 'myNVA'
}
-Get-AzRouteServerPeerLearnedRoute @routeserver
+Get-AzRouteServerPeerLearnedRoute @remotepeer
```

## Clean up resources
If you no longer need the Azure Route Server, use the first command to remove th
1. Remove the BGP peering between Azure Route Server and an NVA with [Remove-AzRouteServerPeer](/powershell/module/az.network/remove-azrouteserverpeer):

```azurepowershell-interactive
-$peer = @{
+$remotepeer = @{
PeerName = 'myNVA'
RouteServerName = 'myRouteServer'
ResourceGroupName = 'myRouteServerRG'
}
-Remove-AzRouteServerPeer @peer
+Remove-AzRouteServerPeer @remotepeer
```

2. Remove the Azure Route Server with [Remove-AzRouteServer](/powershell/module/az.network/remove-azrouteserver):
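The command body for this step isn't shown above; here's a minimal sketch of a likely completion, reusing the names from step 1:

```azurepowershell-interactive
# Remove the Route Server itself once its peerings are deleted.
# Names reuse the earlier examples and are assumptions for your environment.
Remove-AzRouteServer -RouteServerName 'myRouteServer' -ResourceGroupName 'myRouteServerRG'
```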
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Title: Frequently asked questions about Azure Route Server
+ Title: Azure Route Server frequently asked questions (FAQ)
description: Find answers to frequently asked questions about Azure Route Server. Previously updated : 07/26/2022 Last updated : 07/31/2022
-# Azure Route Server FAQ
+# Azure Route Server frequently asked questions (FAQ)
## What is Azure Route Server?
route-server Tutorial Configure Route Server With Quagga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md
Previously updated : 08/23/2021 Last updated : 08/01/2022

# Tutorial: Configure peering between Azure Route Server and Quagga network virtual appliance
-This tutorial shows you how to deploy an Azure Route Server into a virtual network and establish a BGP peering connection with a Quagga network virtual appliance. You'll deploy a virtual network with five subnets. One subnet will be dedicated to the Azure Route Server and another subnet dedicated to the Quagga NVA. The Quagga NVA will be configured to exchange routes with the Route Server. Lastly, you'll test to make sure routes are properly exchanged on the Azure Route Server and Quagga NVA.
+This tutorial shows you how to deploy an Azure Route Server into a virtual network and establish a BGP peering connection with a Quagga network virtual appliance. You'll deploy a virtual network with four subnets. One subnet will be dedicated to the Azure Route Server and another subnet dedicated to the Quagga NVA. The Quagga NVA will be configured to exchange routes with the Route Server. Lastly, you'll test to make sure routes are properly exchanged on the Azure Route Server and Quagga NVA.
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Create virtual network with five subnets
+> * Create a virtual network with four subnets
> * Deploy an Azure Route Server
-> * Deploy virtual machine running Quagga
+> * Deploy a virtual machine running Quagga
> * Configure Route Server peering
> * Check learned routes
If you don't have an Azure subscription, create a [free account](https://azure.m
* An Azure subscription
+## Sign in to Azure
+
+Sign in to the Azure portal at https://portal.azure.com.
+
## Create a virtual network
-You'll need a virtual network to deploy both the Azure Route Server and the Quagga NVA into. Each deployment will have its own dedicated subnet.
+You'll need a virtual network to deploy both the Azure Route Server and the Quagga NVA. Azure Route Server must be deployed in a dedicated subnet called *RouteServerSubnet*.
-1. On the top left-hand side of the screen, select **Create a resource** and search for **Virtual Network**. Then select **Create**.
+1. On the Azure portal home page, search for *virtual network*, and select **Virtual networks** from the search results.
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-new-virtual-network.png" alt-text="Screenshot of create a new virtual network resource.":::
-1. On the *Basics* tab of *Create a virtual network* enter or select the following information then select **Next : IP Addresses >**:
+1. On the **Virtual networks** page, select **+ Create**.
+
+1. On the **Basics** tab of **Create virtual network**, enter or select the following information:
| Settings | Value |
- | -- | -- |
- | Subscription | Select the subscription for this deployment. |
- | Resource group | Select an existing or create a new resource group for this deployment. |
- | Name | Enter a name for the virtual network. This tutorial will use *myVirtualNetwork*.
- | Region | Select the region for which this virtual network will be deployed in. This tutorial will use *West US*.
+ | -- | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create new**. </br> In **Name** enter **myRouteServerRG**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter *myVirtualNetwork*. |
+ | Region | Select **East US**. |
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/virtual-network-basics-tab.png" alt-text="Screenshot of basics tab settings for the virtual network.":::
-1. On the **IP Addresses** tab, configure the *virtual network address space* to **10.1.0.0/16**. Then configure the following five subnets:
+1. Select **IP Addresses** tab or **Next : IP Addresses >** button.
+
+1. On the **IP Addresses** tab, configure **IPv4 address space** to **10.1.0.0/16**, then configure the following subnets:
| Subnet name | Subnet address range |
| -- | -- |
You'll need a virtual network to deploy both the Azure Route Server and the Quag
| subnet1 | 10.1.2.0/24 |
| subnet2 | 10.1.3.0/24 |
| subnet3 | 10.1.4.0/24 |
- | GatewaySubnet | 10.1.5.0/24 |
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/virtual-network-ip-addresses.png" alt-text="Screenshot of IP address settings for the virtual network.":::
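If you'd rather script this step than click through the portal, here's a minimal Azure PowerShell sketch that creates the same virtual network and subnets; it assumes the names, region, and prefixes used in this tutorial:

```azurepowershell-interactive
# Build the four subnets used in this tutorial, then create the VNet.
# RouteServerSubnet's 10.1.0.0/25 prefix matches the value used later on.
$subnets = @(
  New-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -AddressPrefix '10.1.0.0/25'
  New-AzVirtualNetworkSubnetConfig -Name 'subnet1' -AddressPrefix '10.1.2.0/24'
  New-AzVirtualNetworkSubnetConfig -Name 'subnet2' -AddressPrefix '10.1.3.0/24'
  New-AzVirtualNetworkSubnetConfig -Name 'subnet3' -AddressPrefix '10.1.4.0/24'
)
New-AzVirtualNetwork -Name 'myVirtualNetwork' -ResourceGroupName 'myRouteServerRG' `
  -Location 'eastus' -AddressPrefix '10.1.0.0/16' -Subnet $subnets
```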
You'll need a virtual network to deploy both the Azure Route Server and the Quag
The Route Server is used to communicate with your NVA and exchange virtual network routes using a BGP peering connection.
-1. Go to https://aka.ms/routeserver.
-
-1. Select **+ Create new route server**.
+1. On the Azure portal, search for *route server*, and select **Route Servers** from the search results.
- :::image type="content" source="./media/quickstart-configure-route-server-portal/route-server-landing-page.png" alt-text="Screenshot of Route Server landing page.":::
+1. On the **Route Servers** page, select **+ Create**.
-1. On the **Create a Route Server** page, enter, or select the following information:
+1. On the **Basics** tab of **Create a Route Server** page, enter or select the following information:
| Settings | Value |
| -- | -- |
- | Subscription | Select the same subscription the virtual was created in the previous section. |
- | Resource group | Select the existing resource group *myRouteServerRG*. |
- | Name | Enter the Route Server name *myRouteServer*. |
- | Region | Select the **West US** region. |
- | Virtual Network | Select the *myVirtualNetwork* virtual network. |
- | Subnet | Select the *RouteServerSubnet (10.1.0.0/25)* created previously. |
- | Public IP address | Create a new or selecting an existing Standard public IP address to use with the Route Server. This IP address ensures connectivity to the backend service that manages the Route Server configuration. |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription that you used for the virtual network. |
+ | Resource group | Select **myRouteServerRG**. |
+ | **Instance details** | |
+ | Name | Enter *myRouteServer*. |
+ | Region | Select **East US** region. |
+ | **Configure virtual networks** | |
+ | Virtual Network | Select **myVirtualNetwork**. |
+ | Subnet | Select **RouteServerSubnet (10.1.0.0/25)**. This subnet is a dedicated Route Server subnet. |
+ | **Public IP address** | |
+ | Public IP address | Select **Create new**, and then enter *myRouteServer-ip*. This Standard IP address ensures connectivity to the backend service that manages the Route Server configuration. |
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/route-server-basics-tab.png" alt-text="Screenshot of basics tab for Route Server creation.":::
The Route Server is used to communicate with your NVA and exchange virtual netwo
## Create Quagga network virtual appliance
-To configure the Quagga network virtual appliance, you will need to deploy a Linux virtual machine.
+To configure the Quagga network virtual appliance, you'll need to deploy a Linux virtual machine, and then configure it with this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh).
+
+### Create Quagga virtual machine
-1. From the Azure portal, select **+ Create a resource > Compute > Virtual machine**. Then select **Create**.
+1. On the Azure portal, search for *virtual machine*, and select **Virtual machines** from the search results.
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-virtual-machine.png" alt-text="Screenshot of creating new virtual machine page.":::
+1. Select **Create**, then select **Azure virtual machine**.
-1. On the *Basics* tab, enter or select the following information:
+1. On the **Basics** tab of **Create a virtual machine**, enter or select the following information:
| Settings | Value |
| -- | -- |
- | Subscription | Select the same subscription as the virtual network deployed previously. |
- | Resource group | Select the existing resource group *myRouteServerRG*. |
- | Virtual machine name | Enter the name **Quagga**. |
- | Region | Select the **West US** region. |
- | Image | Select **Ubuntu 18.04 LTS - Gen 1**. |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription that you used for the virtual network. |
+ | Resource group | Select **myRouteServerRG**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter *Quagga*. |
+ | Region | Select **(US) East US**. |
+ | Availability options | Select **No infrastructure required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Ubuntu 18.04 LTS - Gen 2**. |
| Size | Select **Standard_B2s - 2vcpus, 4GiB memory**. |
- | Authentication type | Select **Password** |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
| Username | Enter *azureuser*. Don't use *quagga* as the user name or else the setup script will fail in a later step. |
- | Password | Enter and confirm the password of your choosing. |
+ | Password | Enter a password of your choosing. |
+ | Confirm password | Reenter the password. |
+ | **Inbound port rules** | |
| Public inbound ports | Select **Allow selected ports**. |
| Select inbound ports | Select **SSH (22)**. |
To configure the Quagga network virtual appliance, you will need to deploy a Lin
| Settings | Value |
| -- | -- |
- | Virtual Network | Select **myVirtualNetwork**. |
+ | Virtual network | Select **myVirtualNetwork**. |
| Subnet | Select **subnet3 (10.1.4.0/24)**. |
| Public IP | Leave as default. |
- | NIC network security group | Leave as default. **Basic**. |
- | Public inbound ports | Leave as default. **Allow selected ports**. |
- | Select inbound ports | Leaves as default. **SSH (22)**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **SSH (22)**. |
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab.png" alt-text="Screenshot of networking tab for creating a new virtual machine." lightbox="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab-expanded.png":::

1. Select **Review + create** and then **Create** after validation passes. The deployment of the VM will take about 10 minutes.
-1. Once the VM has deployed, go to the networking settings of the Quagga VM and select the network interface.
+1. Once the VM has deployed, go to the **Networking** page of **Quagga** virtual machine and select the network interface.
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-network-settings.png" alt-text="Screenshot of networking page of the Quagga VM.":::
-1. Select **IP configuration** under *Settings* and then select **ipconfig1**.
+1. Select **IP configuration** under **Settings** and then select **ipconfig1**.
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-ip-configuration.png" alt-text="Screenshot of IP configuration page of the Quagga VM.":::
+ :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-ip-configuration.png" alt-text="Screenshot of IP configurations page of the Quagga VM.":::
-1. Change the assignment from *Dynamic* to **Static** and then change the IP address from *10.1.4.4* to **10.1.4.10**. This IP is used in this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) which will be run in a later step. If you want to use a different IP address ensure to update the IP in the script. Select **Save** to update the IP configuration of the VM.
+1. Under **Private IP address Settings**, change the **Assignment** from **Dynamic** to **Static**, and then change the **IP address** from **10.1.4.4** to **10.1.4.10**. This IP address is used in this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh), which will be run in a later step. If you want to use a different IP address, make sure you update the IP address in the script.
-1. Using [Putty](https://www.putty.org/) connect to the Quagga VM using the public IP address and credential used to create the VM.
+1. Take note of the public IP, and select **Save** to update the IP configurations of the VM.
-1. Once logged in, enter `sudo su` to switch to super user to avoid errors running the script. Copy this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) and paste it into the Putty session. The script will configure the virtual machine with Quagga along with other network settings. Update the script to suit your network environment before running it on the virtual machine. It will take a few minutes for the script to complete the setup.
+ :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/change-ip-configuration.png" alt-text="Screenshot of changing IP configurations the Quagga VM.":::
+
+### Configure Quagga virtual machine
+
+1. If you're on a Mac or Linux machine, open a Bash prompt. If you're on a Windows machine, open a PowerShell prompt.
+
+1. At your prompt, open an SSH connection to the Quagga VM. Replace the IP address with the one you took note of in the previous step.
+
+```console
+ssh azureuser@52.240.57.121
+```
+
+3. When prompted, enter the password you previously created for the Quagga VM.
+
+1. Once logged in, enter `sudo su` to switch to super user to avoid errors running the script. Copy this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) and paste it into the SSH session. The script will configure the virtual machine with Quagga along with other network settings. Update the script to suit your network environment before running it on the virtual machine. It will take a few minutes for the script to complete the setup.
## Configure Route Server peering

1. Go to the Route Server you created in the previous step.
-1. Select **Peers** under *Settings*. Then select **+ Add** to add a new peer.
+1. Select **Peers** under **Settings**. Then, select **+ Add** to add a new peer.
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/peers.png" alt-text="Screenshot of peers page for Route Server.":::
-1. On the *Add Peer* page, enter the following information, and then select **Add** to save the configuration:
+1. On the **Add Peer** page, enter the following information, and then select **Add** to save the configuration:
| Setting | Value |
| - | -- |
- | Name | Enter a name to identify this peer. **Quagga**. |
- | ASN | Enter the ASN number for the Quagga NVA. **65001** is the ASN defined in the script. |
- | IPv4 Address | Enter the private IP of the Quagga NVA virtual machine. |
+ | Name | Enter *Quagga*. This name is used to identify the peer. |
+ | ASN | Enter *65001*. This ASN is defined in the script for Quagga NVA. |
+ | IPv4 Address | Enter *10.1.4.10*. This IPv4 is the private IP of the Quagga NVA. |
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/add-peer.png" alt-text="Screenshot of add peer page.":::
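For reference, the same peering can also be added with Azure PowerShell. Here's a minimal sketch using the values from this tutorial:

```azurepowershell-interactive
# Add the Quagga NVA as a BGP peer of the Route Server.
# The ASN and IP address match the values entered in the portal above.
Add-AzRouteServerPeer -PeerName 'Quagga' -PeerAsn 65001 -PeerIp '10.1.4.10' `
  -RouteServerName 'myRouteServer' -ResourceGroupName 'myRouteServerRG'
```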
-1. The Peers page should look like this once you add a peer:
+1. Once you add the Quagga NVA as a peer, the **Peers** page should look like this:
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/peer-configured.png" alt-text="Screenshot of a configured peer.":::

## Check learned routes
-1. To check the routes learned by the Route Server use this command:
+
+1. To check the routes learned by the Route Server, use this command in Azure portal Cloud Shell:
```azurepowershell-interactive
$routes = @{
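# The remaining lines of this example are truncated in this excerpt. A likely
# completion (an assumption), reusing the peer name 'Quagga' configured earlier:
    RouteServerName = 'myRouteServer'
    ResourceGroupName = 'myRouteServerRG'
    PeerName = 'Quagga'
}
Get-AzRouteServerPeerLearnedRoute @routes
```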
To configure the Quagga network virtual appliance, you will need to deploy a Lin
:::image type="content" source="./media/tutorial-configure-route-server-with-quagga/routes-learned.png" alt-text="Screenshot of routes learned by Route Server.":::
-1. To check the routes learned by the Quagga NVA enter `vtysh` and then enter `show ip bgp`. Output should look like the following:
+1. To check the routes learned by the Quagga NVA, enter `vtysh` and then enter `show ip bgp` on the NVA. Output should look like the following:
```
root@Quagga:/home/azureuser# vtysh
To configure the Quagga network virtual appliance, you will need to deploy a Lin
## Clean up resources
-If you no longer need the Route Server and all associating resources, you can delete the **myRouteServerRG** resource group.
+When no longer needed, you can delete all resources created in this tutorial by following these steps:
+
+1. On the Azure portal menu, select **Resource groups**.
+
+2. Select the **myRouteServerRG** resource group.
+3. Select **Delete resource group**.
+
+4. Enter *myRouteServerRG* and select **Delete**.
## Next steps
+In this tutorial, you learned how to create and configure a Route Server with an NVA. To learn more about Route Server, see the frequently asked questions page:
+
Advance to the next article to learn how to troubleshoot Route Server.

> [!div class="nextstepaction"]
-> [Troubleshoot Route Server](troubleshoot-route-server.md)
+> [Route Server FAQ](route-server-faq.md)
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
Title: Microsoft Sentinel skill-up training
-description: This article walks you through a Microsoft Sentinel level 400 training to help you skill up on Microsoft Sentinel. The training includes 21 modules that contain relevant product documentation, blog posts and other resources. Make sure to check the most recent links for the documentation.
+description: This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 modules that present relevant product documentation, blog posts, and other resources.
Last updated 06/29/2022
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-This article walks you through a Microsoft Sentinel level 400 training to help you skill up on Microsoft Sentinel. The training includes 21 modules that contain relevant product documentation, blog posts and other resources. Make sure to check the most recent links for the documentation.
+This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 modules that present relevant product documentation, blog posts, and other resources.
-The modules listed below are split into five parts following the life cycle of a Security Operation Center (SOC):
+The modules listed here are split into five parts following the life cycle of a Security Operation Center (SOC):
[Part 1: Overview](#part-1-overview)

- [Module 0: Other learning and support options](#module-0-other-learning-and-support-options)
- [Module 1: Get started with Microsoft Sentinel](#module-1-get-started-with-microsoft-sentinel)
- [Module 2: How is Microsoft Sentinel used?](#module-2-how-is-microsoft-sentinel-used)
-[Part 2: Architecting & Deploying](#part-2-architecting--deploying)
+[Part 2: Architecting and deploying](#part-2-architecting-and-deploying)
- [Module 3: Workspace and tenant architecture](#module-3-workspace-and-tenant-architecture)
- [Module 4: Data collection](#module-4-data-collection)
-- [Module 5: Log Management](#module-5-log-management)
-- [Module 6: Enrichment: TI, Watchlists, and more](#module-6-enrichment-ti-watchlists-and-more)
+- [Module 5: Log management](#module-5-log-management)
+- [Module 6: Enrichment: Threat intelligence, watchlists, and more](#module-6-enrichment-threat-intelligence-watchlists-and-more)
- [Module 7: Log transformation](#module-7-log-transformation)
- [Module 8: Migration](#module-8-migration)
-- [Module 9: ASIM and Normalization](#module-9-advanced-siem-information-model-asim-and-normalization)
+- [Module 9: Advanced SIEM information model and normalization](#module-9-advanced-siem-information-model-and-normalization)
-[Part 3: Creating Content](#part-3-creating-content)
-- [Module 10: The Kusto Query Language (KQL)](#module-10-the-kusto-query-language-kql)
+[Part 3: Creating content](#part-3-creating-content)
+- [Module 10: Kusto Query Language](#module-10-kusto-query-language)
- [Module 11: Analytics](#module-11-analytics)
- [Module 12: Implementing SOAR](#module-12-implementing-soar)
- [Module 13: Workbooks, reporting, and visualization](#module-13-workbooks-reporting-and-visualization)
The modules listed below are split into five parts following the life cycle of a
- [Module 19: Monitoring Microsoft Sentinel's health](#module-19-monitoring-microsoft-sentinels-health)

[Part 5: Advanced](#part-5-advanced)
-- [Module 20: Extending and Integrating using Microsoft Sentinel APIs](#module-20-extending-and-integrating-using-microsoft-sentinel-apis)
-- [Module 21: Bring your own ML](#module-21-bring-your-own-ml)
+- [Module 20: Extending and integrating by using the Microsoft Sentinel APIs](#module-20-extending-and-integrating-by-using-the-microsoft-sentinel-apis)
+- [Module 21: Build-your-own machine learning](#module-21-build-your-own-machine-learning)
## Part 1: Overview

### Module 0: Other learning and support options
-This Skill-up training is based on the [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310) and is a level 400 training. If you don't want to go as deep, or have a specific issue, other resources might be more suitable:
+This skill-up training is a level-400 training that's based on the [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310). If you don't want to go as deep, or you have a specific issue to resolve, other resources might be more suitable:
-* While extensive, the Skill-up training has to follow a script, and can't expand on every topic. Read the referenced documentation for details on every article.
-* You can now certify with the new certification [SC-200: Microsoft Security Operations Analyst](/learn/certifications/exams/sc-200), which covers Microsoft Sentinel. You may also want to consider the [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/learn/certifications/exams/sc-900) or the [AZ-500: Microsoft Azure Security Technologies](/learn/certifications/exams/az-500), for a broader, higher level view of the Microsoft Security suite.
-* Are you already skilled-up on Microsoft Sentinel? Just keep track of [what's new](whats-new.md) or join the [Private Preview](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier glimpse.
-* Do you have a feature idea and do you want to share with us? Let us know on the [Microsoft Sentinel user voice page](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8).
-* Premier customer? You might want the on-site (or remote) four-day _Microsoft Sentinel Fundamentals Workshop_. Contact your Customer Success Account Manager for more details.
-* Do you have a specific issue? Ask (or answer others) on the [Microsoft Sentinel Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel). As a last resort, send an e-mail to <MicrosoftSentinel@microsoft.com>.
+* Although the skill-up training is extensive, it naturally has to follow a script and can't expand on every topic. See the referenced documentation for information about each article.
+* You can now become certified with the new certification [SC-200: Microsoft Security Operations Analyst](/learn/certifications/exams/sc-200), which covers Microsoft Sentinel. For a broader, higher-level view of the Microsoft Security suite, you might also want to consider [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/learn/certifications/exams/sc-900) or [AZ-500: Microsoft Azure Security Technologies](/learn/certifications/exams/az-500).
+* If you're already skilled up on Microsoft Sentinel, keep track of [what's new](whats-new.md) or join the [Private Preview](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier view into upcoming releases.
+* Do you have a feature idea to share with us? Let us know on the [Microsoft Sentinel user voice page](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8).
+* Are you a premier customer? You might want the on-site or remote, four-day _Microsoft Sentinel Fundamentals Workshop_. Contact your Customer Success Account Manager for more details.
+* Do you have a specific issue? Ask (or answer others) on the [Microsoft Sentinel Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel). Or you can email your question or issue to us at <MicrosoftSentinel@microsoft.com>.
### Module 1: Get started with Microsoft Sentinel
-Microsoft Sentinel is a **scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution**. Microsoft Sentinel delivers security analytics and threat intelligence across the enterprise. It provides a single solution for alert detection, threat visibility, proactive hunting, and threat response. [Read more.](overview.md)
+Microsoft Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Microsoft Sentinel delivers security analytics and threat intelligence across the enterprise. It provides a single solution for alert detection, threat visibility, proactive hunting, and threat response. For more information, see [What is Microsoft Sentinel?](overview.md).
+If you want to get an initial overview of Microsoft Sentinel's technical capabilities, the latest [Ignite presentation](https://www.youtube.com/watch?v=kGctnb4ddAE) is a good starting point. You might also find the [Quick Start Guide to Microsoft Sentinel](https://azure.microsoft.com/resources/quick-start-guide-to-azure-sentinel/) useful (site registration is required).
-If you want to get an initial overview of Microsoft Sentinel's technical capabilities, the [latest Ignite presentation](https://www.youtube.com/watch?v=kGctnb4ddAE) is a good starting point. You might also find the [Quick Start Guide to Microsoft Sentinel](https://azure.microsoft.com/resources/quick-start-guide-to-azure-sentinel/) useful (requires registration). A more detailed overview can be found in this webinar: [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmggMkcVweWOqoxuN9), [YouTube](https://youtu.be/7An7BB-CcQI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgjrN_zHpzbnfX_mX).
+You'll find a more detailed overview in this Microsoft Sentinel webinar: [YouTube](https://youtu.be/7An7BB-CcQI), [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmggMkcVweWOqoxuN9), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgjrN_zHpzbnfX_mX).
-Lastly, do you want to try it yourself? The Microsoft Sentinel All-In-One Accelerator ([blog](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-all-in-one-accelerator/ba-p/1807933), [YouTube](https://youtu.be/JB73TuX9DVs), [MP4](https://aka.ms/AzSentinel_04FEB2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhjw41XZvVSCSNIuX)) presents an easy way to get you started. To learn how to start yourself, review the [onboarding documentation](quickstart-onboard.md), or watch [Insight's Sentinel setup and configuration video](https://www.youtube.com/watch?v=Cyd16wVwxZc).
+Finally, do you want to try it yourself? The Microsoft Sentinel All-In-One Accelerator ([blog](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-all-in-one-accelerator/ba-p/1807933), [YouTube](https://youtu.be/JB73TuX9DVs), [MP4](https://aka.ms/AzSentinel_04FEB2021_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhjw41XZvVSCSNIuX)) offers an easy way to get started. To learn how to get started, review the [onboarding documentation](quickstart-onboard.md), or view [Insight's Microsoft Sentinel setup and configuration video](https://www.youtube.com/watch?v=Cyd16wVwxZc).
-#### Learn from users
+#### Learn from other users
-Thousands of organizations and service providers are using Microsoft Sentinel. As usual with security products, most of them don't go public about it. Still, there are some.
+Thousands of organizations and service providers are using Microsoft Sentinel. As is usual with security products, most organizations don't go public about it. Still, here are a few who have:
-* You can find [public customer use cases here](https://customers.microsoft.com/en-us/home)
-* [Insight](https://www.insightcdct.com/) released a use case about [an NBA team adapting Sentinel](https://www.insightcdct.com/Resources/Case-Studies/Case-Studies/NBA-Team-Adopts-Azure-Sentinel-for-a-Modern-Securi).
-* Stuart Gregg, Security Operations Manager @ ASOS, posted a much more detailed [blog post from Microsoft Sentinel's experience, focusing on hunting](https://medium.com/@stuart.gregg/proactive-phishing-with-azure-sentinel-part-1-b570fff3113).
+* Find [public customer use cases](https://customers.microsoft.com/home).
+* [Insight](https://www.insightcdct.com/) released a use case about [an NBA team adopts Microsoft Sentinel](https://www.insightcdct.com/Resources/Case-Studies/Case-Studies/NBA-Team-Adopts-Azure-Sentinel-for-a-Modern-Securi).
+* Stuart Gregg, Security Operations Manager at ASOS, posted a much more detailed [blog post from the Microsoft Sentinel experience, focusing on hunting](https://medium.com/@stuart.gregg/proactive-phishing-with-azure-sentinel-part-1-b570fff3113).
-#### Learn from Analysts
-* [Microsoft Sentinel is a Leader placement in Forrester Wave.](https://www.microsoft.com/security/blog/2020/12/01/azure-sentinel-achieves-a-leader-placement-in-forrester-wave-with-top-ranking-in-strategy/)
-* [Microsoft named a Visionary in the 2021 Gartner Magic Quadrant for SIEM for Microsoft Sentinel.](https://www.microsoft.com/security/blog/2021/07/08/microsoft-named-a-visionary-in-the-2021-gartner-magic-quadrant-for-siem-for-azure-sentinel/)
+#### Learn from analysts
+* [Azure Sentinel achieves a Leader placement in Forrester Wave, with top ranking in Strategy](https://www.microsoft.com/security/blog/2020/12/01/azure-sentinel-achieves-a-leader-placement-in-forrester-wave-with-top-ranking-in-strategy/)
+* [Microsoft named a Visionary in the 2021 Gartner Magic Quadrant for SIEM for Microsoft Sentinel](https://www.microsoft.com/security/blog/2021/07/08/microsoft-named-a-visionary-in-the-2021-gartner-magic-quadrant-for-siem-for-azure-sentinel/)
### Module 2: How is Microsoft Sentinel used?
-Many users use Microsoft Sentinel as their primary SIEM. Most of the modules in this course cover this use case. In this module, we present a few extra ways to use Microsoft Sentinel.
+Many organizations use Microsoft Sentinel as their primary SIEM. Most of the modules in this course cover this use case. In this module, we present a few extra ways to use Microsoft Sentinel.
#### As part of the Microsoft Security stack
-Use Microsoft Sentinel, Microsoft Defender for Cloud, Microsoft 365 Defender in tandem to protect your Microsoft workloads, including Windows, Azure, and Office:
+Use Microsoft Sentinel, Microsoft Defender for Cloud, and Microsoft 365 Defender together to protect your Microsoft workloads, including Windows, Azure, and Office:
* Read more about [our comprehensive SIEM+XDR solution combining Microsoft Sentinel and Microsoft 365 Defender](https://techcommunity.microsoft.com/t5/azure-sentinel/whats-new-azure-sentinel-and-microsoft-365-defender-incident/ba-p/2191090).
-* Read [The Azure Security compass](https://aka.ms/azuresecuritycompass) to understand Microsoft's blueprint for your security operations.
-* Read and watch how such a setup helps detect and respond to a WebShell attack: [Blog](https://techcommunity.microsoft.com/t5/azure-sentinel/analysing-web-shell-attacks-with-azure-defender-data-in-azure/ba-p/1724130), [Video demo](https://techcommunity.microsoft.com/t5/video-hub/webshell-attack-deep-dive/m-p/1698964).
-* Watch the webinar: [Better Together | OT and IoT Attack Detection, Investigation and Response](https://youtu.be/S8DlZmzYO2s).
+* Read [The Azure Security compass](https://aka.ms/azuresecuritycompass) (now Microsoft Security Best Practices) to understand the Microsoft blueprint for your security operations.
+* Read and watch how such a setup helps detect and respond to a WebShell attack: [blog](https://techcommunity.microsoft.com/t5/azure-sentinel/analysing-web-shell-attacks-with-azure-defender-data-in-azure/ba-p/1724130) or [video demo](https://techcommunity.microsoft.com/t5/video-hub/webshell-attack-deep-dive/m-p/1698964).
+* View the Better Together webinar ["OT and IOT attack detection, investigation, and response."](https://youtu.be/S8DlZmzYO2s)
#### To monitor your multi-cloud workloads
The cloud is (still) new and often not monitored as extensively as on-premises workloads.
#### Side by side with your existing SIEM
-Either for a transition period or a longer term, if you're using Microsoft Sentinel for your cloud workloads, you may be using Microsoft Sentinel alongside your existing SIEM. You might also be using both with a ticketing system such as Service Now.
+If you're using Microsoft Sentinel for your cloud workloads, you might run it alongside your existing SIEM, whether for a transition period or for the longer term. You might also use both with a ticketing system such as ServiceNow.
-For more information on migrating from another SIEM to Microsoft Sentinel, watch the migration webinar: [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), [YouTube](https://youtu.be/njXK1h9lfR4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5).
+For more information about migrating from another SIEM to Microsoft Sentinel, view the migration webinar: [YouTube](https://youtu.be/njXK1h9lfR4), [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5).
-There are three common scenarios for side by side deployment:
+There are three common scenarios for side-by-side deployment:
-* If you have a ticketing system in your SOC, a best practice is to send alerts or incidents from both SIEM systems to a ticketing system such as Service Now. An example is using [Microsoft Sentinel Incident Bi-directional sync with ServiceNow](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-incident-bi-directional-sync-with-servicenow/ba-p/1667771) or [sending alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
-* At least initially, many users send alerts from Microsoft Sentinel to your on-premises SIEM. Read on how to do it in [Sending alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
-* Over time, as Microsoft Sentinel covers more workloads, it's typical to reverse that and send alerts from your on-premises SIEM to Microsoft Sentinel. To do that:
- * With Splunk, read [Send data and notable events from Splunk to Microsoft Sentinel using the Microsoft Sentinel Splunk ....](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237)
- * With QRadar read [Sending QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043)
- * For ArcSight, use [CEF Forwarding](https://community.microfocus.com/t5/Logger-Forwarding-Connectors/ArcSight-Forwarding-Connector-Configuration-Guide/ta-p/1583918).
+* If you have a ticketing system in your SOC, a best practice is to send alerts or incidents from both SIEM systems to a ticketing system such as ServiceNow. Examples include [using Microsoft Sentinel incident bi-directional sync with ServiceNow](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-incident-bi-directional-sync-with-servicenow/ba-p/1667771) or [sending alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
-You can also send the alerts from Microsoft Sentinel to your third-party SIEM or ticketing system using the [Graph Security API](/graph/security-integration), which is simpler, but wouldn't enable sending other data.
+* At least initially, many users send alerts from Microsoft Sentinel to their on-premises SIEM. To learn how, see [Send alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
+
+* Over time, as Microsoft Sentinel covers more workloads, you would ordinarily reverse direction and send alerts from your on-premises SIEM to Microsoft Sentinel. To do so:
+ * For Splunk, see [Send data and notable events from Splunk to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237).
+ * For QRadar, see [Send QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043).
+ * For ArcSight, see [Common Event Format (CEF) forwarding](https://community.microfocus.com/t5/Logger-Forwarding-Connectors/ArcSight-Forwarding-Connector-Configuration-Guide/ta-p/1583918).
+
+You can also send the alerts from Microsoft Sentinel to your third-party SIEM or ticketing system by using the [Graph Security API](/graph/security-integration). This approach is simpler, but it doesn't enable sending other data.
#### For MSSPs
-Since it eliminates the setup cost and is location agnostics, Microsoft Sentinel is a popular choice for providing SIEM-as-a-service. You can find a [list of MISA (Microsoft Intelligent Security Association) member managed security service providers (MSSPs) using Microsoft Sentinel](https://www.microsoft.com/security/blog/2020/07/14/microsoft-intelligent-security-association-managed-security-service-providers/). Many other MSSPs, especially regional and smaller ones, use Microsoft Sentinel but aren't MISA members.
+Because it eliminates the setup cost and is location agnostic, Microsoft Sentinel is a popular choice for providing SIEM as a service. You'll find a [list of managed security service providers (MSSPs) in the Microsoft Intelligent Security Association (MISA) that use Microsoft Sentinel](https://www.microsoft.com/security/blog/2020/07/14/microsoft-intelligent-security-association-managed-security-service-providers/). Many other MSSPs, especially regional and smaller ones, use Microsoft Sentinel but aren't MISA members.
+
+To start your journey as an MSSP, read the [Microsoft Sentinel Technical Playbooks for MSSPs](https://aka.ms/azsentinelmssp). More information about MSSP support is included in the next module, which covers cloud architecture and multi-tenant support.
-To start your journey as an MSSP, you should read the [Microsoft Sentinel Technical Playbooks for MSSPs](https://aka.ms/azsentinelmssp). More information about MSSP support is included in the next module, cloud architecture and multi-tenant support.
+## Part 2: Architecting and deploying
-## Part 2: Architecting & Deploying
+Although "Part 1: Overview" offers ways to start using Microsoft Sentinel in a matter of minutes, before you start a production deployment, it's important to create a plan.
-While the previous section offers options to start using Microsoft Sentinel in a matter of minutes, before you start a production deployment, you need to plan. This section walks you through the areas that you need to consider when architecting your solution, and provides guidelines on how to implement your design:
+This section walks you through the areas to consider when you're architecting your solution, and it provides guidelines on how to implement your design:
* Workspace and tenant architecture
* Data collection
* Log management
-* Threat Intelligence acquisition
+* Threat intelligence acquisition
### Module 3: Workspace and tenant architecture
-A Microsoft Sentinel instance is called a workspace. The workspace is the same as a Log Analytics workspace and supports any Log Analytics capability. You can think of Sentinel as a solution that adds SIEM features on top of a Log Analytics workspace.
+A Microsoft Sentinel instance is called a *workspace*. The workspace is the same as a Log Analytics workspace, and it supports any Log Analytics capability. You can think of Microsoft Sentinel as a solution that adds SIEM features on top of a Log Analytics workspace.
-Multiple workspaces are often necessary and can act together as a single Microsoft Sentinel system. A special use case is providing service using Microsoft Sentinel, for example, by an **MSSP** (Managed Security Service Provider) or by a **Global SOC** in a large organization.
+Multiple workspaces are often necessary and can act together as a single Microsoft Sentinel system. A special use case is providing a service by using Microsoft Sentinel, for example, by a managed security service provider (MSSP) or by a global SOC in a large organization.
-To learn more about using multiple workspaces as one Microsoft Sentinel system, read [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md) or watch the Webinar: [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgkqH7MASAKIg8ql8), [YouTube](https://youtu.be/hwahlwgJPnE), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgkkYuxOITkGSI7x8).
+To learn more about using multiple workspaces as one Microsoft Sentinel system, see [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md) or view the webinar: [YouTube](https://youtu.be/hwahlwgJPnE), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgkqH7MASAKIg8ql8), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgkkYuxOITkGSI7x8).
-There are a few specific areas that require your consideration when using multiple workspaces:
-* An important driver for using multiple workspaces is **data residency**. Read more about [Microsoft Sentinel data residency](quickstart-onboard.md).
-* To deploy Microsoft Sentinel and manage content efficiently across multiple workspaces; you would like to manage Sentinel as code using **CI/CD technology**. A recommended best practice for Microsoft Sentinel is to enable continuous deployment:
- * Read [Enable Continuous Deployment Natively with Microsoft Sentinel Repositories!](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enable-continuous-deployment-natively-with-microsoft-sentinel/ba-p/2929413)
-* When managing multiple workspaces as an MSSP, you may want to [protect the MSSP's Intellectual Property in Microsoft Sentinel](mssp-protect-intellectual-property.md).
+When you're using multiple workspaces, consider the following:
+* An important driver for using multiple workspaces is *data residency*. For more information, see [Microsoft Sentinel data residency](quickstart-onboard.md).
+* To deploy Microsoft Sentinel and manage content efficiently across multiple workspaces, you could manage Microsoft Sentinel as code by using continuous integration/continuous delivery (CI/CD) technology. A recommended best practice for Microsoft Sentinel is to enable continuous deployment. For more information, see [Enable continuous deployment natively with Microsoft Sentinel repositories](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enable-continuous-deployment-natively-with-microsoft-sentinel/ba-p/2929413).
+* When you're managing multiple workspaces as an MSSP, you might want to [protect MSSP intellectual property in Microsoft Sentinel](mssp-protect-intellectual-property.md).
-The [Microsoft Sentinel Technical Playbook for MSSPs](https://aka.ms/azsentinelmssp) provides detailed guidelines for many of those topics, and is useful also for large organizations, not just to MSSPs.
+The [Microsoft Sentinel Technical Playbook for MSSPs](https://aka.ms/azsentinelmssp) provides detailed guidelines for many of those topics, and it's useful for large organizations, not just for MSSPs.
-### Module 4: Data Collection
+### Module 4: Data collection
-The foundation of a SIEM is collecting telemetry: events, alerts, and contextual enrichment information such as Threat Intelligence, vulnerability data, and asset information. You can find a list of sources you can connect here:
-* [Microsoft Sentinel data connectors](connect-data-sources.md)
-* [Find your Microsoft Sentinel data connector](data-connectors-reference.md) for seeing all the supported and out-of-the-box data connectors. You'll find links to generic deployment procedures, and extra steps required for specific connectors.
-* Data Collection Scenarios: Learn about collection methods such as [Logstash/CEF/WEF](connect-logstash.md). Other common scenarios are permissions restriction to tables, log filtering, collecting logs from AWS or GCP, O365 raw logs etc. All can be found in this webinar: [YouTube](https://www.youtube.com/watch?v=FStpHl0NRM8), [MP4](https://aka.ms/AS_LogCollectionScenarios_V3.0_18MAR2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhx-_hfIf0Ng3aM_G).
+The foundation of a SIEM is collecting telemetry: events, alerts, and contextual enrichment information, such as threat intelligence, vulnerability data, and asset information. Here is a list of sources to refer to:
+* Read [Microsoft Sentinel data connectors](connect-data-sources.md).
+* Go to [Find your Microsoft Sentinel data connector](data-connectors-reference.md) to see all the supported and out-of-the-box data connectors. You'll find links to generic deployment procedures, and extra steps required for specific connectors.
+* Data collection scenarios: Learn about collection methods such as [Logstash/CEF/WEF](connect-logstash.md). Other common scenarios are permissions restriction to tables, log filtering, collecting logs from Amazon Web Services (AWS) or Google Cloud Platform (GCP), Microsoft 365 raw logs, and so on. All can be found in the "Data Collection Scenarios" webinar: [YouTube](https://www.youtube.com/watch?v=FStpHl0NRM8), [MP4](https://aka.ms/AS_LogCollectionScenarios_V3.0_18MAR2021_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhx-_hfIf0Ng3aM_G).
-The first piece of information you'll see for each connector is its **data ingestion method**. The method that appears there will be a link to one of the following generic deployment procedures, which contain most of the information you'll need to connect your data sources to Microsoft Sentinel:
+The first piece of information you'll see for each connector is its *data ingestion method*. The method that appears there is a link to one of the following generic deployment procedures, which contain most of the information you'll need to connect your data sources to Microsoft Sentinel:
-|Data ingestion method | Linked article with instructions |
+|Data ingestion method | Associated article |
| -- | -- |
| Azure service-to-service integration | [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md) |
| Common Event Format (CEF) over Syslog | [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md) |
| Microsoft Sentinel Data Collector API | [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md) |
| Azure Functions and the REST API | [Use Azure Functions to connect Microsoft Sentinel to your data source](connect-azure-functions-template.md) |
-| Syslog | [Collect data from Linux-based sources using Syslog](connect-syslog.md) |
-| Custom logs | [ Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md) |
+| Syslog | [Collect data from Linux-based sources by using Syslog](connect-syslog.md) |
+| Custom logs | [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md) |
-If your source isn't available, you can [create a custom connector](create-custom-connector.md). Custom connectors use the ingestion API and therefore are similar to direct sources. Custom connectors are most often implemented using Logic Apps, offering a codeless option, or Azure Functions.
+If your source isn't available, you can [create a custom connector](create-custom-connector.md). Custom connectors use the ingestion API and therefore are similar to direct sources. You most often implement custom connectors by using Azure Logic Apps, which offers a codeless option, or Azure Functions.
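
However a custom connector is built, the data it ingests is queried like any other table. A minimal sketch, where `MyVendor_CL` is a hypothetical custom log table created by such a connector (the `_CL` suffix marks custom tables in Log Analytics):

```kusto
// Query a hypothetical custom table exactly like a built-in one.
MyVendor_CL
| where TimeGenerated > ago(1h)       // TimeGenerated is present on every table
| project TimeGenerated, RawData      // RawData is an assumption; columns vary by connector
```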
### Module 5: Log management
-While 'how many workspaces and which ones to use' is the first architecture question to ask when configuring Sentinel, there are other log management architectural decisions to consider:
-* Where and how long to retain data
-* How to best manage access to data and secure it
+The first architecture decision to consider when you're configuring Microsoft Sentinel is *how many workspaces and which ones to use*. Other key log management architectural decisions to consider include:
+* Where and how long to retain data.
+* How to best manage access to data and secure it.
-#### Ingest, Archive, Search, and Restore Data within Microsoft Sentinel
+#### Ingest, archive, search, and restore data within Microsoft Sentinel
-Watch the webinar: Manage Your Log Lifecycle with New Methods for Ingestion, Archival, Search, and Restoration, [here](https://www.youtube.com/watch?v=LgGpSJxUGoc&ab_channel=MicrosoftSecurityCommunity).
+To get started, view the ["Manage your log lifecycle with new methods for ingestion, archival, search, and restoration"](https://www.youtube.com/watch?v=LgGpSJxUGoc&ab_channel=MicrosoftSecurityCommunity) webinar.
This suite of features contains:
-* **Basic ingestion tier**: new pricing tier for Azure Log Analytics that allows for logs to be ingested at a lower cost. This data is only retained in the workspace for eight days total.
-* **Archive tier**: Azure Log Analytics has expanded its retention capability from two years to seven years. With the new tier, it will allow data to be retained up to seven years in a low-cost archived state.
-* **Search jobs**: search tasks that run limited KQL in order to find and return all relevant logs to what is searched. These jobs search data across the analytics tier, basic tier. and archived data.
-* **Data restoration**: new feature that allows users to pick a data table and a time range in order to restore data to the workspace via restore table.
+* **Basic ingestion tier**: A new pricing tier for Azure Monitor Logs that lets you ingest logs at a lower cost. This data is retained in the workspace for only eight days.
+* **Archive tier**: Azure Monitor Logs has expanded its retention capability from two years to seven years. With this new tier, you can retain data for up to seven years in a low-cost archived state.
+* **Search jobs**: Search tasks that run limited KQL to find and return all relevant logs. These jobs search data across the analytics tier, the basic tier, and archived data. (A sketch of such a query follows this list.)
+* **Data restoration**: A new feature that lets you pick a data table and a time range so that you can restore data to the workspace via a restore table.
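
As referenced in the search jobs item above, a search job runs a deliberately simple query asynchronously across all tiers. A minimal sketch (search jobs support only a limited set of KQL operators; the IP address is a placeholder):

```kusto
// A plain table scan with a filter, run asynchronously across
// the analytics tier, the basic tier, and archived data.
SecurityEvent
| where IpAddress == "203.0.113.77"   // example indicator to hunt for
```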
+
+For more information about these new features, see [Ingest, archive, search, and restore data in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/ingest-archive-search-and-restore-data-in-microsoft-sentinel/ba-p/3195126).
+
+#### Alternative retention options outside the Microsoft Sentinel platform
-Learn more about these new features in [this article](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/ingest-archive-search-and-restore-data-in-microsoft-sentinel/ba-p/3195126).
+If you want to _retain data_ for more than two years or _reduce the retention cost_, consider using Azure Data Explorer for long-term retention of Microsoft Sentinel logs. See the [webinar slides](https://onedrive.live.com/?authkey=%21AGe3Zue4W0xYo4s&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21963&parId=66C31D2DBF8E0F71%21954&o=OneUp), [webinar recording](https://www.youtube.com/watch?v=UO8zeTxgeVw&ab_channel=MicrosoftSecurityCommunity), or [blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-azure-data-explorer-for-long-term-retention-of-microsoft/ba-p/1883947).
-#### Alternative retention options outside of the Microsoft Sentinel platform
+Want more in-depth information? View the ["Improving the breadth and coverage of threat hunting with ADX support, more entity types, and updated MITRE integration"](https://www.youtube.com/watch?v=5coYjlw2Qqs&ab_channel=MicrosoftSecurityCommunity) webinar.
-If you want to retain data for _more than two years_, or _reduce the retention cost_, you can consider using Azure Data Explorer for long-term retention of Microsoft Sentinel logs: [Webinar Slides](https://onedrive.live.com/?authkey=%21AGe3Zue4W0xYo4s&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21963&parId=66C31D2DBF8E0F71%21954&o=OneUp), [Webinar Recording](https://www.youtube.com/watch?v=UO8zeTxgeVw&ab_channel=MicrosoftSecurityCommunity), [Blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-azure-data-explorer-for-long-term-retention-of-microsoft/ba-p/1883947).
+If you prefer another long-term retention solution, see [Export from Microsoft Sentinel / Log Analytics workspace to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export) or [Move logs to long-term storage by using Azure Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The advantage of using Logic Apps is that it can export historical data.
-Need more depth? Watch the _Improving the Breadth and Coverage of Threat Hunting with ADX Support, More Entity Types, and Updated MITRE Integration_ webinar [here](https://www.youtube.com/watch?v=5coYjlw2Qqs&ab_channel=MicrosoftSecurityCommunity).
+Finally, you can set fine-grained retention periods by using [table-level retention settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). For more information, see [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md).
-If you prefer another long-term retention solution, [export from Microsoft Sentinel / Log Analytics to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export) or [move Logs to Long-Term Storage using Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The latter advantage is that it can export historical data.
-Lastly, you can set fine-grained retention periods using [table-level retention Settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). More details [here](../azure-monitor/logs/data-retention-archive.md).
+#### Log security
-#### Log Security
+* Use [resource role-based access control (RBAC)](https://techcommunity.microsoft.com/t5/azure-sentinel/controlling-access-to-azure-sentinel-data-resource-rbac/ba-p/1301463) or [table-level RBAC](../azure-monitor/logs/manage-access.md) to enable multiple teams to use a single workspace.
-* Use [resource RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/controlling-access-to-azure-sentinel-data-resource-rbac/ba-p/1301463) or [Table Level RBAC](../azure-monitor/logs/manage-access.md) to enable multiple teams to use a single workspace.
* If needed, [delete customer content from your workspaces](../azure-monitor/logs/personal-data-mgmt.md).
-* Learn how to [audit workspace queries and Microsoft Sentinel use, using alerts workbooks and queries](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/auditing-microsoft-sentinel-activities/ba-p/1718328).
-* Use [private links](../azure-monitor/logs/private-link-security.md) to ensure logs never leave your private network.
+
+* Learn how to [audit workspace queries and Microsoft Sentinel use by using alerts, workbooks, and queries](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/auditing-microsoft-sentinel-activities/ba-p/1718328). A query-level sketch follows this list.
+
+* Use [private links](../azure-monitor/logs/private-link-security.md) to ensure that logs never leave your private network.
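
As a query-level sketch of the auditing item above, the following assumes the `LAQueryLogs` audit table has been enabled through the workspace's diagnostic settings:

```kusto
// Who queried this workspace in the last day, and how often?
LAQueryLogs
| where TimeGenerated > ago(1d)
| summarize QueryCount = count() by AADEmail, RequestTarget
| order by QueryCount desc
```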
#### Dedicated cluster
-Use a [dedicated workspace cluster](../azure-monitor/logs/logs-dedicated-clusters.md) if your projected data ingestion is around or more than 500 GB per day. A dedicated cluster enables you to secure resources for your Microsoft Sentinel data, which enables better query performance for large data sets.
+Use a [dedicated workspace cluster](../azure-monitor/logs/logs-dedicated-clusters.md) if your projected data ingestion is around 500 GB per day or more. With a dedicated cluster, you can secure resources for your Microsoft Sentinel data, which enables better query performance for large data sets.
+
+### Module 6: Enrichment: Threat intelligence, watchlists, and more
+One of the important functions of a SIEM is to apply contextual information to the event stream, which enables detection, alert prioritization, and incident investigation. Contextual information includes, for example, threat intelligence, IP intelligence, host and user information, and watchlists.
-### Module 6: Enrichment: TI, Watchlists, and more
+Microsoft Sentinel provides comprehensive tools to import, manage, and use threat intelligence. For other types of contextual information, Microsoft Sentinel provides watchlists and other alternative solutions.
-One of the important functions of a SIEM is to apply contextual information to the event steam, enabling detection, alert prioritization, and incident investigation. Contextual information includes, for example, threat intelligence, IP intelligence, host and user information, and watchlists
+#### Threat intelligence
-Microsoft Sentinel provides comprehensive tools to import, manage, and use threat intelligence. For other types of contextual information, Microsoft Sentinel provides Watchlists, and other alternative solutions.
+Threat intelligence is an important building block of a SIEM. View the ["Explore the Power of Threat Intelligence in Microsoft Sentinel"](https://www.youtube.com/watch?v=i29Uzg6cLKc&ab_channel=MicrosoftSecurityCommunity) webinar.
-#### Threat Intelligence
+In Microsoft Sentinel, you can integrate threat intelligence by using the built-in connectors from TAXII (Trusted Automated eXchange of Indicator Information) servers or through the Microsoft Graph Security API. For more information, see [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md). For details about importing threat intelligence, see the [Module 4: Data collection](#module-4-data-collection) section.
-Threat Intelligence is an important building block of a SIEM. Watch the Explore the Power of Threat Intelligence in Microsoft Sentinel webinar [here](https://www.youtube.com/watch?v=i29Uzg6cLKc&ab_channel=MicrosoftSecurityCommunity).
+After it's imported, [threat intelligence](understand-threat-intelligence.md) is used extensively throughout Microsoft Sentinel. The following features focus on using threat intelligence:
-In Microsoft Sentinel, you can integrate Threat Intelligence (TI) using the built-in connectors from TAXII servers or through the Microsoft Graph Security API. Read more on how to in the [documentation](threat-intelligence-integration.md). For more information about importing Threat Intelligence, see the data collection modules.
+* View and manage the imported threat intelligence in **Logs** in the new **Threat Intelligence** area of Microsoft Sentinel.
-Once imported, [Threat Intelligence](understand-threat-intelligence.md) is used extensively throughout Microsoft Sentinel. The following features focus on using Threat Intelligence:
+* Use the [built-in threat intelligence analytics rule templates](understand-threat-intelligence.md#detect-threats-with-threat-indicator-based-analytics) to generate security alerts and incidents by using your imported threat intelligence, as sketched after this list.
-* View and manage the imported threat intelligence in **Logs** in the new Threat Intelligence area of Microsoft Sentinel.
-* Use the [built-in TI Analytics rule templates](understand-threat-intelligence.md#detect-threats-with-threat-indicator-based-analytics) to generate security alerts and incidents using your imported threat intelligence.
-* [Visualize key information about your threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators) in Microsoft Sentinel with the Threat Intelligence workbook.
+* [Visualize key information about your threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators) in Microsoft Sentinel by using the threat intelligence workbook.
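
Under the hood, the indicator-based detections mentioned above boil down to matching your event tables against the imported indicators. A minimal hand-rolled sketch, assuming indicators have been imported into the `ThreatIntelligenceIndicator` table and CEF logs flow into `CommonSecurityLog`:

```kusto
// Take the latest active version of each imported network indicator...
let indicators = ThreatIntelligenceIndicator
    | where TimeGenerated > ago(14d)
    | where isnotempty(NetworkIP)
    | summarize arg_max(TimeGenerated, *) by IndicatorId
    | where Active == true
    | project NetworkIP;
// ...and match it against firewall traffic from the last day.
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DestinationIP in (indicators)
```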
-Watch the **Automate Your Microsoft Sentinel Triage Efforts with RiskIQ Threat
-Intelligence** webinar: [YouTube](https://youtu.be/8vTVKitim5c), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkngW7psV4janJrVE?e=UkmgWk).
+View the "Automate Your Microsoft Sentinel Triage Efforts with RiskIQ Threat Intelligence" webinar: [YouTube](https://youtu.be/8vTVKitim5c) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkngW7psV4janJrVE?e=UkmgWk).
-Short on time? watch the [Ignite session](https://www.youtube.com/watch?v=RLt05JaOnHc) (28 Minutes)
+Short on time? View the [Ignite session](https://www.youtube.com/watch?v=RLt05JaOnHc) (28 minutes).
-Go in-depth? Watch the Webinar: [YouTube](https://youtu.be/zfoVe4iarto), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgi8zazMLahRyycPf), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgi0pABN930p56id_).
+Want more in-depth information? View the "Deep dive on threat intelligence" webinar: [YouTube](https://youtu.be/zfoVe4iarto), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgi8zazMLahRyycPf), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgi0pABN930p56id_).
#### Watchlists and other lookup mechanisms
-To import and manage any type of contextual information, Microsoft Sentinel provides Watchlists. Watchlists enable you to upload data tables in CSV format and use them in your KQL queries. Read more about Watchlists in the [documentation](watchlists.md) or watch the use _Watchlists to Manage Alerts, Reduce Alert Fatigue and improve SOC efficiency_ webinar: [YouTube](https://youtu.be/148mr8anqtI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
+To import and manage any type of contextual information, Microsoft Sentinel provides watchlists. By using watchlists, you can upload data tables in CSV format and use them in your KQL queries. For more information, see [Use watchlists in Microsoft Sentinel](watchlists.md), or view the "Use watchlists to manage alerts, reduce alert fatigue, and improve SOC efficiency" webinar: [YouTube](https://youtu.be/148mr8anqtI) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
Use watchlists to help you with following scenarios:
-* **Investigate threats and respond to incidents quickly** with the rapid import of IP addresses, file hashes, and other data from CSV files. After you import the data, use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
+* **Investigate threats and respond to incidents quickly**: Rapidly import IP addresses, file hashes, and other data from CSV files. After you import the data, use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
-* **Import business data as a watchlist**. For example, import user lists with privileged system access, or terminated employees. Then, use the watchlist to create allowlists and blocklists to detect or prevent those users from logging in to the network.
+* **Import business data as a watchlist**: For example, import lists of users with privileged system access, or terminated employees. Then, use the watchlist to create allow lists and block lists to detect or prevent those users from logging in to the network.
-* **Reduce alert fatigue**. Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
+* **Reduce alert fatigue**: Create allow lists to suppress alerts from a group of users, such as users from authorized IP addresses who perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
-* **Enrich event data**. Use watchlists to enrich your event data with name-value combinations derived from external data sources.
+* **Enrich event data**: Use watchlists to enrich your event data with name-value combinations that are derived from external data sources.
-In addition to Watchlists, you can also use the KQL externaldata operator, custom logs, and KQL functions to manage and query context information. Each one of the four methods has its pros and cons, and you can read more about the comparison between those options in the blog post ["Implementing Lookups in Microsoft Sentinel"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/implementing-lookups-in-azure-sentinel/ba-p/1091306). While each method is different, using the resulting information in your queries is similar enabling easy switching between them.
+In addition to watchlists, you can use the KQL `externaldata` operator, custom logs, and KQL functions to manage and query context information. Each of the four methods has its pros and cons, and you can read more about the comparisons between them in the blog post ["Implementing lookups in Microsoft Sentinel"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/implementing-lookups-in-azure-sentinel/ba-p/1091306). Although each method is different, using the resulting information in your queries is similar and enables easy switching between them.
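
Here's a minimal sketch that puts two of these lookup methods side by side; `HighValueAssets` is a hypothetical watchlist alias with a `HostName` column, and the CSV URL is a placeholder:

```kusto
// Method 1: a Microsoft Sentinel watchlist, read with _GetWatchlist().
let highValueHosts = _GetWatchlist('HighValueAssets') | project HostName;
// Method 2: an external CSV, read ad hoc with the externaldata operator.
let blockedIPs = externaldata (IPAddress: string)
    [@"https://example.com/blocked-ips.csv"] with (format="csv");
SecurityEvent
| where Computer in (highValueHosts)
| where IpAddress in (blockedIPs)
```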
-Read ["Utilize Watchlists to Drive Efficiency During Microsoft Sentinel Investigations"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/utilize-watchlists-to-drive-efficiency-during-microsoft-sentinel/ba-p/2090711) for ideas on using Watchlist outside of analytic rules.
+For ideas about using watchlists outside analytic rules, see [Utilize watchlists to drive efficiency during Microsoft Sentinel investigations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/utilize-watchlists-to-drive-efficiency-during-microsoft-sentinel/ba-p/2090711).
-Watch the **Use Watchlists to Manage Alerts, Reduce Alert Fatigue and improve
-SOC efficiency** webinar. [YouTube](https://youtu.be/148mr8anqtI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
+View the "Use watchlists to manage alerts, reduce alert fatigue, and improve SOC efficiency" webinar: [YouTube](https://youtu.be/148mr8anqtI) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
### Module 7: Log transformation
-Microsoft Sentinel supports two new features for data ingestion and transformation. These features, provided by Log Analytics, act on your data even before it's stored in your workspace.
+Microsoft Sentinel supports two new features for data ingestion and transformation. These features, provided by Log Analytics, act on your data even before it's stored in your workspace. The features are:
-* The first of these features is the [**Logs ingestion API.**](../azure-monitor/logs/logs-ingestion-api-overview.md) It allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. The actual ingestion of these logs can be done by direct API calls. You can use Log Analytics [data collection rules (DCRs)](../azure-monitor/essentials/data-collection-rule-overview.md) to define and configure these workflows.
+* [**Logs ingestion API**](../azure-monitor/logs/logs-ingestion-api-overview.md): Use it to send custom-format logs from any data source to your Log Analytics workspace and then store those logs either in certain specific standard tables, or in custom-formatted tables that you create. You can perform the actual ingestion of these logs by using direct API calls. You can use Azure Monitor [data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md) to define and configure these workflows.
-* The second feature is [**workspace data transformations for standard logs**](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr). It uses [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information. Data transformation can be configured at ingestion time for the following types of built-in data connectors:
- * AMA-based data connectors (based on the new Azure Monitor Agent)
- * MMA-based data connectors (based on the legacy Log Analytics Agent)
- * Data connectors that use Diagnostic settings
+* [**Workspace data transformations for standard logs**](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr): It uses [data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information (a transformation sketch appears later in this module). You can configure data transformation at ingestion time for the following types of built-in data connectors:
+ * Azure Monitor agent (AMA)-based data connectors
+ * Microsoft Monitoring agent (MMA)-based data connectors (the MMA is also known as the legacy Log Analytics agent)
+ * Data connectors that use diagnostics settings
 * [Service-to-service data connectors](data-connectors-reference.md)

For more information, see:

* [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md)
-* [Custom data ingestion and transformation in Microsoft Sentinel](configure-data-transformation.md)
* [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
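
As promised earlier in this module, here's a hedged sketch of the KQL snippet that a workspace transformation runs inside a data collection rule; `source` is the virtual input stream, and the column names are assumptions (transformations support a restricted KQL subset):

```kusto
// Runs at ingestion time inside a data collection rule.
source
| where SeverityLevel != "Verbose"    // filter out irrelevant rows before storage
| extend ClientIP = "0.0.0.0"         // mask a (hypothetical) sensitive field
```

### Module 8: Migration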
-In many (if not most) cases, you already have a SIEM and need to migrate to Microsoft Sentinel. While it may be a good time to start over, and rethink your SIEM implementation, it makes sense to utilize some of the assets you already built in your current implementation. Watch the webinar describing best practices for converting detection rules from Splunk, QRadar, and ArcSight to Azure Sentinel Rules: [YouTube](https://youtu.be/njXK1h9lfR4), [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5), [blog](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+In many (if not most) cases, you already have a SIEM and need to migrate to Microsoft Sentinel. Although it might be a good time to start over and rethink your SIEM implementation, it makes sense to utilize some of the assets you've already built in your current implementation. View the "Best practices for converting detection rules" (from Splunk, QRadar, and ArcSight to Microsoft Sentinel) webinar: [YouTube](https://youtu.be/njXK1h9lfR4), [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5), or [blog](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
-You might also be interested in some of the following resources:
+You might also be interested in the following resources:
-* [Splunk SPL to KQL mappings](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md)
+* [Splunk Search Processing Language (SPL) to KQL mappings](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md)
* [ArcSight and QRadar rule mapping samples](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/Rule%20Logic%20Mappings.md)
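
To give a flavor of what these mappings cover, here's an illustrative (not authoritative) translation of a simple Splunk search into KQL:

```kusto
// Splunk SPL:  index=linux "failed password" | stats count by host
// A rough KQL equivalent over the built-in Syslog table:
Syslog
| where SyslogMessage has "failed password"
| summarize count() by HostName
```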
-### Module 9: Advanced SIEM Information Model (ASIM) and Normalization
+### Module 9: Advanced SIEM information model and normalization
-Working with various data types and tables together presents a challenge. You must become familiar with different data types and schemas, write and use a unique set of analytics rules, workbooks, and hunting queries. Correlation between the different data types necessary for investigation and hunting can also be tricky.
+Working with varied data types and tables together can present a challenge. You must become familiar with those data types and schemas as you're writing and using a unique set of analytics rules, workbooks, and hunting queries. Correlating among the data types that are necessary for investigation and hunting can also be tricky.
-The **Advanced SIEM Information Model (ASIM)** provides a seamless experience for handling various sources in uniform, normalized views. ASIM aligns with the Open-Source Security Events Metadata (OSSEM) common information model, promoting vendor agnostic, industry-wide normalization. Watch the Advanced SIEM Information Model (ASIM): Now built into Microsoft Sentinel webinar: YouTube, Deck.
+The Advanced SIEM information model (ASIM) provides a seamless experience for handling various sources in uniform, normalized views. ASIM aligns with the Open-Source Security Events Metadata (OSSEM) common information model, promoting vendor-agnostic, industry-wide normalization. View the "Advanced SIEM information model (ASIM): Now built into Microsoft Sentinel" webinar: [YouTube](https://www.youtube.com/watch?v=Cf4wu_ujhG4&ab_channel=MicrosoftSecurityCommunity) or [presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212459&ithint=file%2Cpdf&authkey=%21AD3Hp0A%5Ft2%5FbEH4).
-The current implementation is based on query time normalization using KQL functions:
+The current implementation is based on query time normalization, which uses KQL functions:
* **Normalized schemas** cover standard sets of predictable event types that are easy to work with and build unified capabilities. The schema defines which fields should represent an event, a normalized column naming convention, and a standard format for the field values.
- * Watch the _Understanding Normalization in Microsoft Sentinel_ webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
- * Watch the _Deep Dive into Microsoft Sentinel Normalizing Parsers and Normalized Content_ webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
-* **Parsers** map existing data to the normalized schemas. Parsers are implemented using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). Watch the _Extend and Manage ASIM: Developing, Testing and Deploying Parsers_ webinar: [YouTube](https://youtu.be/NHLdcuJNqKw), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk0_k0zs21rL7euHp?e=5XkTnW).
-* **Content** for each normalized schema includes analytics rules, workbooks, hunting queries. This content works on any normalized data without the need to create source-specific content.
-
+ * View the "Understanding normalization in Microsoft Sentinel" webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
+ * View the "Deep Dive into Microsoft Sentinel normalizing parsers and normalized content" webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
+
+* **Parsers** map existing data to the normalized schemas. You implement parsers by using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). View the "Extend and manage ASIM: Developing, testing and deploying parsers" webinar: [YouTube](https://youtu.be/NHLdcuJNqKw) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk0_k0zs21rL7euHp?e=5XkTnW).
+
+* **Content** for each normalized schema includes analytics rules, workbooks, and hunting queries. This content works on any normalized data without the need to create source-specific content.
Using ASIM provides the following benefits:
-* **Cross source detection**: Normalized analytic rules work across sources, on-premises and cloud, now detecting attacks such as brute force or impossible travel across systems including Okta, AWS, and Azure.
-* **Allows source agnostic content**: the coverage of built-in and custom content using ASIM automatically expands to any source that supports ASIM, even if the source was added after the content was created. For example, process event analytics support any source that a customer may use to bring in the data, including Microsoft Defender for Endpoint, Windows Events, and Sysmon. We're ready to add [Sysmon for Linux](https://twitter.com/markrussinovich/status/1283039153920368651?lang=en) and WEF once released!
+* **Cross-source detection**: Normalized analytic rules work across sources, on-premises and in the cloud, detecting attacks such as brute force or impossible travel across systems including Okta, AWS, and Azure. (A normalized query sketch follows this list.)
+
+* **Source-agnostic content**: The coverage of built-in and custom content that uses ASIM automatically expands to any source that supports ASIM, even if the source was added after the content was created. For example, process event analytics support any source that a customer might use to bring in the data, including Microsoft Defender for Endpoint, Windows Events, and Sysmon. We're ready to add [Sysmon for Linux](https://twitter.com/markrussinovich/status/1283039153920368651?lang=en) and WEF when they're released.
+
+* **Support for your custom sources in built-in analytics**: By writing ASIM parsers for your custom sources, you let built-in analytics cover those sources too.
-* **Ease of use:** once an analyst learns ASIM, writing queries is much simpler as the field names are always the same.
+* **Ease of use**: Analysts who learn ASIM find it much simpler to write queries because the field names are always the same.
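
As a concrete illustration of these benefits, a single normalized query can serve every connected DNS source. A minimal sketch, assuming the ASIM DNS parsers (`_Im_Dns`) are available in the workspace and `contoso.com` is just a placeholder domain:

```kusto
// _Im_Dns presents DNS events from all connected sources in one normalized schema,
// so this query keeps working no matter which DNS sources feed the workspace.
_Im_Dns(starttime=ago(1d), domain_has_any=dynamic(["contoso.com"]))
| summarize Queries = count() by SrcIpAddr
| order by Queries desc
```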
+
+#### Learn more about ASIM
+
+Take advantage of these resources:
-#### To learn more about ASIM:
+* View the "Understanding normalization in Azure Sentinel" overview webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
-* Watch the overview webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG) .
-* Watch the _Deep Dive into Microsoft Sentinel Normalizing Parsers and Normalized Content_ webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
-* Watch the _Turbocharging ASIM: Making Sure Normalization Helps Performance Rather Than Impacting It_ webinar: [YouTube](https://youtu.be/-dg_0NBIoak), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmjk5AfH32XSdoVzTJ?e=a6hCHb), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjnQITNn35QafW5V2?e=GnCDkA).
-* Read the [documentation](https://aka.ms/AzSentinelNormalization).
+* View the "Deep dive into Microsoft Sentinel normalizing parsers and normalized content" webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
-#### To Deploy ASIM:
+* View the "Turbocharge ASIM: Make sure normalization helps performance rather than impact it" webinar: [YouTube](https://youtu.be/-dg_0NBIoak), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmjk5AfH32XSdoVzTJ?e=a6hCHb), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjnQITNn35QafW5V2?e=GnCDkA).
-* Deploy the parsers from the folders starting with "ASIM*" in the [parsers](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers) folder on GitHub.
-* Activate analytic rules that use ASIM. Search for "normal" in the template gallery to find some of them. To get the full list, use this [GitHub search](https://github.com/search?q=ASIM+repo%3AAzure%2FAzure-Sentinel+path%3A%2Fdetections&type=Code&ref=advsearch&l=&l=).
+* Read the [ASIM documentation](https://aka.ms/AzSentinelNormalization).
-#### To Use ASIM:
+#### Deploy ASIM
-* Use the [ASIM hunting queries from GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries)
-* Use ASIM queries when using KQL in the log screen.
-* Write your own analytic rules using ASIM or [convert existing ones](normalization.md).
-* Write [parsers](normalization.md#asim-components) for your custom sources to make them ASIM compatible and take part in built-in analytics
+* Deploy the parsers from the folders whose names start with "ASIM" in the [*parsers*](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers) folder on GitHub.
-## Part 3: Creating Content
+* Activate analytic rules that use ASIM. Search for **normal** in the template gallery to find some of them. To get the full list, use this [GitHub search](https://github.com/search?q=ASIM+repo%3AAzure%2FAzure-Sentinel+path%3A%2Fdetections&type=Code&ref=advsearch&l=&l=).
-What is Microsoft Sentinel's content?
+#### Use ASIM
-Microsoft Sentinel's security value is a combination of its built-in capabilities and your capability to create custom ones and customize the built-in ones. Among built-in capabilities, there are UEBA, Machine Learning or out-of-the-box analytics rules. Customized capabilities are often referred to as "content" and include analytic rules, hunting queries, workbooks, playbooks, etc.
+* Use the [ASIM hunting queries from GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries).
-In this section, we grouped the modules that help you learn how to create such content or modify built-in-content to your needs. We start with KQL, the Lingua Franca of Azure Sentinel. The following modules discuss one of the content building blocks such as rules, playbooks, and workbooks. We wrap up by discussing use cases, which encompass elements of different types to address specific security goals such as threat detection, hunting, or governance.
+* Use ASIM queries when you're using KQL on the log screen.
-### Module 10: The Kusto Query Language (KQL)
+* Write your own analytics rules by using ASIM, or [convert existing rules](normalization.md).
-Most Microsoft Sentinel capabilities use [KQL or Kusto Query Language](/azure/data-explorer/kusto/query/). When you search in your logs, write rules, create hunting queries, or design workbooks, you use KQL.
+* Write [parsers](normalization.md#asim-components) for your custom sources to make them ASIM-compatible, and take part in built-in analytics.
+
+## Part 3: Creating content
+
+What is Microsoft Sentinel content?
+
+The value of Microsoft Sentinel security is a combination of its built-in capabilities and your ability to create custom capabilities and customize the built-in ones. Built-in capabilities include User and Entity Behavior Analytics (UEBA), machine learning, and out-of-the-box analytics rules. Customized capabilities are often referred to as "content" and include analytic rules, hunting queries, workbooks, playbooks, and so on.
+
+In this section, we've grouped the modules that help you learn how to create such content or modify built-in content to your needs. We start with KQL, the lingua franca of Microsoft Sentinel. Each of the following modules discusses one of the content building blocks, such as rules, playbooks, and workbooks. The section wraps up by discussing use cases, which encompass elements of different types that address specific security goals, such as threat detection, hunting, or governance.
+
+### Module 10: Kusto Query Language
+
+Most Microsoft Sentinel capabilities use [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). When you search in your logs, write rules, create hunting queries, or design workbooks, you use KQL.
The next section on writing rules explains how to use KQL in the specific context of SIEM rules.
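
Before you dive into the resources that follow, it may help to see the shape of a typical Microsoft Sentinel query: pick a table, filter by time, narrow down, and aggregate. A minimal sketch over the built-in `SigninLogs` table (the threshold is arbitrary):

```kusto
// Count failed Azure AD sign-ins per user and source IP over the last day.
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"                  // "0" means success
| summarize FailedCount = count() by UserPrincipalName, IPAddress
| where FailedCount > 10
| order by FailedCount desc
```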
-#### Below is the recommended journey for learning Sentinel KQL:
-* [Pluralsight KQL course](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch) - the basics
-* [Must Learn KQL](https://aka.ms/MustLearnKQL) - A 20-part KQL series that walks through the basics to creating your first Analytics Rule. Includes an assessment and certificate.
-* The Microsoft Sentinel KQL Lab: An interactive lab teaching KQL focusing on what you need for Microsoft Sentinel:
+#### The recommended journey for learning Microsoft Sentinel KQL
+
+* [Pluralsight KQL course](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch): Gives you the basics
+
+* [Must Learn KQL](https://aka.ms/MustLearnKQL): A 20-part KQL series that walks you from the basics through creating your first analytics rule (includes an assessment and certificate)
+
+* The Microsoft Sentinel KQL Lab: An interactive lab that teaches KQL with a focus on what you need for Microsoft Sentinel:
* [Learning module (SC-200 part 4)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
- * [Presentation](https://onedrive.live.com/?authkey=%21AJRxX475AhXGQBE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21740&parId=66C31D2DBF8E0F71%21446&o=OneUp), [Lab URL](https://aka.ms/lademo)
- * a [Jupyter Notebooks version](https://github.com/jjsantanna/azure_sentinel_learn_kql_lab/blob/master/azure_sentinel_learn_kql_lab.ipynb), which let you test the queries within the notebook.
- * Learning webinar: [YouTube](https://youtu.be/EDCBLULjtCM), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmglwAjUjmYy2Qn5J-);
- * Reviewing lab solutions webinar: [YouTube](https://youtu.be/YKD_OFLMpf8), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmg0EKIi5gwXyccB44?e=sF6UG5)
-* [Pluralsight Advanced KQL course](https://www.pluralsight.com/courses/microsoft-azure-data-explorer-advanced-query-capabilities)
-* _Optimizing Azure Sentinel KQL queries performance_: [YouTube](https://youtu.be/jN1Cz0JcLYU), [MP4](https://aka.ms/AzS_09SEP20_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmg2imjIS8NABc26b-?e=rXZrR5).
-* Using ASIM in your KQL queries: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
-* _KQL Framework for Microsoft Sentinel - Empowering You to Become KQL-Savvy:_ [YouTube](https://youtu.be/j7BQvJ-Qx_k), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkgqKSV-m1QWgkzKT?e=QAilwu).
+ * [Presentation](https://onedrive.live.com/?authkey=%21AJRxX475AhXGQBE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21740&parId=66C31D2DBF8E0F71%21446&o=OneUp) and [lab URL](https://aka.ms/lademo)
+ * A [Jupyter notebooks version](https://github.com/jjsantanna/azure_sentinel_learn_kql_lab/blob/master/azure_sentinel_learn_kql_lab.ipynb) that lets you test the queries within the notebook
+ * Learning webinar: [YouTube](https://youtu.be/EDCBLULjtCM) or [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmglwAjUjmYy2Qn5J-)
+ * Reviewing lab solutions webinar: [YouTube](https://youtu.be/YKD_OFLMpf8) or [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmg0EKIi5gwXyccB44?e=sF6UG5)
-You might also find the following references useful as you learn KQL:
+* [Pluralsight advanced KQL course](https://www.pluralsight.com/courses/microsoft-azure-data-explorer-advanced-query-capabilities)
+
+* "Optimizing Azure Microsoft Sentinel KQL queries performance" webinar: [YouTube](https://youtu.be/jN1Cz0JcLYU), [MP4](https://aka.ms/AzS_09SEP20_MP4), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmg2imjIS8NABc26b-?e=rXZrR5)
+
+* "Using ASIM in your KQL queries": [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+
+* "KQL framework for Microsoft Sentinel: Empowering you to become KQL-savvy" webinar: [YouTube](https://youtu.be/j7BQvJ-Qx_k) or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkgqKSV-m1QWgkzKT?e=QAilwu)
+
+As you learn KQL, you might also find the following references useful:
* [The KQL Cheat Sheet](https://www.mbsecure.nl/blog/2019/12/kql-cheat-sheet)
* [Query optimization best practices](../azure-monitor/logs/query-optimization.md)

### Module 11: Analytics
-#### Writing Scheduled Analytics Rules
+#### Writing scheduled analytics rules
-Microsoft Sentinel enables you to use [built-in rule templates](detect-threats-built-in.md), customize the templates for your environment, or create custom rules. The core of the rules is a KQL query; however, there's much more than that to configure in a rule.
+With Microsoft Sentinel, you can use [built-in rule templates](detect-threats-built-in.md), customize the templates for your environment, or create custom rules. The core of the rules is a KQL query; however, there's much more than that to configure in a rule.
-To learn the procedure for creating rules, read the [documentation](detect-threats-custom.md). To learn how to write rules, that is, what should go into a rule, focusing on KQL for rules, watch the webinar: [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmghlWrlBCPKwT5WTT), [YouTube](https://youtu.be/pJjljBT4ipQ), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgmffNHf0wqmNEqdx).
+To learn the procedure for creating rules, see [Create custom analytics rules to detect threats](detect-threats-custom.md). To learn how to write rules (that is, what should go into a rule, focusing on KQL for rules), view the webinar: [YouTube](https://youtu.be/pJjljBT4ipQ), [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmghlWrlBCPKwT5WTT), or [presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgmffNHf0wqmNEqdx).
SIEM analytics rules have specific patterns. Learn how to implement rules and write KQL for those patterns:
-* **Correlation rules**: [using lists and the "in" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-active-lists-out-make-list-in/ba-p/1029225) or using the ["join" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500)
-* **Aggregation**: see using lists and the "in" operator above, or a more [advanced pattern handling sliding windows](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/handling-sliding-windows-in-azure-sentinel-rules/ba-p/1505394)
-* **Lookups**: Regular, or Approximate, partial & combined lookups
+* **Correlation rules**: See [Using lists and the "in" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-active-lists-out-make-list-in/ba-p/1029225) or [using the "join" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500); a minimal sketch of the first pattern follows this list
+
+* **Aggregation**: See [Using lists and the "in" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-active-lists-out-make-list-in/ba-p/1029225), or a more [advanced pattern for handling sliding windows](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/handling-sliding-windows-in-azure-sentinel-rules/ba-p/1505394)
+
+* **Lookups**: Regular lookups, or approximate, partial, and combined lookups
+
+* **Handling false positives**
-* **Delayed events:** are a fact of life in any SIEM and are hard to tackle. Microsoft Sentinel can help you mitigate delays in your rules.
-* Using KQL functions as **building blocks**: Enriching Windows Security Events with Parameterized Function.
-To blog post ["Blob and File Storage Investigations"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-ignite-2021-blob-and-file-storage-investigations/ba-p/2175138) provides a step by step example of writing a useful analytic rule.
+* **Delayed events:** A fact of life in any SIEM, and they're hard to tackle. Microsoft Sentinel can help you mitigate delays in your rules.
+
+* **Use KQL functions as *building blocks***: Enrich Windows Security Events with parameterized functions.
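Here's the minimal sketch of the lists-and-"in" correlation pattern referenced in the list above. The watchlist name `HighValueAssets` and its `HostName` field are hypothetical; the sketch also assumes Windows security events are collected.

```kusto
// Correlate failed logons (event 4625) against a list of high-value hosts.
// 'HighValueAssets' and its 'HostName' field are hypothetical examples.
let highValueHosts = _GetWatchlist('HighValueAssets') | project HostName;
SecurityEvent
| where EventID == 4625
| where Computer in (highValueHosts)
| summarize FailedLogons = count() by Computer, bin(TimeGenerated, 1h)
| where FailedLogons > 10
```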
-#### Using Built-in Analytics
+The blog post ["Blob and File storage investigations"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-ignite-2021-blob-and-file-storage-investigations/ba-p/2175138) provides a step-by-step example of writing a useful analytic rule.
-Before embarking on your own rule writing, you should take advantage of the built-in analytics capabilities. They don't require much from you, but it's worthwhile learning about them:
+#### Using built-in analytics
-* Use the [built-in scheduled rule templates](detect-threats-built-in.md). You can tune those templates by modifying the templates the same way to edit any scheduled rule. Make sure to deploy the templates for the data connectors you connect listed in the data connector "next steps" tab.
-* Learn more about Microsoft Sentinel's [Machine learning capabilities](bring-your-own-ml.md): [MP4](https://onedrive.live.com/?authkey=%21ANHkqv1CC1rX0JE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21772&parId=66C31D2DBF8E0F71%21770&o=OneUp), [YouTube](https://www.youtube.com/watch?v=DxZXHvq1jOs&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21ACovlR%2DY24o1rzU&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21773&parId=66C31D2DBF8E0F71%21770&o=OneUp)
-* Find the list of Microsoft Sentinel's [Advanced multi-stage attack detections ("Fusion") ](fusion.md) that are enabled by default.
-* Watch the Fusion ML Detections with Scheduled Analytics Rules webinar: [YouTube](https://www.youtube.com/watch?v=Ee7gBAQ2Dzc), [MP4](https://onedrive.live.com/?authkey=%21AJzpplg3agpLKdo&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211663&parId=66C31D2DBF8E0F71%211654&o=OneUp), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%211674&ithint=file%2Cpdf&authkey=%21AD%5F1AN14N3W592M).
-* Learn more about Azure Sentinel's built-in SOC-ML anomalies [here](soc-ml-anomalies.md).
-* Watch the customized SOC-ML anomalies and how to use them webinar here: [YouTube](https://www.youtube.com/watch?v=z-suDfFgSsk&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AJVEGsR4ym8hVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211742&parId=66C31D2DBF8E0F71%211720&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AFqylaqbAGZAIfA&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211729&parId=66C31D2DBF8E0F71%211720&o=OneUp).
-* Watch the Fusion ML Detections for Emerging Threats & Configuration UI webinar here: [YouTube](https://www.youtube.com/watch?v=bTDp41yMGdk), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212287&ithint=file%2Cpdf&authkey=%21AIJICOTqjY7bszE).
+Before you embark on your own rule writing, consider taking advantage of the built-in analytics capabilities. They don't require much from you, but it's worthwhile learning about them:
+
+* Use the [built-in scheduled rule templates](detect-threats-built-in.md). You can tune those templates by modifying them the same way you edit any scheduled rule. Be sure to deploy the templates for the data connectors you connect, as listed on each data connector's **Next steps** tab.
+
+* Learn more about Microsoft Sentinel [machine learning capabilities](bring-your-own-ml.md): [YouTube](https://www.youtube.com/watch?v=DxZXHvq1jOs&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ANHkqv1CC1rX0JE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21772&parId=66C31D2DBF8E0F71%21770&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21ACovlR%2DY24o1rzU&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21773&parId=66C31D2DBF8E0F71%21770&o=OneUp).
+
+* Get the list of Microsoft Sentinel [advanced, multi-stage attack detections (Fusion)](fusion.md), which are enabled by default.
+* View the "Fusion machine learning detections with scheduled analytics rules" webinar: [YouTube](https://www.youtube.com/watch?v=Ee7gBAQ2Dzc), [MP4](https://onedrive.live.com/?authkey=%21AJzpplg3agpLKdo&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211663&parId=66C31D2DBF8E0F71%211654&o=OneUp), or [presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%211674&ithint=file%2Cpdf&authkey=%21AD%5F1AN14N3W592M).
+
+* Learn more about [Microsoft Sentinel built-in SOC-machine learning anomalies](soc-ml-anomalies.md).
+
+* View the "Customized SOC-machine learning anomalies and how to use them" webinar: [YouTube](https://www.youtube.com/watch?v=z-suDfFgSsk&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AJVEGsR4ym8hVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211742&parId=66C31D2DBF8E0F71%211720&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AFqylaqbAGZAIfA&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211729&parId=66C31D2DBF8E0F71%211720&o=OneUp).
+
+* View the "Fusion machine learning detections for emerging threats and configuration UI" webinar: [YouTube](https://www.youtube.com/watch?v=bTDp41yMGdk) or [presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212287&ithint=file%2Cpdf&authkey=%21AIJICOTqjY7bszE).
### Module 12: Implementing SOAR
-In modern SIEMs such as Microsoft Sentinel, SOAR (Security Orchestration, Automation, and Response) comprises the entire process from the moment an incident is triggered and until it's resolved. This process starts with an [incident investigation](investigate-cases.md) and continues with an [automated response](tutorial-respond-threats-playbook.md). The blog post ["How to use Microsoft Sentinel for Incident Response, Orchestration and Automation"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397) provides an overview of common use cases for SOAR.
+In modern SIEMs, such as Microsoft Sentinel, security orchestration, automation, and response (SOAR) makes up the entire process from the moment an incident is triggered until it's resolved. This process starts with an [incident investigation](investigate-cases.md) and continues with an [automated response](tutorial-respond-threats-playbook.md). The blog post ["How to use Microsoft Sentinel for Incident Response, Orchestration and Automation"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397) provides an overview of common use cases for SOAR.
+
+[Automation rules](automate-incident-handling-with-automation-rules.md) are the starting point for Microsoft Sentinel automation. They provide a lightweight method of centralized, automated handling of incidents, including suppression, [false-positive handling](false-positives.md), and automatic assignment.
+
+To provide robust workflow-based automation capabilities, automation rules use [Logic Apps playbooks](automate-responses-with-playbooks.md). To learn more:
+
+* View the "Unleash the automation Jedi tricks and build Logic Apps playbooks like a boss" webinar: [YouTube](https://www.youtube.com/watch?v=G6TIzJK8XBA&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AMHoD01Fnv0Nkeg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21513&parId=66C31D2DBF8E0F71%21511&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AJK2W6MaFrzSzpw&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21514&parId=66C31D2DBF8E0F71%21511&o=OneUp).
-[Automation rules](automate-incident-handling-with-automation-rules.md) are the starting point for Microsoft Sentinel automation. They provide a lightweight method for central automated handling of incidents, including suppression,[ false-positive handling](false-positives.md), and automatic assignment.
+* Read about [Logic Apps](../logic-apps/logic-apps-overview.md), which is the core technology that drives Microsoft Sentinel playbooks.
-To provide robust workflow based automation capabilities, automation rules use [Logic App playbooks](automate-responses-with-playbooks.md):
-* Watch the Unleash the automation Jedi tricks & build Logic Apps Playbooks like a Boss Webinar: [YouTube](https://www.youtube.com/watch?v=G6TIzJK8XBA&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AMHoD01Fnv0Nkeg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21513&parId=66C31D2DBF8E0F71%21511&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AJK2W6MaFrzSzpw&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21514&parId=66C31D2DBF8E0F71%21511&o=OneUp).
-* Read about [Logic Apps](../logic-apps/logic-apps-overview.md), which is the core technology driving Microsoft Sentinel playbooks.
-*[ The Microsoft Sentinel Logic App connector](/connectors/azuresentinel/) is a link between Logic Apps and Azure Sentinel.
+* See [The Microsoft Sentinel Logic Apps connector](/connectors/azuresentinel/), the link between Logic Apps and Microsoft Sentinel.
-You can find dozens of useful Playbooks in the [Playbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) on the [Microsoft Sentinel GitHub](https://github.com/Azure/Azure-Sentinel), or read [_A playbook using a watchlist to Inform a subscription owner about an alert_](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/playbooks-amp-watchlists-part-1-inform-the-subscription-owner/ba-p/1768917) for a Playbook walkthrough.
+You'll find dozens of useful playbooks in the [*Playbooks* folder](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) on the [Microsoft Sentinel GitHub](https://github.com/Azure/Azure-Sentinel) site, or read [A playbook using a watchlist to inform a subscription owner about an alert](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/playbooks-amp-watchlists-part-1-inform-the-subscription-owner/ba-p/1768917) for a playbook walkthrough.
### Module 13: Workbooks, reporting, and visualization

#### Workbooks
-As the nerve center of your SOC, you need Microsoft Sentinel to visualize the information it collects and produces. Use workbooks to visualize data in Microsoft Sentinel.
+As the nerve center of your SOC, Microsoft Sentinel needs to visualize the information it collects and produces. Use workbooks to visualize data in Microsoft Sentinel.
-* To learn how to create workbooks, read the [documentation](../azure-monitor/visualize/workbooks-overview.md) or watch Billy York's [Workbooks training](https://www.youtube.com/watch?v=iGiPpD_-10M&ab_channel=FestiveTechCalendar) (and [accompanying text](https://www.cloudsma.com/2019/12/azure-advent-calendar-azure-monitor-workbooks/).
-* The mentioned resources aren't Microsoft Sentinel specific, and apply to Microsoft Workbooks in general. To learn more about Workbooks in Microsoft Sentinel, watch the Webinar: [YouTube](https://www.youtube.com/watch?v=7eYNaYSsk1A&list=PLmAptfqzxVEUD7-w180kVApknWHJCXf0j&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALoa5KFEhBq2DyQ&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21373&parId=66C31D2DBF8E0F71%21372&o=OneUp), [Presentation](https://onedrive.live.com/view.aspx?resid=66C31D2DBF8E0F71!374&ithint=file%2cpptx&authkey=!AD5hvwtCTeHvQLQ), and read the [documentation](monitor-your-data.md).
+* To learn how to create workbooks, read the [Azure Workbooks documentation](../azure-monitor/visualize/workbooks-overview.md) or watch Billy York's [Workbooks training](https://www.youtube.com/watch?v=iGiPpD_-10M&ab_channel=FestiveTechCalendar) (and [accompanying text](https://www.cloudsma.com/2019/12/azure-advent-calendar-azure-monitor-workbooks/)).
+
+* The mentioned resources aren't Microsoft Sentinel-specific; they apply to workbooks in general. To learn more about workbooks in Microsoft Sentinel, view the webinar ([YouTube](https://www.youtube.com/watch?v=7eYNaYSsk1A&list=PLmAptfqzxVEUD7-w180kVApknWHJCXf0j&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALoa5KFEhBq2DyQ&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21373&parId=66C31D2DBF8E0F71%21372&o=OneUp), or [presentation](https://onedrive.live.com/view.aspx?resid=66C31D2DBF8E0F71!374&ithint=file%2cpptx&authkey=!AD5hvwtCTeHvQLQ)) and read the [documentation](monitor-your-data.md).
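Under the hood, each workbook visualization is driven by a KQL query. A minimal sketch of a workbook-style query, assuming incidents are flowing into the `SecurityIncident` table:

```kusto
// Chart daily incident volume for the last month; in a workbook, the same
// query feeds a time-chart visualization.
SecurityIncident
| where TimeGenerated > ago(30d)
| summarize Incidents = dcount(IncidentNumber) by bin(TimeGenerated, 1d)
| render timechart
```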
-Workbooks can be interactive and enable much more than just charting. With Workbooks, you can create apps or extension modules for Microsoft Sentinel to complement built-in functionality. We also use workbooks to extend the features of Microsoft Sentinel. Few examples of such apps you can both use and learn from are:
-* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach for investigating incidents.
-* [Graph Visualization of External Teams Collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847) enables hunting for risky Teams use.
-* The [users' travel map workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-to-follow-a-users-travel-and-map-their/ba-p/981716) allows investigating geo-location alerts.
+Workbooks can be interactive and enable much more than just charting. With workbooks, you can create apps or extension modules for Microsoft Sentinel to complement its built-in functionality and extend its features. Here are a few examples of such apps that you can both use and learn from:
+
+* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach to investigating incidents.
+
+* [Graph visualization of external Teams collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847) enables hunting for risky Teams use.
+
+* The [users' travel map workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-to-follow-a-users-travel-and-map-their/ba-p/981716) allows you to investigate geo-location alerts.
-* The insecure protocols workbook ([Implementation Guide](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564), [recent enhancements](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-reimagined/ba-p/1558375), and [overview video](https://www.youtube.com/watch?v=xzHDWbBX6h8&list=PLmAptfqzxVEWkrUwV-B1Ob3qW-QPW_Ydu&index=9&ab_channel=MicrosoftSecurityCommunity)) lets you identify the use of insecure protocols in your network.
+* The insecure protocols workbook ([implementation guide](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564), [recent enhancements](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-reimagined/ba-p/1558375), and [overview video](https://www.youtube.com/watch?v=xzHDWbBX6h8&list=PLmAptfqzxVEWkrUwV-B1Ob3qW-QPW_Ydu&index=9&ab_channel=MicrosoftSecurityCommunity)) helps you identify the use of insecure protocols in your network.
-* Lastly, learn how to [integrate information from any source using API calls in a workbook](https://techcommunity.microsoft.com/t5/azure-sentinel/using-the-sentinel-api-to-view-data-in-a-workbook/ba-p/1386436).
+* Finally, learn how to [integrate information from any source by using API calls in a workbook](https://techcommunity.microsoft.com/t5/azure-sentinel/using-the-sentinel-api-to-view-data-in-a-workbook/ba-p/1386436).
-You can find dozens of workbooks in the [Workbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Workbooks) in the [Microsoft Sentinel GitHub](https://github.com/Azure/Azure-Sentinel). Some of them are available in the Microsoft Sentinel workbooks gallery as well.
+You'll find dozens of workbooks in the [*Workbooks* folder](https://github.com/Azure/Azure-Sentinel/tree/master/Workbooks) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel). Some of them are available in the Microsoft Sentinel workbooks gallery as well.
#### Reporting and other visualization options
-Workbooks can serve for reporting. For more advanced reporting capabilities such as reports scheduling and distribution or pivot tables, you might want to use:
-* Power BI, which natively [integrates with Log Analytics and Sentinel](../azure-monitor/logs/log-powerbi.md).
-* Excel, which can use [Log Analytics and Sentinel as the data source](../azure-monitor/logs/log-excel.md) (and see [video](https://www.youtube.com/watch?v=Rx7rJhjzTZA) on how).
-* Jupyter notebooks covered later in the hunting module are also a great visualization tool.
+Workbooks can be used for reporting. For more advanced reporting capabilities, such as report scheduling and distribution or pivot tables, you might want to use:
+
+* Power BI, which natively [integrates with Azure Monitor Logs and Microsoft Sentinel](../azure-monitor/logs/log-powerbi.md).
+
+* Excel, which can use [Azure Monitor Logs and Microsoft Sentinel as the data source](../azure-monitor/logs/log-excel.md) (see this [video](https://www.youtube.com/watch?v=Rx7rJhjzTZA) to learn how).
+
+* Jupyter notebooks, a topic that's covered later in the hunting module, are also a great visualization tool.
### Module 14: Notebooks
-Jupyter notebooks are fully integrated with Microsoft Sentinel. While considered an important tool in the hunter's tool chest and discussed the webinars in the hunting section below, their value is much broader. Notebooks can serve for advanced visualization, an investigation guide, and for sophisticated automation.
+Jupyter notebooks are fully integrated with Microsoft Sentinel. Although they're considered an important tool in the hunter's tool chest and are discussed in the webinars in the hunting section below, their value is much broader. Notebooks can serve for advanced visualization, as an investigation guide, and for sophisticated automation.
+
+To understand notebooks better, view the [Introduction to notebooks video](https://www.youtube.com/watch?v=TgRRJeoyAYw&ab_channel=MicrosoftSecurityCommunity). To get started, view the notebooks webinar ([YouTube](https://www.youtube.com/watch?v=rewdNeX6H94&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALXve0rEAhZOuP4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21778&parId=66C31D2DBF8E0F71%21776&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AEQpzVDAwzzen30&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21779&parId=66C31D2DBF8E0F71%21776&o=OneUp)) or read the [documentation](notebooks.md). The [Microsoft Sentinel Notebooks Ninja series](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/becoming-a-microsoft-sentinel-notebooks-ninja-the-series/ba-p/2693491) is an ongoing training series to upskill you in notebooks.
-To understand them better, watch the [Introduction to notebooks video](https://www.youtube.com/watch?v=TgRRJeoyAYw&ab_channel=MicrosoftSecurityCommunity). Get started using the Notebooks webinar ([YouTube](https://www.youtube.com/watch?v=rewdNeX6H94&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALXve0rEAhZOuP4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21778&parId=66C31D2DBF8E0F71%21776&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AEQpzVDAwzzen30&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21779&parId=66C31D2DBF8E0F71%21776&o=OneUp)) or read the [documentation](notebooks.md). The [Microsoft Sentinel Notebooks Ninja series](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/becoming-a-microsoft-sentinel-notebooks-ninja-the-series/ba-p/2693491) is an ongoing training series to upskill you in Notebooks.
+An important part of the integration is implemented by [MSTICPy](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/msticpy-python-defender-tools/ba-p/648929), which is a Python library developed by our research team to be used with Jupyter notebooks. It adds Microsoft Sentinel interfaces and sophisticated security capabilities to your notebooks.
-An important part of the integration is implemented by [MSTICPY](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/msticpy-python-defender-tools/ba-p/648929), which is a Python library developed by our research team to be used with Jupyter notebooks. It adds Microsoft Sentinel interfaces and sophisticated security capabilities to your notebooks.
* [MSTICPy Fundamentals to Build Your Own Notebooks](https://www.youtube.com/watch?v=S0knTOnA2Rk&ab_channel=MicrosoftSecurityCommunity)
* [MSTICPy Intermediate to Build Your Own Notebooks](https://www.youtube.com/watch?v=Rpj-FS_0Wqg&ab_channel=MicrosoftSecurityCommunity)

### Module 15: Use cases and solutions
-Connectors, rules, playbooks, and workbooks enable you to implement **use cases**: the SIEM term for a content pack intended to detect and respond to a threat. You can deploy Sentinel built-in use cases by activating the suggested rules when connecting each Connector. A **solution** is a **group of use cases** addressing a specific threat domain.
+With connectors, rules, playbooks, and workbooks, you can implement *use cases*, which is the SIEM term for a content pack that's intended to detect and respond to a threat. You can deploy Microsoft Sentinel built-in use cases by activating the suggested rules when you're connecting each connector. A *solution* is a group of use cases that address a specific threat domain.
+
+The "Tackling Identity" webinar ([YouTube](https://www.youtube.com/watch?v=BcxiY32famg&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AFsVrhZwut8EnB4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21284&parId=66C31D2DBF8E0F71%21282&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21ACSAvdeLB7JfAX8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21283&parId=66C31D2DBF8E0F71%21282&o=OneUp)) explains what a use case is and how to approach its design, and it presents several use cases that collectively address identity threats.
-The Webinar **"Tackling Identity"**([YouTube](https://www.youtube.com/watch?v=BcxiY32famg&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AFsVrhZwut8EnB4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21284&parId=66C31D2DBF8E0F71%21282&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21ACSAvdeLB7JfAX8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21283&parId=66C31D2DBF8E0F71%21282&o=OneUp)) explains what a use case is, how to approach its design, and presents several use cases that collectively address identity threats.
+Another relevant solution area is *protecting remote work*. View our [Ignite session on protecting remote work](https://www.youtube.com/watch?v=09JfbjQdzpg&ab_channel=MicrosoftSecurity), and read more about the following specific use cases:
+
+* [Microsoft Teams hunting use cases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/protecting-your-teams-with-azure-sentinel/ba-p/1265761) and [Graph visualization of external Microsoft Teams collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847)
-Another relevant solution area is **protecting remote work**. Watch our [Ignite session on protection remote work](https://www.youtube.com/watch?v=09JfbjQdzpg&ab_channel=MicrosoftSecurity), and read more on the specific use cases:
-* [Microsoft Teams hunting use cases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/protecting-your-teams-with-azure-sentinel/ba-p/1265761) and [Graph Visualization of External Microsoft Teams Collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847)
* [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/monitoring-zoom-with-azure-sentinel/ba-p/1341516): custom connectors, analytic rules, and hunting queries.
-* [Monitoring Azure Virtual Desktop with Microsoft Sentinel](../virtual-desktop/diagnostics-log-analytics.md): use Windows Security Events, Azure AD Sign-in logs, Microsoft 365 Defender for Endpoints, and AVD diagnostics logs to detect and hunt for AVD threats.
-*[ Monitor Microsoft endpoint Manager / Intune](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/secure-working-from-home-deep-insights-at-enrolled-mem-assets/ba-p/1424255), using queries and workbooks.
-And lastly, focusing on recent attacks, learn how to [monitor the software supply chain with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/monitoring-the-software-supply-chain-with-azure-sentinel/ba-p/2176463).
+* [Monitoring Azure Virtual Desktop with Microsoft Sentinel](../virtual-desktop/diagnostics-log-analytics.md): use Windows Security Events, Azure Active Directory (Azure AD) sign-in logs, Microsoft Defender for Endpoint, and Azure Virtual Desktop diagnostics logs to detect and hunt for Azure Virtual Desktop threats.
+
+* [Monitor Microsoft Endpoint Manager / Intune](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/secure-working-from-home-deep-insights-at-enrolled-mem-assets/ba-p/1424255) by using queries and workbooks.
-**Microsoft Sentinel solutions** provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Microsoft Sentinel. Read more about them [here](sentinel-solutions.md), and watch the **webinar about how to create your own [here](https://www.youtube.com/watch?v=oYTgaTh_NOU&ab_channel=MicrosoftSecurityCommunity).** For more about Sentinel content management in general, watch the Microsoft Sentinel Content Management webinar - [YouTube](https://www.youtube.com/watch?v=oYTgaTh_NOU&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212201&ithint=file%2Cpdf&authkey=%21AIdsDXF3iluXd94).
+And finally, focusing on recent attacks, learn how to [monitor the software supply chain with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/monitoring-the-software-supply-chain-with-azure-sentinel/ba-p/2176463).
+
+Microsoft Sentinel solutions provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Microsoft Sentinel. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md), and view the "Create your own Microsoft Sentinel solutions" webinar: [YouTube](https://www.youtube.com/watch?v=oYTgaTh_NOU&ab_channel=MicrosoftSecurityCommunity) or [presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212201&ithint=file%2Cpdf&authkey=%21AIdsDXF3iluXd94).
## Part 4: Operating

### Module 16: Handling incidents
-After building your SOC, you need to start using it. The "day in a SOC analyst life" webinar ([YouTube](https://www.youtube.com/watch?v=HloK6Ay4h1M&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ACD%5F1nY2ND8MOmg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21273&parId=66C31D2DBF8E0F71%21271&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AAvOR9OSD51OZ8c&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21272&parId=66C31D2DBF8E0F71%21271&o=OneUp)) walks you through using Microsoft Sentinel in the SOC to **triage**, **investigate** and **respond** to incidents.
+After you build your SOC, you need to start using it. The "day in an SOC analyst's life" webinar ([YouTube](https://www.youtube.com/watch?v=HloK6Ay4h1M&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ACD%5F1nY2ND8MOmg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21273&parId=66C31D2DBF8E0F71%21271&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AAvOR9OSD51OZ8c&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21272&parId=66C31D2DBF8E0F71%21271&o=OneUp)) walks you through using Microsoft Sentinel in the SOC to *triage*, *investigate*, and *respond* to incidents.
+
+To help enable your teams to collaborate seamlessly across the organization and with external stakeholders, see [Integrating with Microsoft Teams directly from Microsoft Sentinel](collaborate-in-microsoft-teams.md). Also view the ["Decrease your SOC's MTTR (Mean Time to Respond) by integrating Microsoft Sentinel with Microsoft Teams"](https://www.youtube.com/watch?v=0REgc2jB560&ab_channel=MicrosoftSecurityCommunity) webinar.
-[Integrating with Microsoft Teams directly from Microsoft Sentinel](collaborate-in-microsoft-teams.md) enables your teams to collaborate seamlessly across the organization, and with external stakeholders. Watch the _Decrease Your SOC's MTTR (Mean Time to Respond) by Integrating Microsoft Sentinel with Microsoft Teams_ webinar [here](https://www.youtube.com/watch?v=0REgc2jB560&ab_channel=MicrosoftSecurityCommunity).
+You might also want to read the [documentation article on incident investigation](investigate-cases.md). As part of the investigation, you'll also use the [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages) to get more information about entities that are related to your incident or identified as part of your investigation.
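For example, a common triage starting point is to pull the current state of open, high-severity incidents. A minimal sketch; it relies on each incident writing a new row to `SecurityIncident` on every update, so only the latest row per incident is kept.

```kusto
// Latest state of open, high-severity incidents
// (arg_max keeps the most recent update per incident).
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status in ("New", "Active") and Severity == "High"
| project IncidentNumber, Title, Status, LastUpdate = TimeGenerated
```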
-You might also want to read the [documentation article on incident investigation](investigate-cases.md). As part of the investigation, you'll also use the [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages) to get more information about entities related to your incident or identified as part of your investigation.
+Incident investigation in Microsoft Sentinel extends beyond the core incident investigation functionality. You can build additional investigation tools by using workbooks and notebooks. Notebooks are discussed in the next section, [Module 17: Hunting](#module-17-hunting). You can also build more investigation tools or modify existing ones to your specific needs. Examples include:
-**Incident investigation** in Microsoft Sentinel extends beyond the core incident investigation functionality. We can build **additional investigation tools** using Workbooks and Notebooks (the latter are discussed later, under _Hunting_). You can also build more investigation tools or modify existing one to your specific needs. Examples include:
-* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach for investigating incidents.
-* Notebooks enhance the investigation experience. Read [_Why Use Jupyter for Security Investigations?_](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/why-use-jupyter-for-security-investigations/ba-p/475729) and learn how to investigate with Microsoft Sentinel & Jupyter Notebooks: [part 1](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921), [part 2](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/483466), and [part 3](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/561413).
+* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach to investigating incidents.
+
+* Notebooks enhance the investigation experience. Read [Why use Jupyter for security investigations?](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/why-use-jupyter-for-security-investigations/ba-p/475729), and learn how to investigate by using Microsoft Sentinel and Jupyter notebooks:
+ * [Part 1](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921)
+ * [Part 2](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/483466)
+ * [Part 3](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/561413)
### Module 17: Hunting
-While most of the discussion so far focused on detection and incident management, **hunting** is another important use case for Microsoft Sentinel. Hunting is a **proactive search for threats** rather than a reactive response to alerts.
+Although most of the discussion so far has focused on detection and incident management, *hunting* is another important use case for Microsoft Sentinel. Hunting is a **proactive search for threats** rather than a reactive response to alerts.
+
+The hunting dashboard is constantly updated. It shows all the queries that were written by the Microsoft team of security analysts and any extra queries that you've created or modified. Each query provides a description of what it's hunting for, and what kind of data it runs on. These templates are grouped by their various tactics. The icons at the right categorize the type of threat, such as initial access, persistence, and exfiltration. For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
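Hunting queries are ordinary KQL; a minimal sketch of a classic one, assuming Windows security events are collected:

```kusto
// Hunt for rarely seen processes, a common starting point for spotting
// persistence or living-off-the-land tooling (event 4688 = process creation).
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688
| summarize Executions = count(), Hosts = dcount(Computer) by NewProcessName
| where Executions < 5
| order by Executions asc
```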
-The hunting dashboard is constantly updated. It shows all the queries that were written by Microsoft's team of security analysts and any extra queries that you've created or modified. Each query provides a description of what it hunts for, and what kind of data it runs on. These templates are grouped by their various tactics - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. Read more about it [here](hunting.md).
+To understand more about what hunting is and how Microsoft Sentinel supports it, view the introductory "Threat hunting" webinar: [YouTube](https://www.youtube.com/watch?v=6ueR09PLoLU&t=1451s&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AO3gGrb474Bjmls&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21468&parId=66C31D2DBF8E0F71%21466&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AJ09hohPMbtbVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21469&parId=66C31D2DBF8E0F71%21466&o=OneUp). The webinar starts with an update on new features. To learn about hunting, start at slide 12. The YouTube video is already set to start there.
-To understand more about what hunting is and how Microsoft Sentinel supports it, watch the **Hunting Intro Webinar** ([YouTube](https://www.youtube.com/watch?v=6ueR09PLoLU&t=1451s&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AO3gGrb474Bjmls&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21468&parId=66C31D2DBF8E0F71%21466&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AJ09hohPMbtbVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21469&parId=66C31D2DBF8E0F71%21466&o=OneUp)). The webinar starts with an update on new features. To learn about hunting, start at slide 12. The YouTube link is already set to start there.
+Although the introductory webinar focuses on tools, hunting is all about security. Our security research team webinar ([YouTube](https://www.youtube.com/watch?v=BTEV_b6-vtg&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ADC2GvI1Yjlh%2D6E&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21276&parId=66C31D2DBF8E0F71%21274&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AF1uqmmrWbI3Mb8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21275&parId=66C31D2DBF8E0F71%21274&o=OneUp)) focuses on how to actually hunt.
-While the intro webinar focuses on tools, hunting is all about security. Our **security research team webinar on hunting** ([MP4](https://onedrive.live.com/?authkey=%21ADC2GvI1Yjlh%2D6E&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21276&parId=66C31D2DBF8E0F71%21274&o=OneUp), [YouTube](https://www.youtube.com/watch?v=BTEV_b6-vtg&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21AF1uqmmrWbI3Mb8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21275&parId=66C31D2DBF8E0F71%21274&o=OneUp)) focuses on how to actually hunt. The follow-up **AWS Threat Hunting using Sentinel Webinar** ([MP4](https://onedrive.live.com/?authkey=%21ADu7r7XMTmKyiMk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21336&parId=66C31D2DBF8E0F71%21333&o=OneUp), [YouTube](https://www.youtube.com/watch?v=bSH-JOKl2Kk&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21AA7UKQIj2wu1FiI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21334&parId=66C31D2DBF8E0F71%21333&o=OneUp)) really drives the point by showing an end-to-end hunting scenario on a high-value target environment. Lastly, you can learn how to do [SolarWinds Post-Compromise Hunting with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/solarwinds-post-compromise-hunting-with-azure-sentinel/ba-p/1995095) and [WebShell hunting](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/web-shell-threat-hunting-with-azure-sentinel/ba-p/2234968) motivated by the latest recent vulnerabilities in on-premises Microsoft Exchange servers.
+The follow-up webinar, "AWS threat hunting by using Microsoft Sentinel" ([YouTube](https://www.youtube.com/watch?v=bSH-JOKl2Kk&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ADu7r7XMTmKyiMk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21336&parId=66C31D2DBF8E0F71%21333&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AA7UKQIj2wu1FiI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21334&parId=66C31D2DBF8E0F71%21333&o=OneUp)) drives the point by showing an end-to-end hunting scenario on a high-value target environment.
+
+Finally, you can learn how to do [SolarWinds post-compromise hunting with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/solarwinds-post-compromise-hunting-with-azure-sentinel/ba-p/1995095) and [WebShell hunting](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/web-shell-threat-hunting-with-azure-sentinel/ba-p/2234968), motivated by recent vulnerabilities in on-premises Microsoft Exchange servers.
### Module 18: User and Entity Behavior Analytics (UEBA)
-Microsoft Sentinel newly introduced [User and Entity Behavior Analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md) module enables you to identify and investigate threats inside your organization and their potential impact - whether a compromised entity or a malicious insider.
+The newly introduced Microsoft Sentinel [User and Entity Behavior Analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md) module enables you to identify and investigate threats inside your organization and their potential impact, whether they come from a compromised entity or a malicious insider.
-As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as **users**, **hosts**, **IP addresses**, and **applications**) across time and peer group horizon. With various techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling.
+As Microsoft Sentinel collects logs and alerts from all its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as *users*, *hosts*, *IP addresses*, and *applications*) across time and peer-group horizon. Through various techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine whether an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling.
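Because the scored anomalies land in a queryable table, you can work with them directly. A minimal sketch, assuming UEBA is enabled (which populates the `BehaviorAnalytics` table):

```kusto
// Surface the highest-priority UEBA anomalies from the last week
// (InvestigationPriority is the 0-10 score Microsoft Sentinel assigns).
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where InvestigationPriority > 5
| project TimeGenerated, UserName, ActivityType, InvestigationPriority, SourceIPAddress
| order by InvestigationPriority desc
```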
-Learn more about UEBA in the _UEBA Webinar_ ([YouTube](https://www.youtube.com/watch?v=ixBotw9Qidg&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21ADXz0j2AO7Kgfv8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21515&parId=66C31D2DBF8E0F71%21508&o=OneUp), [MP4](https://onedrive.live.com/?authkey=%21AO0122hqWUkZTJI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211909&parId=66C31D2DBF8E0F71%21508&o=OneUp)) and read about using [UEBA for investigations in your SOC](https://techcommunity.microsoft.com/t5/azure-sentinel/guided-ueba-investigation-scenarios-to-empower-your-soc/ba-p/1857100).
+Learn more about UEBA by viewing the webinar ([YouTube](https://www.youtube.com/watch?v=ixBotw9Qidg&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AO0122hqWUkZTJI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211909&parId=66C31D2DBF8E0F71%21508&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21ADXz0j2AO7Kgfv8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21515&parId=66C31D2DBF8E0F71%21508&o=OneUp)), and read about using [UEBA for investigations in your SOC](https://techcommunity.microsoft.com/t5/azure-sentinel/guided-ueba-investigation-scenarios-to-empower-your-soc/ba-p/1857100).
-For watching the latest updates, see [Future of Users Entity Behavioral Analytics in Sentinel webinar](https://www.youtube.com/watch?v=dLVAkSLKLyQ&ab_channel=MicrosoftSecurityCommunity).
+To learn about the most recent updates, view the ["Future of Users Entity Behavioral Analytics in Microsoft Sentinel"](https://www.youtube.com/watch?v=dLVAkSLKLyQ&ab_channel=MicrosoftSecurityCommunity) webinar.
### Module 19: Monitoring Microsoft Sentinel's health
-Part of operating a SIEM is making sure it works smoothly and an evolving area in Azure Sentinel. Use the following to monitor Microsoft Sentinel's health:
+Part of operating a SIEM is making sure that it works smoothly, and this is an evolving area in Microsoft Sentinel. Use the following to monitor the health of Microsoft Sentinel:
+
+* Measure the efficiency of your [Security operations](manage-soc-with-incident-metrics.md#security-operations-efficiency-workbook) ([video](https://www.youtube.com/watch?v=jRucUysVpxI&ab_channel=MicrosoftSecurityCommunity)).
+
+* The *SentinelHealth* data table provides insights on health drifts, such as the latest failure events per connector or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md), view the ["Data Connectors Health Monitoring Workbook"](https://www.youtube.com/watch?v=T6Vyo7gZYds&ab_channel=MicrosoftSecurityCommunity) video, and [get notifications on anomalies](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/data-connector-health-push-notification-alerts/ba-p/1996442).
+
+* Monitor agents by using the [agents' health solution](../azure-monitor/insights/solution-agenthealth.md) (Windows only) and the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat) (Linux and Windows); see the query sketch after this list.
-* Measure the efficiency of your [Security operations](manage-soc-with-incident-metrics.md#security-operations-efficiency-workbook) ([video](https://www.youtube.com/watch?v=jRucUysVpxI&ab_channel=MicrosoftSecurityCommunity))
-* **SentinelHealth data table**. Provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. Find more information [here](/azure/sentinel/monitor-data-connector-health).
-* Monitor [Data connectors health](monitor-data-connector-health.md) ([video](https://www.youtube.com/watch?v=T6Vyo7gZYds&ab_channel=MicrosoftSecurityCommunity)) and [get notifications on anomalies](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/data-connector-health-push-notification-alerts/ba-p/1996442).
-* Monitor agents using the [agents' health solution (Windows only)](../azure-monitor/insights/solution-agenthealth.md) and the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat)(Linux and Windows).
-* Monitor your Log Analytics workspace: [YouTube](https://www.youtube.com/watch?v=DmDU9QP_JlI&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21792&ithint=video%2Cmp4&authkey=%21ALgHojpWDidvFyo), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21794&ithint=file%2Cpdf&authkey=%21AAva%2Do6Ru1fjJ78), including query execution and ingest health.
-* Cost management is also an important operational procedure in the SOC. Use the [Ingestion Cost Alert Playbook](https://techcommunity.microsoft.com/t5/azure-sentinel/ingestion-cost-alert-playbook/ba-p/2006003) to ensure you're aware in time of any cost increase.
+* Monitor your Log Analytics workspace: [YouTube](https://www.youtube.com/watch?v=DmDU9QP_JlI&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21792&ithint=video%2Cmp4&authkey=%21ALgHojpWDidvFyo), or [presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21794&ithint=file%2Cpdf&authkey=%21AAva%2Do6Ru1fjJ78), including query execution and ingestion health.
+
+* Cost management is also an important operational procedure in the SOC. Use the [Ingestion Cost Alert Playbook](https://techcommunity.microsoft.com/t5/azure-sentinel/ingestion-cost-alert-playbook/ba-p/2006003) to ensure that you're always aware of any cost increases.
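As the query sketch promised in the agents item above, the following flags machines whose agent has gone quiet; a minimal sketch, assuming connected agents normally send a `Heartbeat` record about once a minute.

```kusto
// Find agents that haven't reported a heartbeat in the last 15 minutes.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
| order by LastHeartbeat asc
```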
## Part 5: Advanced
-### Module 20: Extending and Integrating using Microsoft Sentinel APIs
+### Module 20: Extending and integrating by using the Microsoft Sentinel APIs
+
+As a cloud-native SIEM, Microsoft Sentinel is an API-first system. Every feature can be configured and used through an API, enabling easy integration with other systems and extending Microsoft Sentinel with your own code. If working with an API sounds intimidating to you, don't worry. Whatever is available by using the API is [also available by using PowerShell](https://techcommunity.microsoft.com/t5/azure-sentinel/new-year-new-official-azure-sentinel-powershell-module/ba-p/2025041).
+
+To learn more about the Microsoft Sentinel APIs, view the [short introductory video](https://www.youtube.com/watch?v=gQDBkc-K-Y4&ab_channel=MicrosoftSecurityCommunity) and read the [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-api-101/ba-p/1438928). For a deeper dive, view the "Extending and integrating Sentinel (APIs)" webinar ([YouTube](https://www.youtube.com/watch?v=Cu4dc88GH1k&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ACZmq6oAe1yVDmY&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21307&parId=66C31D2DBF8E0F71%21305&o=OneUp), or [presentation](https://onedrive.live.com/?authkey=%21AF3TWPEJKZvJ23Q&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21308&parId=66C31D2DBF8E0F71%21305&o=OneUp)), and read the blog post [Extending Microsoft Sentinel: APIs, integration, and management automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/extending-azure-sentinel-apis-integration-and-management/ba-p/1116885).
+
-### Module 21: Bring your own ML
+### Module 21: Build-your-own machine learning
+
-Microsoft Sentinel provides a great platform for implementing your own Machine Learning algorithms. We call it Bring-Your-Own-ML(BYOML for short). BYOML is intended for advanced users. If you're looking for built-in behavioral analytics, use our ML Analytics rules, UEBA module, or write your own behavioral analytics KQL-based analytics rules.
+Microsoft Sentinel provides a great platform for implementing your own machine learning algorithms. We call it the *Build-your-own machine learning model*, or BYO ML. BYO ML is intended for advanced users. If you're looking for built-in behavioral analytics, use our machine learning analytics rules or UEBA module, or write your own behavioral analytics KQL-based analytics rules.
+
-To start with bringing your own ML to Microsoft Sentinel, watch the [video](https://www.youtube.com/watch?v=QDIuvZbmUmc), and read the [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/build-your-own-machine-learning-detections-in-the-ai-immersed/ba-p/1750920). You might also want to refer to the [BYOML documentation](bring-your-own-ml.md).
+To start with bringing your own machine learning to Microsoft Sentinel, view the ["Build-your-own machine learning model"](https://www.youtube.com/watch?v=QDIuvZbmUmc) video, and read the [Build-your-own machine learning model detections in the AI-immersed Azure Sentinel SIEM](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/build-your-own-machine-learning-detections-in-the-ai-immersed/ba-p/1750920) blog post. You might also want to refer to the [BYO ML documentation](bring-your-own-ml.md).
+
+## Next steps
+* [Pre-deployment activities and prerequisites for deploying Microsoft Sentinel](prerequisites.md)
+* [Quickstart: Onboard Microsoft Sentinel](quickstart-onboard.md)
+* [What's new in Microsoft Sentinel](whats-new.md)
+
+## Recommended content
+
+* [Best practices for Microsoft Sentinel](best-practices.md)
+* [Microsoft Sentinel sample workspace designs](sample-workspace-designs.md)
+* [Plan costs and understand Microsoft Sentinel pricing and billing](billing.md#understand-the-full-billing-model-for-microsoft-sentinel)
+* [Roles and permissions in Microsoft Sentinel](roles.md)
+* [Deploy Microsoft Sentinel side-by-side with an existing SIEM](deploy-side-by-side.md)
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 07/26/2022 Last updated : 08/01/2022 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Previously updated : 02/28/2019 Last updated : 07/27/2022
Upgrade the server as follows:
1. [Install](/powershell/azure/install-Az-ps) the Azure PowerShell module 2. Sign in to your Azure account by using the command
- `Connect-AzAccount`
+ `Connect-AzAccount -UseDeviceAuthentication`
3. Select the subscription under which the vault is present
- `Get-AzSubscription ΓÇôSubscriptionName <your subscription name> | Select-AzSubscription`
+ `Get-AzSubscription -SubscriptionName <your subscription name> | Select-AzSubscription`
4. Now set up your vault context ```powershell
- $Vault = Get-AzRecoveryServicesVault -Name <name of your vault>
- Set-AzRecoveryServicesVaultContext -Vault $Vault
+ $vault = Get-AzRecoveryServicesVault -Name <name of your vault>
+ Set-AzRecoveryServicesAsrVaultContext -Vault $vault
``` 5. Select your configuration server
- `$Fabric = Get-AzRecoveryServicesAsrFabric -FriendlyName <name of your configuration server>`
+ `$Fabric = Get-AzRecoveryServicesAsrFabric -FriendlyName <name of your configuration server>`
6. Delete the Configuration Server
- `Remove-AzRecoveryServicesAsrFabric -Fabric $Fabric [-Force]`
+ `Remove-AzRecoveryServicesAsrFabric -Fabric $Fabric -Force`
> [!NOTE] > The **-Force** option of the Remove-AzRecoveryServicesAsrFabric cmdlet forces the removal of the configuration server.
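
Putting the steps together, here's a minimal end-to-end sketch (assuming the Az and Az.RecoveryServices modules are installed; the subscription, vault, and configuration server names are placeholders):

```powershell
# Sketch of the full sequence above. Assumption: the Az and
# Az.RecoveryServices modules are installed; all names are placeholders.
Connect-AzAccount -UseDeviceAuthentication

# Select the subscription that contains the Recovery Services vault.
Get-AzSubscription -SubscriptionName "MySubscription" | Select-AzSubscription

# Set the vault context for subsequent Site Recovery cmdlets.
$vault = Get-AzRecoveryServicesVault -Name "MyVault"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Find the configuration server (fabric) and force its removal.
$fabric = Get-AzRecoveryServicesAsrFabric -FriendlyName "MyConfigurationServer"
Remove-AzRecoveryServicesAsrFabric -Fabric $fabric -Force
```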
spring-apps Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/access-app-virtual-network.md
Title: "Azure Spring Apps access app in virtual network"
description: Access app in Azure Spring Apps in a virtual network. -+ Last updated 11/30/2021
spring-apps Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/breaking-changes.md
Title: Azure Spring Apps API breaking changes
description: Describes the breaking changes introduced by the latest Azure Spring Apps stable API version. -+ Last updated 05/25/2022
spring-apps Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-app-status.md
Title: App status in Azure Spring Apps description: Learn the app status categories in Azure Spring Apps -+ Last updated 03/30/2022
spring-apps Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-manage-monitor-app-spring-boot-actuator.md
Title: "Manage and monitor app with Spring Boot Actuator"
description: Learn how to manage and monitor app with Spring Boot Actuator. -+ Last updated 05/06/2022
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-metrics.md
Title: Metrics for Azure Spring Apps description: Learn how to review metrics in Azure Spring Apps -+ Last updated 09/08/2020
spring-apps Concept Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-security-controls.md
Title: Security controls for Azure Spring Apps Service
description: Use security controls built in into Azure Spring Apps Service. -+ Last updated 04/23/2020
spring-apps Concept Understand App And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-understand-app-and-deployment.md
Title: "App and deployment in Azure Spring Apps"
description: This topic explains the distinction between application and deployment in Azure Spring Apps. -+ Last updated 07/23/2020
spring-apps Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concepts-blue-green-deployment-strategies.md
Title: "Blue-green deployment strategies in Azure Spring Apps"
description: This topic explains two approaches to blue-green deployments in Azure Spring Apps. -+ Last updated 11/12/2021
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
Title: Use Managed identity to connect Azure SQL to Azure Spring Apps app
description: Set up managed identity to connect Azure SQL to an Azure Spring Apps app. -+ Last updated 03/25/2021
spring-apps Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/diagnostic-services.md
Title: Analyze logs and metrics in Azure Spring Apps | Microsoft Docs description: Learn how to analyze diagnostics data in Azure Spring Apps -+ Last updated 01/06/2020
spring-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/disaster-recovery.md
Title: Azure Spring Apps geo-disaster recovery | Microsoft Docs description: Learn how to protect your Spring application from regional outages -+ Last updated 10/24/2019
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-end-to-end-tls.md
description: How to expose applications to the internet using Application Gateway -+ Last updated 02/28/2022
spring-apps Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-tls-termination.md
description: How to expose applications to internet using Application Gateway with TLS termination -+ Last updated 11/09/2021
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Title: Frequently asked questions about Azure Spring Apps | Microsoft Docs description: This article answers frequently asked questions about Azure Spring Apps. -+ Last updated 09/08/2020
spring-apps Github Actions Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/github-actions-key-vault.md
Title: Authenticate Azure Spring Apps with Key Vault in GitHub Actions
description: How to use Azure Key Vault with a CI/CD workflow for Azure Spring Apps with GitHub Actions -+ Last updated 09/08/2020
spring-apps How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-access-data-plane-azure-ad-rbac.md
description: How to access Config Server and Service Registry Endpoints with Azure Active Directory role-based access control. -+ Last updated 08/25/2021
spring-apps How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-appdynamics-java-agent-monitor.md
description: How to use the AppDynamics Java agent to monitor Spring Boot applications in Azure Spring Apps. -+ Last updated 06/07/2022
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
Title: How to use Application Insights Java In-Process Agent in Azure Spring App
description: How to monitor apps using Application Insights Java In-Process Agent in Azure Spring Apps. -+ Last updated 06/20/2022
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Title: Bind an Azure Cosmos DB to your application in Azure Spring Apps description: Learn how to bind Azure Cosmos DB to your application in Azure Spring Apps -+ Last updated 10/06/2019
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
Title: How to bind an Azure Database for MySQL instance to your application in Azure Spring Apps description: Learn how to bind an Azure Database for MySQL instance to your application in Azure Spring Apps -+ Last updated 11/04/2019
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-redis.md
Title: Bind Azure Cache for Redis to your application in Azure Spring Apps description: Learn how to bind Azure Cache for Redis to your application in Azure Spring Apps -+ Last updated 10/31/2019
spring-apps How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-built-in-persistent-storage.md
Title: How to use built-in persistent storage in Azure Spring Apps | Microsoft Docs description: How to use built-in persistent storage in Azure Spring Apps -+ Last updated 10/28/2021
spring-apps How To Capture Dumps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-capture-dumps.md
Title: Capture heap dump and thread dump manually and use Java Flight Recorder i
description: Learn how to manually capture a heap dump, a thread dump, or start Java Flight Recorder. -+ Last updated 01/21/2022
spring-apps How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-cicd.md
Title: Automate application deployments to Azure Spring Apps description: Describes how to use the Azure Spring Apps task for Azure Pipelines. -+ Last updated 09/13/2021
spring-apps How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-circuit-breaker-metrics.md
Title: Collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer
description: How to collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer in Azure Spring Apps. -+ Last updated 12/15/2020
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
Title: Configure your managed Spring Cloud Config Server in Azure Spring Apps description: Learn how to configure a managed Spring Cloud Config Server in Azure Spring Apps on the Azure portal-+
spring-apps How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-health-probes-graceful-termination.md
Title: How to configure health probes and graceful termination period for apps hosted in Azure Spring Apps description: Shows you how to customize apps running in Azure Spring Apps with health probes and graceful termination period. -+ Last updated 07/02/2022
spring-apps How To Configure Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-ingress.md
Title: How to configure ingress for Azure Spring Apps
description: Describes how to configure ingress for Azure Spring Apps. -+ Last updated 05/27/2022
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md
description: How to configure Palo Alto for Azure Spring Apps
-+ Last updated 09/17/2021
spring-apps How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage.md
Title: How to enable your own persistent storage in Azure Spring Apps | Microsoft Docs description: How to bring your own storage as persistent storages in Azure Spring Apps -+ Last updated 2/18/2022
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md
Title: "Deploy Azure Spring Apps in a virtual network"
description: Deploy Azure Spring Apps in a virtual network (VNet injection). -+ Last updated 07/21/2020
spring-apps How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-powershell.md
description: How to create and deploy applications in Azure Spring Apps by using
-+ ms.devlang: azurepowershell Last updated 2/15/2022
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
description: How to deploy applications in Azure Spring Apps with a custom conta
-+ Last updated 4/28/2022
spring-apps How To Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-distributed-tracing.md
Title: "Use Distributed Tracing with Azure Spring Apps" description: Learn how to use Azure Spring Apps distributed tracing through Azure Application Insights -+ Last updated 10/06/2019
spring-apps How To Dump Jvm Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dump-jvm-options.md
Title: Use the diagnostic settings of JVM options for advanced troubleshooting i
description: Describes several best practices with JVM configuration to set heap dump, JFR, and GC logs. -+ Last updated 01/21/2022
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
Title: "How to monitor Spring Boot apps with Dynatrace Java OneAgent"
description: How to use Dynatrace Java OneAgent to monitor Spring Boot applications in Azure Spring Apps -+ Last updated 06/07/2022
spring-apps How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-elastic-apm-java-agent-monitor.md
Title: How to monitor Spring Boot apps with Elastic APM Java Agent
description: How to use Elastic APM Java Agent to monitor Spring Boot applications running in Azure Spring Apps -+ Last updated 06/07/2022
spring-apps How To Elastic Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-elastic-diagnostic-settings.md
Title: Analyze logs with Elastic Cloud from Azure Spring Apps description: Learn how to analyze diagnostics logs in Azure Spring Apps using Elastic -+ Last updated 12/07/2021
spring-apps How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-availability-zone.md
description: How to create an Azure Spring Apps instance with availability zone enabled. -+ Last updated 04/14/2022
spring-apps How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-ingress-to-app-tls.md
description: How to enable ingress-to-app Transport Layer Security for an application. -+ Last updated 04/12/2022
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
description: How to enable system-assigned managed identity for applications. -+ Last updated 04/15/2022
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
description: How to use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier. -+ Last updated 02/09/2022
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
description: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier -+ Last updated 02/09/2022
spring-apps How To Enterprise Deploy Non Java Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-non-java-apps.md
description: How to Deploy Non-Java Applications in Azure Spring Apps Enterprise Tier -+ Last updated 02/09/2022
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
Title: How to view the Azure Spring Apps Enterprise Tier offering from Azure Mar
description: How to view the Azure Spring Apps Enterprise Tier offering from Azure Marketplace. -+ Last updated 02/09/2022
spring-apps How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-service-registry.md
Title: How to Use Tanzu Service Registry with Azure Spring Apps Enterprise tier
description: How to use Tanzu Service Registry with Azure Spring Apps Enterprise tier. -+ Last updated 06/17/2022
spring-apps How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-github-actions.md
Title: Use Azure Spring Apps CI/CD with GitHub Actions
description: How to build up a CI/CD workflow for Azure Spring Apps with GitHub Actions -+ Last updated 09/08/2020
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-integrate-azure-load-balancers.md
Title: Tutorial - Integrate Azure Spring Apps with Azure Load Balance Solutions
description: How to integrate Azure Spring Apps with Azure Load Balance Solutions -+ Last updated 04/20/2020
spring-apps How To Intellij Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-intellij-deploy-apps.md
Title: "Tutorial: Deploy Spring Boot applications using IntelliJ"
description: Use IntelliJ to deploy applications to Azure Spring Apps. -+ Last updated 06/24/2022
spring-apps How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-launch-from-source.md
Title: How to Deploy Spring Boot applications from Azure CLI description: In this quickstart, learn how to launch your application in Azure Spring Apps directly from your source code -+ Last updated 11/12/2021
spring-apps How To Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-log-streaming.md
Title: Stream Azure Spring Apps app logs in real-time
description: How to use log streaming to view application logs instantly -+ Last updated 01/14/2019
spring-apps How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities for an application in Azure Sprin
description: How to manage user-assigned managed identities for applications. -+ Last updated 03/31/2022
spring-apps How To Maven Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-maven-deploy-apps.md
description: Use Maven to deploy applications to Azure Spring Apps. -+ Last updated 04/07/2022
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
description: How to migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier -+ Last updated 05/09/2022
spring-apps How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-move-across-regions.md
Title: How to move an Azure Spring Apps service instance to another region
description: Describes how to move an Azure Spring Apps service instance to another region -+ Last updated 01/27/2022
spring-apps How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md
description: Learn how to monitor Spring Boot applications using the New Relic Java agent. -+ Last updated 06/08/2021
spring-apps How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-outbound-public-ip.md
Title: How - to identify outbound public IP addresses in Azure Spring Apps
description: How to view the static outbound public IP addresses to communicate with external resources, such as Database, Storage, Key Vault, etc. -+ Last updated 09/17/2020
spring-apps How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-permissions.md
Title: "Use permissions in Azure Spring Apps"
description: This article shows you how to create custom roles that delegate permissions to Azure Spring Apps resources. -+ Last updated 09/04/2020
spring-apps How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-prepare-app-deployment.md
Title: How to prepare an application for deployment in Azure Spring Apps description: Learn how to prepare an application for deployment to Azure Spring Apps. -+ Last updated 07/06/2021
spring-apps How To Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-scale-manual.md
Title: "Scale an application in Azure Spring Apps | Microsoft Docs" description: Learn how to scale an application with Azure Spring Apps in the Azure portal-+
spring-apps How To Self Diagnose Running In Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-self-diagnose-running-in-vnet.md
Title: "How to self-diagnose Azure Spring Apps VNET"
description: Learn how to self-diagnose and solve problems in Azure Spring Apps running in VNET. -+ Last updated 01/25/2021
spring-apps How To Self Diagnose Solve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-self-diagnose-solve.md
Title: "How to self-diagnose and solve problems in Azure Spring Apps"
description: Learn how to self-diagnose and solve problems in Azure Spring Apps. -+ Last updated 05/29/2020
spring-apps How To Service Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-service-registration.md
Title: Discover and register your Spring Boot applications in Azure Spring Apps
description: Discover and register your Spring Boot applications with managed Spring Cloud Service Registry (OSS) in Azure Spring Apps -+ Last updated 05/09/2022
spring-apps How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-set-up-sso-with-azure-ad.md
description: How to set up single sign-on with Azure Active Directory for Spring Cloud Gateway and API Portal for Tanzu with Azure Spring Apps Enterprise Tier. -+ Last updated 05/20/2022
spring-apps How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-setup-autoscale.md
Title: "Set up autoscale for applications"
description: This article describes how to set up Autoscale settings for your applications using the Microsoft Azure portal or the Azure CLI. -+ Last updated 11/03/2021
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-staging-environment.md
Title: Set up a staging environment in Azure Spring Apps | Microsoft Docs description: Learn how to use blue-green deployment with Azure Spring Apps -+ Last updated 01/14/2021
spring-apps How To Start Stop Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-delete.md
Title: Start, stop, and delete an application in Azure Spring Apps | Microsoft Docs description: How to start, stop, and delete an application in Azure Spring Apps -+ Last updated 10/31/2019
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md
Title: How to start or stop an Azure Spring Apps service instance
description: Describes how to start or stop an Azure Spring Apps service instance -+ Last updated 11/04/2021
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
description: How to use API portal for VMware Tanzu with Azure Spring Apps Enterprise Tier. -+ Last updated 02/09/2022
spring-apps How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md
description: How to use Spring Cloud Gateway for Tanzu with Azure Spring Apps Enterprise Tier. -+ Last updated 02/09/2022
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
description: Home page for managed identities for applications. -+ Last updated 04/15/2022
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
description: Use TLS/SSL certificates in an application. -+ Last updated 10/08/2021
spring-apps How To Write Log To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-write-log-to-custom-persistent-storage.md
Title: How to use Logback to write logs to custom persistent storage in Azure Sp
description: How to use Logback to write logs to custom persistent storage in Azure Spring Apps. -+ Last updated 11/17/2021
spring-apps Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/monitor-app-lifecycle-events.md
Title: Monitor app lifecycle events using Azure Activity log and Azure Service
description: Monitor app lifecycle events and set up alerts with Azure Activity log and Azure Service Health. -+ Last updated 08/19/2021
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
Title: Introduction to Azure Spring Apps description: Learn the features and benefits of Azure Spring Apps to deploy and manage Java Spring applications in Azure. -+ Last updated 03/09/2021
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Last updated 07/26/2022
-+
spring-apps Principles Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/principles-microservice-apps.md
Title: Java and base OS for Azure Spring Apps apps
description: Principles for maintaining healthy Java and base operating system for Azure Spring Apps apps -+ Last updated 10/12/2021
spring-apps Quickstart Automate Deployments Github Actions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-automate-deployments-github-actions-enterprise.md
description: Explains how to automate deployments to Azure Spring Apps Enterprise tier by using GitHub Actions and Terraform. -+ Last updated 05/31/2022
spring-apps Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md
Title: "Quickstart - Configure single sign-on for applications using Azure Sprin
description: Describes single sign-on configuration for Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps-enterprise.md
Title: "Quickstart - Build and deploy apps to Azure Spring Apps Enterprise tier"
description: Describes app deployment to Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps.md
Title: "Quickstart - Build and deploy apps to Azure Spring Apps"
description: Describes app deployment to Azure Spring Apps. -+ Last updated 11/15/2021
spring-apps Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md
Title: Quickstart - Provision Azure Spring Apps using Azure CLI
description: This quickstart shows you how to use Azure CLI to deploy an Azure Spring Apps cluster into an existing virtual network. -+
spring-apps Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md
Title: Quickstart - Provision Azure Spring Apps using Bicep description: This quickstart shows you how to use Bicep to deploy an Azure Spring Apps cluster into an existing virtual network. -+
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
Title: Quickstart - Provision Azure Spring Apps using Terraform description: This quickstart shows you how to use Terraform to deploy an Azure Spring Apps cluster into an existing virtual network. -+
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md
Title: Quickstart - Provision Azure Spring Apps using an Azure Resource Manager
description: This quickstart shows you how to use an ARM template to deploy an Azure Spring Apps cluster into an existing virtual network. -+
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md
description: Explains how to provision and prepare an Azure Database for PostgreSQL and an Azure Cache for Redis to be used with apps running Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Title: "Quickstart - Integrate with Azure Database for MySQL"
description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command. -+ Last updated 10/15/2021
spring-apps Quickstart Key Vault Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-key-vault-enterprise.md
description: Explains how to use Azure Key Vault to securely load secrets for apps running Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-logs-metrics-tracing.md
Title: "Quickstart - Monitoring Azure Spring Apps apps with logs, metrics, and t
description: Use log streaming, log analytics, metrics, and tracing to monitor PetClinic sample apps on Azure Spring Apps. -+ Last updated 10/12/2021
spring-apps Quickstart Monitor End To End Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-monitor-end-to-end-enterprise.md
description: Explains how to monitor apps running Azure Spring Apps Enterprise tier by using Application Insights and Log Analytics. -+ Last updated 05/31/2022
spring-apps Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-service-instance.md
Title: "Quickstart - Provision an Azure Spring Apps service"
description: Describes creation of an Azure Spring Apps service instance for app deployment. -+ Last updated 7/28/2022
spring-apps Quickstart Sample App Acme Fitness Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-acme-fitness-store-introduction.md
description: Describes the sample app used in this series of quickstarts for deployment to Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md
Title: "Quickstart - Introduction to the sample app - Azure Spring Apps"
description: Describes the sample app used in this series of quickstarts for deployment to Azure Spring Apps. -+ Last updated 10/12/2021
spring-apps Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md
description: Explains how to set request rate limits by using Spring Cloud Gateway on Azure Spring Apps Enterprise tier. -+ Last updated 05/31/2022
spring-apps Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-setup-config-server.md
Title: "Quickstart - Set up Azure Spring Apps Config Server"
description: Describes the setup of Azure Spring Apps Config Server for app deployment. -+ Last updated 7/19/2022
spring-apps Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-setup-log-analytics.md
Title: "Quickstart - Set up a Log Analytics workspace in Azure Spring Apps"
description: This article describes the setup of a Log Analytics workspace for app deployment. -+ Last updated 12/09/2021
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
Title: "Quickstart - Deploy your first application to Azure Spring Apps" description: In this quickstart, we deploy an application to Azure Spring Apps. -+ Last updated 10/18/2021
spring-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quotas.md
Title: Service plans and quotas for Azure Spring Apps description: Learn about service quotas and service plans for Azure Spring Apps -+ Last updated 11/04/2019
spring-apps Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md
Title: Azure Spring Apps reference architecture -+ description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
spring-apps Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/resources.md
Title: Resources for Azure Spring Apps | Microsoft Docs description: Azure Spring Apps resource list -+ Last updated 09/08/2020
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022 -+ # Azure Policy Regulatory Compliance controls for Azure Spring Apps
spring-apps Structured App Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/structured-app-log.md
Title: Structured application log for Azure Spring Apps | Microsoft Docs description: This article explains how to generate and collect structured application log data in Azure Spring Apps. -+ Last updated 02/05/2021
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
Title: Troubleshooting guide for Azure Spring Apps | Microsoft Docs description: Troubleshooting guide for Azure Spring Apps -+ Last updated 09/08/2020
spring-apps Troubleshooting Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshooting-vnet.md
Title: Troubleshooting Azure Spring Apps in virtual network description: Troubleshooting guide for Azure Spring Apps virtual network. -+ Last updated 09/19/2020
spring-apps Tutorial Alerts Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-alerts-action-groups.md
Title: "Tutorial: Monitor Azure Spring Apps resources using alerts and action gr
description: Learn how to use Spring app alerts. -+ Last updated 12/29/2019
spring-apps Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-circuit-breaker.md
Title: "Tutorial - Use Circuit Breaker Dashboard with Azure Spring Apps"
description: Learn how to use circuit Breaker Dashboard with Azure Spring Apps. -+ Last updated 04/06/2020
spring-apps Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-custom-domain.md
Title: "Tutorial: Map an existing custom domain to Azure Spring Apps" description: How to map an existing custom Distributed Name Service (DNS) name to Azure Spring Apps -+ Last updated 03/19/2020
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
Title: "Tutorial: Managed identity to invoke Azure Functions"
description: Use managed identity to invoke Azure Functions from an Azure Spring Apps app -+ Last updated 07/10/2020
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
Title: "Tutorial: Managed identity to connect Key Vault"
description: Set up managed identity to connect Key Vault to an Azure Spring Apps app -+ Last updated 04/15/2022
spring-apps Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-mysql.md
Title: "Tutorial: Managed identity to connect an Azure Database for MySQL to ap
description: Set up managed identity to connect an Azure Database for MySQL to apps in Azure Spring Apps -+ Last updated 03/30/2022
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
Title: "Customer responsibilities running Azure Spring Apps in vnet"
description: This article describes customer responsibilities running Azure Spring Apps in vnet. -+ Last updated 11/02/2021
storage Blobfuse2 Commands Completion Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-bash.md
+
+ Title: How to use the completion bash command to generate the autocompletion script for BlobFuse2 | Microsoft Docs
+description: Learn how to use the completion bash command to generate the autocompletion script for BlobFuse2.
++++ Last updated : 07/27/2022++++
+# BlobFuse2 completion bash command
+
+Use the `blobfuse2 completion bash` command to generate the autocompletion script for BlobFuse2 for the bash shell.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 completion bash --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 completion bash` are inherited from the grandparent command, `blobfuse2`, or apply only to the [`blobfuse2 completion`](blobfuse2-commands-completion.md) subcommands.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply to the BlobFuse2 completion subcommands
+
+The following flags apply only to the `blobfuse2 completion` subcommands:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| no-descriptions | boolean | false | Disable completion descriptions |
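+
+For example, to generate the script without the inline completion descriptions, a representative invocation of the syntax above is:
+
+`blobfuse2 completion bash --no-descriptions=true`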
+
+## Usage
+
+The generated script depends on the `bash-completion` package. If it isn't already installed, you can install it through your OS's package manager.
+
+To load completions in your current shell session:
+
+```bash
+source <(blobfuse2 completion bash)
+```
+
+To load completions for every new session, execute once:
+
+- On Linux:
+
+ ```bash
+ blobfuse2 completion bash > /etc/bash_completion.d/blobfuse2
+ ```
+
+- On macOS:
+
+ ```bash
+ blobfuse2 completion bash > /usr/local/etc/bash_completion.d/blobfuse2
+ ```
+
+> [!NOTE]
+> You will need to start a new shell for this setup to take effect.
+
+## See also
+
+- [The Blobfuse2 completion command](blobfuse2-commands-completion.md)
+- [The Blobfuse2 completion fish command](blobfuse2-commands-completion-fish.md)
+- [The Blobfuse2 completion PowerShell command](blobfuse2-commands-completion-powershell.md)
+- [The Blobfuse2 completion zsh command](blobfuse2-commands-completion-zsh.md)
storage Blobfuse2 Commands Completion Fish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-fish.md
+
+ Title: How to use the completion fish command to generate the autocompletion script for BlobFuse2 | Microsoft Docs
+description: Learn how to use the completion fish command to generate the autocompletion script for BlobFuse2.
++++ Last updated : 07/27/2022++++
+# BlobFuse2 completion fish command
+
+Use the `blobfuse2 completion fish` command to generate the autocompletion script for BlobFuse2 for the fish shell.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 completion fish --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 completion fish` are inherited from the grandparent command, `blobfuse2`, or apply only to the [`blobfuse2 completion`](blobfuse2-commands-completion.md) subcommands.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply to the BlobFuse2 completion subcommands
+
+The following flags apply only to the `blobfuse2 completion` subcommands:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| no-descriptions | boolean | false | Disable completion descriptions |
+
+## Usage
+
+To load completions in your current shell session:
+
+`blobfuse2 completion fish | source`
+
+To load completions for every new session, execute once:
+
+`blobfuse2 completion fish > ~/.config/fish/completions/blobfuse2.fish`
+
+> [!NOTE]
+> You will need to start a new shell for this setup to take effect.
+
+## See also
+
+- [The Blobfuse2 completion command](blobfuse2-commands-completion.md)
+- [The Blobfuse2 completion bash command](blobfuse2-commands-completion-bash.md)
+- [The Blobfuse2 completion PowerShell command](blobfuse2-commands-completion-powershell.md)
+- [The Blobfuse2 completion zsh command](blobfuse2-commands-completion-zsh.md)
storage Blobfuse2 Commands Completion Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-powershell.md
+
+ Title: How to use the "completion powershell" command to generate the autocompletion script for BlobFuse2 | Microsoft Docs
+description: Learn how to use the "completion powershell" command to generate the autocompletion script for BlobFuse2.
++++ Last updated : 07/27/2022++++
+# BlobFuse2 completion powershell command
+
+Use the `blobfuse2 completion powershell` command to generate the autocompletion script for BlobFuse2 for PowerShell.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 completion powershell --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 completion powershell` are inherited from the grandparent command, `blobfuse2`, or apply only to the [`blobfuse2 completion`](blobfuse2-commands-completion.md) subcommands.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply to the BlobFuse2 completion subcommands
+
+The following flags apply only to the `blobfuse2 completion` subcommands:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| no-descriptions | boolean | false | Disable completion descriptions |
+
+## Usage
+
+To load completions in your current PowerShell session:
+
+```powershell
+blobfuse2 completion powershell | Out-String | Invoke-Expression
+```
+
+To load completions for every new session, add the output of the above command
+to your PowerShell profile.
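+
+For example, a minimal sketch of one way to do that (assuming your profile file already exists; create it first with `New-Item -Path $PROFILE -Force` if it doesn't):
+
+```powershell
+# Append the generated completion script to the current user's profile.
+# Assumption: the file referenced by $PROFILE already exists.
+blobfuse2 completion powershell | Out-String | Add-Content -Path $PROFILE
+```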
+
+> [!NOTE]
+> You will need to start a new shell for this setup to take effect.
+
+## See also
+
+- [The Blobfuse2 completion command](blobfuse2-commands-completion.md)
+- [The Blobfuse2 completion bash command](blobfuse2-commands-completion-bash.md)
+- [The Blobfuse2 completion fish command](blobfuse2-commands-completion-fish.md)
+- [The Blobfuse2 completion zsh command](blobfuse2-commands-completion-zsh.md)
storage Blobfuse2 Commands Completion Zsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-zsh.md
+
+ Title: How to use the completion zsh command to generate the autocompletion script for BlobFuse2 | Microsoft Docs
+description: Learn how to use the completion zsh command to generate the autocompletion script for BlobFuse2.
++++ Last updated : 07/27/2022++++
+# BlobFuse2 completion zsh command
+
+Use the `blobfuse2 completion zsh` command to generate the autocompletion script for BlobFuse2 for the zsh shell.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 completion zsh --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 completion zsh` are inherited from the grandparent command, `blobfuse2`, or apply only to the [`blobfuse2 completion`](blobfuse2-commands-completion.md) subcommands.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply to the BlobFuse2 completion subcommands
+
+The following flags apply only to the `blobfuse2 completion` subcommands:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| no-descriptions | boolean | false | Disable completion descriptions |
+
+## Usage
+
+If shell completion is not already enabled in your environment, you will need
+to enable it. To do so, run the following command once:
+
+`echo "autoload -U compinit; compinit" >> ~/.zshrc`
+
+To load completions in your current shell session:
+
+`source <(blobfuse2 completion zsh); compdef _blobfuse2 blobfuse2`
+
+To load completions for every new session, execute once:
+
+- On Linux:
+
+ `blobfuse2 completion zsh > "${fpath[1]}/_blobfuse2"`
+
+- On macOS:
+
+ `blobfuse2 completion zsh > /usr/local/share/zsh/site-functions/_blobfuse2`
+
+> [!NOTE]
+> You will need to start a new shell for this setup to take effect.
+
+## See also
+
+- [The Blobfuse2 completion command](blobfuse2-commands-completion.md)
+- [The Blobfuse2 completion bash command](blobfuse2-commands-completion-bash.md)
+- [The Blobfuse2 completion fish command](blobfuse2-commands-completion-fish.md)
+- [The Blobfuse2 completion PowerShell command](blobfuse2-commands-completion-powershell.md)
storage Blobfuse2 Commands Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion.md
+
+ Title: How to use the completion command to generate the autocompletion script for BlobFuse2 | Microsoft Docs
+description: Learn how to use the completion command to generate the autocompletion script for BlobFuse2.
++++ Last updated : 07/27/2022++++
+# BlobFuse2 completion command
+
+Use the `blobfuse2 completion` command to generate the autocompletion script for BlobFuse2 for a specified shell.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 completion [command] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[command]`
+
+The supported subcommands for `blobfuse2 completion` are:
+
+| Command | Description |
+|--|--|
+| [bash](blobfuse2-commands-completion-bash.md) | Generate the autocompletion script for bash |
+| [fish](blobfuse2-commands-completion-fish.md) | Generate the autocompletion script for fish |
+| [powershell](blobfuse2-commands-completion-powershell.md) | Generate the autocompletion script for PowerShell |
+| [zsh](blobfuse2-commands-completion-zsh.md) | Generate the autocompletion script for zsh |
+
+Select one of the command links in the table above to view the documentation for the individual subcommands, including how to use the generated script.
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 completion` are inherited from the grandparent command, `blobfuse2`, or apply only to the `blobfuse2 completion` subcommands.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
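+
+For example, a representative invocation of the syntax above that generates the zsh script while skipping the automatic version check is:
+
+`blobfuse2 completion zsh --disable-version-check=true`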
+
+## See also
+
+- [The Blobfuse2 completion bash command](blobfuse2-commands-completion-bash.md)
+- [The Blobfuse2 completion fish command](blobfuse2-commands-completion-fish.md)
+- [The Blobfuse2 completion PowerShell command](blobfuse2-commands-completion-powershell.md)
+- [The Blobfuse2 completion zsh command](blobfuse2-commands-completion-zsh.md)
storage Blobfuse2 Commands Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-help.md
+
+ Title: How to use BlobFuse2 help to get help info for the BlobFuse2 command and subcommands | Microsoft Docs
+
+description: Learn how to use BlobFuse2 help to get help info for the BlobFuse2 command and subcommands.
++++ Last updated : 08/01/2022++++
+# BlobFuse2 help command
+
+Use the `blobfuse2 help` command to get help info for the BlobFuse2 command and subcommands.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+The `blobfuse2 help` command has two formats:
+
+`blobfuse2 help --[flag-name]=[flag-value]`
+
+`blobfuse2 help [command] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[command]`
+
+You can get help information for any of the specific BlobFuse2 commands. The supported `blobfuse2` commands are:
+
+| Command | Description |
+|--|--|
+| [mount](blobfuse2-commands-mount.md) | Mounts blob storage containers and displays existing mount points |
+| [unmount](blobfuse2-commands-unmount.md) | Unmounts previously mounted blob storage containers |
+| [mountv1](blobfuse2-commands-mountv1.md) | Generates a configuration file for BlobFuse2 from a BlobFuse v1 configuration file |
+| completion | Generates the autocompletion script for BlobFuse2 for a specified shell |
+| secure | Encrypts, decrypts, or accesses settings in a BlobFuse2 configuration file |
+| [version](blobfuse2-commands-version.md) | Displays the current version of BlobFuse2, and optionally checks for the latest version |
+
+Select one of the command links in the table above to view the documentation for the individual commands, including the arguments and flags they support.
+
+## Flags (options)
+
+The flags available for `blobfuse2 help` are inherited from the parent command, [`blobfuse2`](blobfuse2-commands.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from the parent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+
+## Examples
+
+Get general BlobFuse2 help:
+
+`blobfuse2 help`
+
+Get help for the `blobfuse2 mount` command:
+
+`blobfuse2 help mount`
+
+Get help for the `blobfuse2 secure encrypt` subcommand:
+
+`blobfuse2 help secure encrypt`
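+
+Because the `help` flag is inherited by every BlobFuse2 command, passing `--help` (or `-h`) to a command should produce the same information. A sketch equivalent to `blobfuse2 help mount`:
+
+`blobfuse2 mount --help`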
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
+
+ Title: How to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system
+
+Use the `blobfuse2 mount all` command to mount all blob containers in a storage account as a Linux file system. Each container will be mounted to a unique subdirectory under the path specified. The subdirectory names will correspond to the container names.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 mount all [path] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[path]`
+
+Specify a file path to the directory where all of the blob storage containers in the storage account will be mounted. Example:
+
+```bash
+blobfuse2 mount all ./mount_path ...
+```
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 mount all` are inherited from the parent commands, [`blobfuse2`](blobfuse2-commands.md) and [`blobfuse2 mount`](blobfuse2-commands-mount.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 mount command
+
+The following flags are inherited from parent command [`blobfuse2 mount`](blobfuse2-commands-mount.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| allow-other | boolean | false | Allow other users to access this mount point |
+| attr-cache-timeout | uint32 | 120 | Attribute cache timeout<br /><sub>(in seconds)</sub> |
+| attr-timeout | uint32 | | Attribute timeout <br /><sub>(in seconds)</sub> |
+| config-file | string | ./config.yaml | The path to the file where the account credentials are provided. The default is config.yaml in the current directory. |
+| container-name | string | | The name of the container to be mounted |
+| entry-timeout | uint32 | | Entry timeout <br /><sub>(in seconds)</sub> |
+| file-cache-timeout | uint32 | 120 | File cache timeout <br /><sub>(in seconds)</sub>|
+| foreground | boolean | false | Whether the file system is mounted in foreground mode |
+| log-file-path | string | $HOME/.blobfuse2/blobfuse2.log | The path for log files|
+| log-level | LOG_OFF<br />LOG_CRIT<br />LOG_ERR<br />LOG_WARNING<br />LOG_INFO<br />LOG_DEBUG | LOG_WARNING | The level of logging written to `--log-file-path`. |
+| negative-timeout | uint32 | | The negative entry timeout<br /><sub>(in seconds)</sub> |
+| no-symlinks | boolean | false | Whether or not symlinks should be supported |
+| passphrase | string | | The key to decrypt the config file.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+| read-only | boolean | false | Mount the system in read-only mode |
+| secure-config | boolean | false | Encrypt the auto-generated config file for each container |
+| tmp-path | string | n/a | Configures the tmp location for the cache.<br />(Use the fastest disk (SSD or ramdisk) for best performance.) |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+Mount all blob storage containers in the storage account specified in the configuration file to the path specified in the command. (Each container will be a subdirectory under the directory specified):
+
+```bash
+~$ mkdir bf2all
+~$ blobfuse2 mount all ./bf2all --config-file=./config.yaml
+Mounting container : blobfuse2a to path : bf2all/blobfuse2a
+Mounting container : blobfuse2b to path : bf2all/blobfuse2b
+
+~$ blobfuse2 mount list
+1 : /home/<user>/bf2all/blobfuse2a
+2 : /home/<user>/bf2all/blobfuse2b
+```
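+
+As a further sketch, assuming the same configuration file, you can combine the inherited mount flags shown above, for example to mount every container read-only with an encrypted auto-generated configuration file per container:
+
+```bash
+blobfuse2 mount all ./bf2all --config-file=./config.yaml --read-only=true --secure-config=true --passphrase=PASSPHRASE
+```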
+
+## See also
+
+- [The Blobfuse2 unmount all command](blobfuse2-commands-unmount-all.md)
+- [The Blobfuse2 mount command](blobfuse2-commands-mount.md)
+- [The Blobfuse2 unmount command](blobfuse2-commands-unmount.md)
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
+
+ Title: How to use the BlobFuse2 mount list command to display all BlobFuse2 mount points | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 mount list command to display all BlobFuse2 mount points.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 mount list command to display all BlobFuse2 mount points
+
+Use the `blobfuse2 mount list` command to display all existing BlobFuse2 mount points.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 mount list --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 mount list` are inherited from the parent commands, [`blobfuse2`](blobfuse2-commands.md) and [`blobfuse2 mount`](blobfuse2-commands-mount.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 mount command
+
+The following flags are inherited from parent command [`blobfuse2 mount`](blobfuse2-commands-mount.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| allow-other | boolean | false | Allow other users to access this mount point |
+| attr-cache-timeout | uint32 | 120 | Attribute cache timeout<br /><sub>(in seconds)</sub> |
+| attr-timeout | uint32 | | Attribute timeout <br /><sub>(in seconds)</sub> |
+| config-file | string | ./config.yaml | The path to the file where the account credentials are provided. The default is config.yaml in the current directory. |
+| container-name | string | | The name of the container to be mounted |
+| entry-timeout | uint32 | | Entry timeout <br /><sub>(in seconds)</sub> |
+| file-cache-timeout | uint32 | 120 | File cache timeout <br /><sub>(in seconds)</sub>|
+| foreground | boolean | false | Whether the file system is mounted in foreground mode |
+| log-file-path | string | $HOME/.blobfuse2/blobfuse2.log | The path for log files|
+| log-level | LOG_OFF<br />LOG_CRIT<br />LOG_ERR<br />LOG_WARNING<br />LOG_INFO<br />LOG_DEBUG | LOG_WARNING | The level of logging written to `--log-file-path`. |
+| negative-timeout | uint32 | | The negative entry timeout<br /><sub>(in seconds)</sub> |
+| no-symlinks | boolean | false | Whether or not symlinks should be supported |
+| passphrase | string | | The key to decrypt the config file.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+| read-only | boolean | false | Mount the system in read-only mode |
+| secure-config | boolean | false | Encrypt the auto-generated config file for each container |
+| tmp-path | string | n/a | Configures the tmp location for the cache.<br />(Use the fastest disk (SSD or ramdisk) for best performance.) |
+
+## Examples
+
+Display all current BlobFuse2 mount points:
+
+```bash
+~$ blobfuse2 mount list
+1 : /home/<user>/bf2a
+2 : /home/<user>/bf2b
+```
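+
+Because each mount point is printed on its own numbered line, the output lends itself to simple scripting. A sketch that counts the current BlobFuse2 mount points, assuming the numbered output format shown above:
+
+```bash
+blobfuse2 mount list | wc -l
+```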
+
+## See also
+
+- [The `Blobfuse2 unmount` command](blobfuse2-commands-unmount.md)
+- [The `Blobfuse2 mount` command](blobfuse2-commands-mount.md)
+- [The `Blobfuse2 mount all` command](blobfuse2-commands-mount-all.md)
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
+
+ Title: How to use the BlobFuse2 mount command to mount a blob storage container as a file system in Linux, or to display and manage existing mount points. | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 mount command to mount a blob storage container as a file system in Linux, or to display and manage existing mount points.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 mount command
+
+Use the `blobfuse2 mount` command to mount a blob storage container as a file system in Linux, or to display existing mount points.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+The `blobfuse2 mount` command has two formats:
+
+`blobfuse2 mount [path] --[flag-name]=[flag-value]`
+
+`blobfuse2 mount [command] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[path]`
+
+Specify a file path to the directory where the storage container will be mounted. Example:
+
+```bash
+blobfuse2 mount ./mount_path ...
+```
+
+`[command]`
+
+The supported subcommands for `blobfuse2 mount` are:
+
+| Command | Description |
+|--|--|
+| [all](blobfuse2-commands-mount-all.md) | Mounts all Azure blob containers in a specified storage account |
+| [list](blobfuse2-commands-mount-list.md) | Lists all BlobFuse2 mount points |
+
+Select one of the command links in the table above to view the documentation for the individual subcommands, including the arguments and flags they support.
+
+## Flags (options)
+
+Some flags are inherited from the parent command, [`blobfuse2`](blobfuse2-commands.md), and others only apply to the `blobfuse2 mount` command.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from the parent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply only to the BlobFuse2 mount command
+
+The following flags apply only to the `blobfuse2 mount` command:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| allow-other | boolean | false | Allow other users to access this mount point |
+| attr-cache-timeout | uint32 | 120 | Attribute cache timeout<br /><sub>(in seconds)</sub> |
+| attr-timeout | uint32 | | Attribute timeout <br /><sub>(in seconds)</sub> |
+| config-file | string | ./config.yaml | The path to the file where the account credentials are provided. The default is config.yaml in the current directory. |
+| container-name | string | | The name of the container to be mounted |
+| entry-timeout | uint32 | | Entry timeout <br /><sub>(in seconds)</sub> |
+| file-cache-timeout | uint32 | 120 | File cache timeout <br /><sub>(in seconds)</sub>|
+| foreground | boolean | false | Whether the file system is mounted in foreground mode |
+| log-file-path | string | $HOME/.blobfuse2/blobfuse2.log | The path for log files|
+| log-level | LOG_OFF<br />LOG_CRIT<br />LOG_ERR<br />LOG_WARNING<br />LOG_INFO<br />LOG_DEBUG | LOG_WARNING | The level of logging written to `--log-file-path`. |
+| negative-timeout | uint32 | | The negative entry timeout<br /><sub>(in seconds)</sub> |
+| no-symlinks | boolean | false | Whether or not symlinks should be supported |
+| passphrase | string | | The key to decrypt the config file.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+| read-only | boolean | false | Mount the system in read-only mode |
+| secure-config | boolean | false | Encrypt the auto-generated config file for each container |
+| tmp-path | string | n/a | Configures the tmp location for the cache.<br />(Use the fastest disk (SSD or ramdisk) for best performance.) |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+1. Mount an individual Azure blob storage container to a new directory using the settings from a configuration file, and with foreground mode disabled:
+
+ ```bash
+ ~$ mkdir bf2a
+ ~$ blobfuse2 mount ./bf2a --config-file=./config.yaml --foreground=false
+
+ ~$ blobfuse2 mount list
+ 1 : /home/<user>/bf2a
+ ```
+
+1. Mount all blob storage containers in the storage account specified in the configuration file to the path specified in the command. (Each container will be a subdirectory under the directory specified):
+
+ ```bash
+ ~$ mkdir bf2all
+ ~$ blobfuse2 mount all ./bf2all --config-file=./config.yaml
+ Mounting container : blobfuse2a to path : bf2all/blobfuse2a
+ Mounting container : blobfuse2b to path : bf2all/blobfuse2b
+
+ ~$ blobfuse2 mount list
+ 1 : /home/<user>/bf2all/blobfuse2a
+ 2 : /home/<user>/bf2all/blobfuse2b
+ ```
+
+1. Mount a fast storage device, then mount a blob storage container specifying the path to the mounted disk as the BlobFuse2 file caching location:
+
+ ```bash
+ ~$ sudo mkdir /mnt/resource/blobfuse2tmp -p
+ ~$ sudo chown <youruser> /mnt/resource/blobfuse2tmp
+ ~$ mkdir bf2a
+ ~$ blobfuse2 mount ./bf2a --config-file=./config.yaml --tmp-path=/mnt/resource/blobfuse2tmp
+
+ ~$ blobfuse2 mount list
+    1 : /home/<user>/bf2a
+ ```
+
+1. Mount a blob storage container in read-only mode and skip the automatic BlobFuse2 version check:
+
+ ```bash
+ blobfuse2 mount ./mount_dir --config-file=./config.yaml --read-only --disable-version-check=true
+ ```
+
+1. Mount a blob storage container using an existing configuration file, but override the container name (mounting another container in the same storage account):
+
+ ```bash
+ blobfuse2 mount ./mount_dir2 --config-file=./config.yaml --container-name=container2
+ ```
+
+## See also
+
+- [The Blobfuse2 mount all command](blobfuse2-commands-mount-all.md)
+- [The Blobfuse2 mount list command](blobfuse2-commands-mount-list.md)
+- [The Blobfuse2 unmount command](blobfuse2-commands-unmount.md)
+- [The Blobfuse2 mountv1 command](blobfuse2-commands-mountv1.md)
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
+
+ Title: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file | Microsoft Docs
+
+description: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 mountv1 command
+
+Use the `blobfuse2 mountv1` command to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 mountv1 [path] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[path]`
+
+Specify a file path to the directory where the storage container will be mounted. Example:
+
+```bash
+blobfuse2 mountv1 ./mount_path ...
+```
+
+## Flags (options)
+
+Some flags are inherited from the parent command, [`blobfuse2`](blobfuse2-commands.md), and others only apply to the `blobfuse2 mountv1` command.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from the parent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply only to the BlobFuse2 mountv1 command
+
+The following flags apply only to the `blobfuse2 mountv1` command:
+
+| Flag | Short<br />version | Value<br />type | Default<br />value | Description |
+|--|--|--|--|--|
+| background-download | | boolean | false | File download to run in the background on open call |
+| basic-remount-check | | boolean | false | Check for an already mounted status using /etc/mtab |
+| block-size-mb | | uint | | Size of a block to be downloaded during streaming<br /><sub>(in MB)</sub> |
+| ca-cert-file | | string | | Specifies the proxy pem certificate path if it's not in the default path |
+| cache-on-list | | boolean | true | Cache attributes on listing |
+| cache-poll-timeout-msec | | uint | | Time to poll for possible expired files awaiting cache eviction<br /><sub>(in milliseconds)</sub> |
+| cache-size-mb | | float | | File cache size<br /><sub>(in MB)</sub> |
+| cancel-list-on-mount-seconds | | uint16 | | A list call to the container is by default issued on mount<br /><sub>(in seconds)</sub> |
+| config-file | | string | ./config.cfg | Input BlobFuse configuration file |
+| container-name | | string | | Required if no configuration file is specified |
+| convert-config-only | | boolean | | Don't mount; only convert the v1 configuration to v2 |
+| d | -d | boolean | false | Mount with foreground and FUSE logs on |
+| empty-dir-check | | boolean | false | Disallows remounting using a non-empty tmp-path |
+| enable-gen1 | | boolean | false | Enables a Gen1 mount |
+| file-cache-timeout-in-seconds | | uint32 | 120 | During this time, blobfuse will not check whether the file is up to date or not<br /><sub>(in seconds)</sub> |
+| high-disk-threshold | | uint32 | | High disk threshold<br /><sub>(as a percentage)</sub> |
+| http-proxy | | string | | HTTP Proxy address |
+| https-proxy | | string | | HTTPS Proxy address |
+| invalidate-on-sync | | boolean | true | Invalidate file/dir on sync/fsync |
+| log-level | | LOG_OFF<br />LOG_CRIT<br />LOG_ERR<br />LOG_WARNING<br />LOG_INFO<br />LOG_DEBUG | LOG_WARNING | The level of logging written to syslog. |
+| low-disk-threshold | | uint32 | | Low disk threshold<br /><sub>(as a percentage)</sub> |
+| max-blocks-per-file | | int | | Maximum number of blocks to be cached in memory for streaming |
+| max-concurrency | | uint16 | | Option to override default number of concurrent storage connections |
+| max-eviction | | uint32 | | Number of files to be evicted from cache at once |
+| max-retry | | int32 | | Maximum retry count if the failure codes are retryable |
+| max-retry-interval-in-seconds | | int32 | | Maximum length of time between two retries<br /><sub>(in seconds)</sub> |
+| no-symlinks | | boolean | false | Whether or not symlinks should be supported |
+| o | -o | strings | | FUSE options |
+| output-file | | string | ./config.yaml | Output Blobfuse configuration file |
+| pre-mount-validate | | boolean | true | Validate blobfuse2 is mounted |
+| required-free-space-mb | | int | | Required free space<br /><sub>(in MB)</sub> |
+| retry-delay-factor | | int32 | | Retry delay between two tries<br /><sub>(in seconds)</sub> |
+| set-content-type | | boolean | false | Turns on automatic 'content-type' property based on the file extension |
+| stream-cache-mb | | uint | | Limit total amount of data being cached in memory to conserve memory footprint of blobfuse<br /><sub>(in MB)</sub> |
+| streaming | | boolean | false | Enable Streaming |
+| tmp-path | | string | n/a | Configures the tmp location for the cache.<br />(Configure the fastest disk (SSD or ramdisk) for best performance). |
+| upload-modified-only | | boolean | false | Turn off unnecessary uploads to storage |
+| use-adls | | boolean | false | Enables blobfuse to access Azure DataLake storage account |
+| use-attr-cache | | boolean | false | Enable attribute cache |
+| use-https | | boolean | false | Enables HTTPS communication with Blob storage |
+
+## Examples
+
+1. Mount a blob container in an Azure Data Lake Storage account using a BlobFuse v1 configuration file:
+
+ ```bash
+ blobfuse2 mountv1 ./mount_dir --config-file=./config.cfg --use-adls=true
+ ```
+
+1. Create a BlobFuse2 configuration file from a v1 configuration file in the same directory, but do not mount any containers:
+
+ ```bash
+ blobfuse2 mountv1 --config-file=./config.cfg --output-file=./config.yaml --convert-config-only=true
+ ```
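+
+1. A sketch that chains the two steps: convert the v1 configuration file to the v2 format first, and then mount using the generated file (assuming the same file names as above):
+
+   ```bash
+   blobfuse2 mountv1 --config-file=./config.cfg --output-file=./config.yaml --convert-config-only=true
+   blobfuse2 mount ./mount_dir --config-file=./config.yaml
+   ```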
+
+## See also
+
+- [The Blobfuse2 mount command](blobfuse2-commands-mount.md)
+- [The Blobfuse2 command set](blobfuse2-commands.md)
storage Blobfuse2 Commands Secure Decrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-decrypt.md
+
+ Title: How to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file | Microsoft Docs
+description: Learn how to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file.
++++ Last updated : 07/26/2022++++
+# How to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file
+
+Use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 secure decrypt --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 secure decrypt` are inherited from the grandparent command, `blobfuse2`, or from the parent command, [`blobfuse2 secure`](blobfuse2-commands-secure.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 secure command
+
+The following flags are inherited from parent command [`blobfuse2 secure`](blobfuse2-commands-secure.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| config-file | string | ./config.yaml | The path to the configuration file |
+| output-file | string | | The path and name for the output file |
+| passphrase | string | | The key to be used for encryption or decryption.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+Decrypt a BlobFuse2 configuration file using a passphrase:
+
+`blobfuse2 secure decrypt --config-file=./config.yaml --passphrase=PASSPHRASE`
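+
+A sketch that writes the decrypted settings to a separate output file using the `output-file` flag described above (the output file name is an arbitrary choice):
+
+`blobfuse2 secure decrypt --config-file=./config.yaml --passphrase=PASSPHRASE --output-file=./config-decrypted.yaml`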
+
+## See also
+
+- [The Blobfuse2 secure encrypt command](blobfuse2-commands-secure-encrypt.md)
+- [The Blobfuse2 secure get command](blobfuse2-commands-secure-get.md)
+- [The Blobfuse2 secure set command](blobfuse2-commands-secure-set.md)
+- [The Blobfuse2 secure command](blobfuse2-commands-secure.md)
storage Blobfuse2 Commands Secure Encrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-encrypt.md
+
+ Title: How to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file | Microsoft Docs
+description: Learn how to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file.
++++ Last updated : 07/26/2022++++
+# How to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file
+
+Use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 secure encrypt --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 secure encrypt` are inherited from the grandparent command, `blobfuse2`, or from the parent command, [`blobfuse2 secure`](blobfuse2-commands-secure.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 secure command
+
+The following flags are inherited from parent command [`blobfuse2 secure`](blobfuse2-commands-secure.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| config-file | string | ./config.yaml | The path to the configuration file |
+| output-file | string | | The path and name for the output file |
+| passphrase | string | | The key to be used for encryption or decryption.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+Encrypt a BlobFuse2 configuration file using a passphrase:
+
+`blobfuse2 secure encrypt --config-file=./config.yaml --passphrase=PASSPHRASE`
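+
+To keep the passphrase out of the command line and shell history, a sketch that supplies it through the BLOBFUSE2_SECURE_CONFIG_PASSPHRASE environment variable documented above:
+
+`export BLOBFUSE2_SECURE_CONFIG_PASSPHRASE=PASSPHRASE`
+
+`blobfuse2 secure encrypt --config-file=./config.yaml`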
+
+## See also
+
+- [The Blobfuse2 secure decrypt command](blobfuse2-commands-secure-decrypt.md)
+- [The Blobfuse2 secure get command](blobfuse2-commands-secure-get.md)
+- [The Blobfuse2 secure set command](blobfuse2-commands-secure-set.md)
+- [The Blobfuse2 secure command](blobfuse2-commands-secure.md)
storage Blobfuse2 Commands Secure Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-get.md
+
+ Title: How to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file | Microsoft Docs
+description: Learn how to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file
++++ Last updated : 07/29/2022++++
+# How to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file
+
+Use the `blobfuse2 secure get` command to display the value of a specified parameter from an encrypted BlobFuse2 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 secure get --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 secure get` are inherited from the grandparent command, `blobfuse2`, or from the parent command, [`blobfuse2 secure`](blobfuse2-commands-secure.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 secure command
+
+The following flags are inherited from parent command [`blobfuse2 secure`](blobfuse2-commands-secure.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| config-file | string | ./config.yaml | The path to the configuration file |
+| output-file | string | | The path and name for the output file |
+| passphrase | string | | The key to be used for encryption or decryption.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+
+### Flags that apply only to the BlobFuse2 secure get command
+
+The following flags apply only to the `blobfuse2 secure get` command:
+
+| Flag | Short<br />version | Value<br />type | Default<br />value | Description |
+|--|--|--|--|--|
+| key | | string | | Configuration key (parameter) to be searched in an encrypted config file |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+Get the value of parameter `logging.log_level` from an encrypted BlobFuse2 configuration file using a passphrase:
+
+`blobfuse2 secure get --config-file=./config.yaml --passphrase=PASSPHRASE --key=logging.log_level`
+
+## See also
+
+- [The Blobfuse2 secure set command](blobfuse2-commands-secure-set.md)
+- [The Blobfuse2 secure encrypt command](blobfuse2-commands-secure-encrypt.md)
+- [The Blobfuse2 secure decrypt command](blobfuse2-commands-secure-decrypt.md)
+- [The Blobfuse2 secure command](blobfuse2-commands-secure.md)
storage Blobfuse2 Commands Secure Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-set.md
+
+ Title: How to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file | Microsoft Docs
+description: Learn how to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file
++++ Last updated : 07/26/2022++++
+# How to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file
+
+Use the `blobfuse2 secure set` command to change the value of a specified parameter in an encrypted BlobFuse2 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 secure set --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 secure set` are inherited from the grandparent command, `blobfuse2`, or from the parent command, [`blobfuse2 secure`](blobfuse2-commands-secure.md).
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from grandparent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+### Flags inherited from the BlobFuse2 secure command
+
+The following flags are inherited from parent command [`blobfuse2 secure`](blobfuse2-commands-secure.md):
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| config-file | string | ./config.yaml | The path to the configuration file |
+| output-file | string | | The path and name for the output file |
+| passphrase | string | | The key to be used for encryption or decryption.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+
+### Flags that apply only to the BlobFuse2 secure set command
+
+The following flags apply only to the `blobfuse2 secure set` command:
+
+| Flag | Short<br />version | Value<br />type | Default<br />value | Description |
+|--|--|--|--|--|
+| key | | string | | Configuration key (parameter) to be updated in an encrypted config file |
+| value | | string | | New value for the configuration key (parameter) to be updated in an encrypted config file |
+
+## Examples
+
+> [!NOTE]
+> The following examples assume you have already created a configuration file in the current directory.
+
+Set the value of parameter `logging.log_level` in an encrypted BlobFuse2 configuration file to "log_debug":
+
+`blobfuse2 secure set --config-file=config.yaml --passphrase=PASSPHRASE --key=logging.log_level --value=log_debug`
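+
+A sketch that reads the value back with `blobfuse2 secure get` to confirm the update, assuming the same encrypted file and passphrase:
+
+`blobfuse2 secure get --config-file=config.yaml --passphrase=PASSPHRASE --key=logging.log_level`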
+
+## See also
+
+- [The Blobfuse2 secure get command](blobfuse2-commands-secure-get.md)
+- [The Blobfuse2 secure encrypt command](blobfuse2-commands-secure-encrypt.md)
+- [The Blobfuse2 secure decrypt command](blobfuse2-commands-secure-decrypt.md)
+- [The Blobfuse2 secure command](blobfuse2-commands-secure.md)
storage Blobfuse2 Commands Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure.md
+
+ Title: How to use the BlobFuse2 secure command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file | Microsoft Docs
+description: Learn how to use the BlobFuse2 secure command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file.
++++ Last updated : 07/26/2022++++
+# How to use the BlobFuse2 secure command
+
+Use the `blobfuse2 secure` command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 secure [command] --[flag-name]=[flag-value]`
+
+## Arguments
+
+`[command]`
+
+The supported subcommands for `blobfuse2 secure` are:
+
+| Command | Description |
+|--|--|
+| [encrypt](blobfuse2-commands-secure-encrypt.md) | Encrypts a BlobFuse2 configuration file |
+| [decrypt](blobfuse2-commands-secure-decrypt.md) | Decrypts a BlobFuse2 configuration file |
+| [get](blobfuse2-commands-secure-get.md) | Gets a specified value from an encrypted BlobFuse2 configuration file |
+| [set](blobfuse2-commands-secure-set.md) | Sets a specified value in an encrypted BlobFuse2 configuration file |
+
+Select one of the command links in the table above to view the documentation for the individual subcommands, including the arguments and flags they support.
+
+## Flags (options)
+
+Some flags are inherited from the parent command, `blobfuse2`, and others only apply to the `blobfuse2 secure` command.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from parent command `blobfuse2`:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply only to the BlobFuse2 secure command
+
+The following flags apply only to the `blobfuse2 secure` command:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| config-file | string | ./config.yaml | The path to the configuration file |
+| output-file | string | | The path and name for the output file |
+| passphrase | string | | The key to be used for encryption or decryption.<br />Can also be specified by the environment variable BLOBFUSE2_SECURE_CONFIG_PASSPHRASE.<br />The key must be 16 (AES-128), 24 (AES-192), or 32 (AES-256) bytes long. |
+
+## Examples
+
+For examples, see the documentation for [the individual subcommands](#arguments).
+
+## See also
+
+- [The Blobfuse2 secure encrypt command](blobfuse2-commands-secure-encrypt.md)
+- [The Blobfuse2 secure decrypt command](blobfuse2-commands-secure-decrypt.md)
+- [The Blobfuse2 secure get command](blobfuse2-commands-secure-get.md)
+- [The Blobfuse2 secure set command](blobfuse2-commands-secure-set.md)
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
+
+ Title: How to use the BlobFuse2 unmount all command to unmount all blob containers in a storage account as a Linux file system | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 unmount all command to unmount all blob containers in a storage account as a Linux file system.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 unmount all command to unmount all existing mount points
+
+Use the `blobfuse2 unmount all` command to unmount all existing BlobFuse2 mount points.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+`blobfuse2 unmount all --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Flags that apply to `blobfuse2 unmount all` are inherited from the grandparent command, [`blobfuse2`](blobfuse2-commands.md).
+
+### Flags inherited from BlobFuse2
+
+The following flags are inherited from grandparent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | Help info for the blobfuse2 command and subcommands |
+
+## Examples
+
+Unmount all BlobFuse2 mount points:
+
+```bash
+blobfuse2 unmount all
+```
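+
+A sketch that verifies the result by listing the remaining BlobFuse2 mount points, which should come back empty if all unmounts succeeded:
+
+```bash
+blobfuse2 mount list
+```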
+
+## See also
+
+- [The Blobfuse2 unmount command](blobfuse2-commands-unmount.md)
+- [The Blobfuse2 mount all command](blobfuse2-commands-mount-all.md)
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
+
+ Title: How to use the BlobFuse2 unmount command to unmount an existing mount point| Microsoft Docs
+
+description: How to use the BlobFuse2 unmount command to unmount an existing mount point.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 unmount command
+
+Use the `blobfuse2 unmount` command to unmount one or more existing BlobFuse2 mount points.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+The `blobfuse2 unmount` command has two formats:
+
+`blobfuse2 unmount [mount path] [flags]`
+
+`blobfuse2 unmount all [flags]`
+
+## Arguments
+
+`[mount path]`
+
+Specify a file path to the directory that contains the mount point to be unmounted. Example:
+
+```bash
+blobfuse2 unmount ./mount_path ...
+```
+
+`all`
+
+Unmount all existing BlobFuse2 mount points.
+
+## Flags (options)
+
+The following flags are inherited from the parent command, [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Example | Description |
+|--|--|--|--|--|--|
+| disable-version-check | | boolean | false | --disable-version-check=true | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | | -h or --help | Help info for the blobfuse2 command and subcommands |
+
+No flags apply only to the `blobfuse2 unmount` command.
+
+## Examples
+
+1. Unmount a BlobFuse2 mount instance:
+
+ ```bash
+ blobfuse2 unmount ./mount_path
+ ```
+
+    Alternatively, you can use a native Linux command to do the same:
+
+ ```bash
+ sudo fusermount3 -u ./mount_path
+ ```
+
+1. Unmount all BlobFuse2 mount points (see also [The BlobFuse2 unmount all command](blobfuse2-commands-unmount-all.md)):
+
+ ```bash
+ blobfuse2 unmount all
+ ```
+
+## See also
+
+- [The Blobfuse2 unmount all command](blobfuse2-commands-unmount-all.md)
+- [The Blobfuse2 mount command](blobfuse2-commands-mount.md)
storage Blobfuse2 Commands Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-version.md
+
+ Title: How to use the BlobFuse2 version command to get the current version and optionally check for a newer one | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 version command to get the current version and optionally check for a newer one.
++++ Last updated : 08/01/2022++++
+# BlobFuse2 version command
+
+Use the `blobfuse2 version` command to display the current version of BlobFuse2, and optionally check for the latest version.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+
+## Syntax
+
+`blobfuse2 version --[flag-name]=[flag-value]`
+
+## Flags (options)
+
+Some flags are inherited from the parent command, [`blobfuse2`](blobfuse2-commands.md), and others only apply to the `blobfuse2 version` command.
+
+### Flags inherited from the BlobFuse2 command
+
+The following flags are inherited from the parent command [`blobfuse2`](blobfuse2-commands.md):
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Enables or disables automatic version checking of the BlobFuse2 binaries |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+
+### Flags that apply only to the BlobFuse2 version command
+
+The following flags apply only to the `blobfuse2 version` command:
+
+| Flag | Value type | Default value | Description |
+|--|--|--|--|
+| check | boolean | false | Check for the latest version |
+
+## Examples
+
+Display the current version of BlobFuse2 and check whether a newer version is available:
+
+`blobfuse2 version --check=true`
+
+## See also
+
+- [The Blobfuse2 command set](blobfuse2-commands.md)
storage Blobfuse2 Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md
+
+ Title: How to use the BlobFuse2 command set | Microsoft Docs
+
+description: Learn how to use the BlobFuse2 command set to mount blob storage containers as file systems on Linux, and manage them.
++++ Last updated : 08/01/2022++++
+# How to use the BlobFuse2 command set
+
+This reference shows how to use the BlobFuse2 command set to mount Azure blob storage containers as file systems on Linux, and how to manage them.
+
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+>
+> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
+> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## Syntax
+
+The `blobfuse2` command has two formats:
+
+`blobfuse2 --[flag-name]=[flag-value]`
+
+`blobfuse2 [command] [arguments] --[flag-name]=[flag-value]`
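+
+For example, a minimal sketch of each format, using the version flag and the mount command documented on this page (the mount path and configuration file names are assumptions):
+
+`blobfuse2 --version`
+
+`blobfuse2 mount ./mount_path --config-file=./config.yaml`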
+
+## Flags (options)
+
+Most flags are specific to individual BlobFuse2 commands. See the documentation for [each command](#commands) for details and examples.
+
+The following options can be used without a command or are inherited by individual commands:
+
+| Flag | Short version | Value type | Default value | Description |
+|--|--|--|--|--|
+| disable-version-check | | boolean | false | Whether to disable the automatic BlobFuse2 version check |
+| help | -h | n/a | n/a | Help info for the blobfuse2 command and subcommands |
+| version | -v | n/a | n/a | Display BlobFuse2 version information |
+
+### Commands
+
+The supported commands for BlobFuse2 are:
+
+| Command | Description |
+|--|--|
+| [mount](blobfuse2-commands-mount.md) | Mounts an Azure blob storage container as a file system in Linux or lists mounted file systems |
+| [mountv1](blobfuse2-commands-mountv1.md) | Mounts a blob container using legacy BlobFuse configuration and CLI parameters |
+| [unmount](blobfuse2-commands-unmount.md) | Unmounts a BlobFuse2-mounted file system |
+| completion | Generates an autocompletion script for BlobFuse2 for the specified shell |
+| secure | Encrypts or decrypts a configuration file, or gets or sets values in an encrypted configuration file |
+| [version](blobfuse2-commands-version.md) | Displays the current version of BlobFuse2 |
+| [help](blobfuse2-commands-help.md) | Gives help information about any command |
+
+## Arguments
+
+BlobFuse2 command arguments are specific to the individual commands. See the documentation for [each command](#commands) for details and examples.
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
stream-analytics Stream Analytics Job Analysis With Metric Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-analysis-with-metric-dimensions.md
Title: Analyze Azure Stream Analytics job performance by using metric dimensions
-description: This article describes how to analyze stream analytics job with metric dimension.
+ Title: Analyze Stream Analytics job performance by using metrics and dimensions
+description: This article describes how to use Azure Stream Analytics metrics and dimensions to analyze a job's performance.
Last updated 07/07/2022
-# Analyze Stream Analytics job performance with metrics dimensions
+# Analyze Stream Analytics job performance by using metrics and dimensions
-To understand the Stream Analytics job's health, it's important to know how to utilize the job's metrics and dimensions. You can use Azure portal or VS code ASA extension or SDK to get and view the metrics and dimensions, which you're interested in.
+To understand an Azure Stream Analytics job's health, it's important to know how to use the job's metrics and dimensions. You can use the Azure portal, the Visual Studio Code Stream Analytics extension, or an SDK to get the metrics and dimensions that you're interested in.
-This article demonstrates how to use Stream Analytics job metrics and dimensions to analyze the job's performance through the Azure portal.
+This article demonstrates how to use Stream Analytics job metrics and dimensions to analyze a job's performance through the Azure portal.
-Watermark delay and backlogged input events are the main metrics to determine performance of your Streaming analytics job. If your job's watermark delay is continuously increasing and inputs events are backlogged, it implies that your job is unable to keep up with the rate of input events and produce outputs in a timely manner. Let's look at several examples to analyze the job's performance through the watermark delay metric data as a starting point.
+Watermark delay and backlogged input events are the main metrics to determine the performance of your Stream Analytics job. If your job's watermark delay is continuously increasing and input events are backlogged, your job can't keep up with the rate of input events and produce outputs in a timely manner.
-## No input for certain partition causes job watermark delay increasing
+Let's look at several examples to analyze a job's performance through the **Watermark Delay** metric data as a starting point.
-If your embarrassingly parallel job's watermark delay is steadily increased, you can go to **Metrics** and follow these steps to find out if the root cause is due to no data in some partitions of your input source.
-1. First, you can check which partition has the watermark delay increasing by selecting watermark delay metric and splitting it by "Partition ID" dimension. For example, you identify that the partition#465 has high watermark delay.
+## No input for a certain partition increases job watermark delay
- :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png" alt-text="Diagram that show the watermark delay splitting with Partition ID for the case of no input in certain partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png":::
+If your embarrassingly parallel job's watermark delay is steadily increasing, go to **Metrics**. Then use these steps to find out if the root cause is a lack of data in some partitions of your input source:
-2. You can then check if there's any input data missing for this partition. To do this, you can select Input Events metric and filter it to this specific partition ID.
+1. Check which partition has the increasing watermark delay. Select the **Watermark Delay** metric and split it by the **Partition ID** dimension. In the following example, partition 465 has a high watermark delay.
- :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png" alt-text="Diagram that shows the Input Events splitting with Partition ID for the case of no input in certain partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png":::
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png" alt-text="Screenshot of a chart that shows watermark delay splitting by Partition ID for the case of no input in a partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png":::
+2. Check if any input data is missing for this partition. Select the **Input Events** metric and filter it to this specific partition ID.
-What action could you take further?
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png" alt-text="Screenshot of a chart that shows Input Events splitting by Partition ID for the case of no input in a partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png":::
-- As you can see, the watermark delay for this partition is increasing as there's no input events flowing into this partition. If your job's late arrival tolerance window is several hours and no input data is flowing into a partition, it's expected that the watermark delay for that partition continues to increase until the late arrival window is reached. For example, if your late arrival tolerance is 6 hours and input data isn't flowing into input partition 1, watermark delay for output partition 1 will increase until it reaches 6 hours. You can check if your input source is producing data as expected.
+### What further action can you take?
-## Input data-skew causes high watermark delay
+The watermark delay for this partition is increasing because no input events are flowing into this partition. If your job's tolerance window for late arrivals is several hours and no input data is flowing into a partition, it's expected that the watermark delay for that partition will continue to increase until the late arrival window is reached.
-As mentioned in the above case, when you see your embarrassingly parallel job having high watermark delay, the first thing to do is to check the watermark delay splitting by "Partition ID" dimension to identify if all the partitions have high watermark delay or just a few of them.
+For example, if your late arrival window is 6 hours and input data isn't flowing into input partition 1, the watermark delay for output partition 1 will increase until it reaches 6 hours. You can check if your input source is producing data as expected.
-For this example, you can start by splitting the watermark delay metric by **Partition ID** dimension.
+## Input data skew causes a high watermark delay
+As mentioned in the preceding case, when your embarrassingly parallel job has a high watermark delay, the first thing to do is to split the **Watermark Delay** metric by the **Partition ID** dimension. You can then identify whether all the partitions have high watermark delay, or just a few of them.
-As you can see, partition#0 and partition#1 have higher watermark delay (20 ~ 30s) than other eight partitions. The other partitions' watermark delays are always steady at 8s~10 s. Then, let's check what the input data looks like for all these partitions with the metric "Input Events" splitting by "Partition ID":
+In the following example, partitions 0 and 1 have higher watermark delay (about 20 to 30 seconds) than the other eight partitions have. The other partitions' watermark delays are always steady at about 8 to 10 seconds.
+Let's check what the input data looks like for all these partitions with the metric **Input Events** split by **Partition ID**:
-What action could you take further?
-As shown in screenshot above, partition#0 and partition#1 that have high watermark delay, are receiving significantly more input data than other partitions. We call this "data-skew". This means that the streaming nodes processing the partitions with data-skew need to consume more resources (CPU and memory) than others as shown below.
+### What further action can you take?
+As shown in the example, the partitions (0 and 1) that have a high watermark delay are receiving significantly more input data than other partitions are. We call this *data skew*. The streaming nodes that are processing the partitions with data skew need to consume more CPU and memory resources than others do, as shown in the following screenshot.
-Streaming nodes that process partitions with higher data skew will exhibit higher CPU and/or SU (memory) utilization that will affect job's performance and result in increasing watermark delay. To mitigate this, you'll need to repartition your input data more evenly.
+Streaming nodes that process partitions with higher data skew will exhibit higher CPU and/or streaming unit (SU) utilization. This utilization will affect the job's performance and increase watermark delay. To mitigate this, you need to repartition your input data more evenly.
-## Overloaded CPU/memory causes watermark delay increasing
+## Overloaded CPU or memory increases watermark delay
-When an embarrassingly parallel job has watermark delay increasing, it may not just happen on one or several partitions, but all of the partitions. How to confirm my job is falling into this case?
-1. First, split the watermark delay with "Partition ID" dimension, same as the case above. For example, the below job:
+When an embarrassingly parallel job has an increasing watermark delay, it might happen on not just one or several partitions, but all of the partitions. How do you confirm that your job is falling into this case?
- :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png" alt-text="Diagram that shows the watermark delay splitting with Partition ID for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png":::
+1. Split the **Watermark Delay** metric by **Partition ID**. For example:
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png" alt-text="Screenshot of a chart that shows Watermark Delay split by Partition ID for the case of overloaded CPU and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png":::
-2. Split the "Input Events" metric with "Partition IDs" to confirm if there's data-skew in input data per partitions.
-3. Then, check the CPU and SU utilization to see if the utilization in all streaming nodes is too high.
+2. Split the **Input Events** metric by **Partition ID** to confirm if there's data skew in input data for each partition.
+3. Check the CPU and SU utilization to see if the utilization in all streaming nodes is too high.
- :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png" alt-text="Diagram that show the CPU and memory utilization splitting by Node name for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png":::
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png" alt-text="Screenshot of a chart that shows CPU and memory utilization split by node name for the case of overloaded CPU and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png":::
+4. If the CPU and SU utilization is very high (more than 80 percent) in all streaming nodes, you can conclude that this job has a large amount of data being processed within each streaming node.
-4. If the utilization of CPU and SU is very high (>80%) in all streaming nodes, it can conclude that this job has a large amount of data being processed within each streaming node. You further check how many partitions are allocated to one streaming node by checking the "Input Events" metrics with filter by a streaming node ID with "Node Name" dimension and splitting by "Partition ID". See the screenshot below:
+ You can further check how many partitions are allocated to one streaming node by checking the **Input Events** metric. Filter by streaming node ID with the **Node Name** dimension, and split by **Partition ID**.
- :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png" alt-text="Diagram that shows the partition count on one streaming node for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png":::
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png" alt-text="Screenshot of a chart that shows the partition count on one streaming node for the case of overloaded CPU and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png":::
-5. From the above screenshot, you can see there are four partitions allocated to one streaming node that occupied nearly 90% ~ 100% of the streaming node resource. You can use the similar approach to check the rest streaming nodes to confirm if they're also processing four partitions data.
+5. The preceding screenshot shows that four partitions are allocated to one streaming node that occupies about 90 to 100 percent of the streaming node resource. You can use a similar approach to check the rest of the streaming nodes to confirm that they're also processing data from four partitions.
-What action could you take further?
-
-1. Naturally, you'd think to reduce the partition count for each streaming node to reduce the input data for each streaming node. To achieve this, you can double the SUs to have each streaming node to handle two partitions data, or four times the SUs to have each streaming node to handle one partition data. Refer to [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md) for the relationship between SUs assignment and streaming node count.
-2. What should I do if the watermark delay is still increasing when one streaming node is handling one partition data? Repartition your input with more partitions to reduce the amount of data in each partition. Refer to this document for details: [Use repartitioning to optimize Azure Stream Analytics jobs](./repartition.md)
+### What further action can you take?
+You might want to reduce the partition count for each streaming node to reduce the input data for each streaming node. To achieve this, you can double the SUs to have each streaming node handle data from two partitions. Or you can quadruple the SUs to have each streaming node handle data from one partition. For information about the relationship between SU assignment and streaming node count, see [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md).
+What should you do if the watermark delay is still increasing when one streaming node is handling data from one partition? Repartition your input with more partitions to reduce the amount of data in each partition. For details, see [Use repartitioning to optimize Azure Stream Analytics jobs](./repartition.md).
## Next steps
-* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
+* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
-* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
+* [Dimensions for Azure Stream Analytics metrics](./stream-analytics-job-metrics-dimensions.md)
+* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Metrics Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics-dimensions.md
Title: Azure Stream Analytics metrics dimensions
-description: This article describes the Azure Stream Analytics metric dimensions.
+ Title: Dimensions for Azure Stream Analytics metrics
+description: This article describes dimensions for Azure Stream Analytics metrics.
Last updated 06/30/2022
-# Azure Stream Analytics metrics dimensions
+# Dimensions for Azure Stream Analytics metrics
-Stream Analytics provides a serverless, distributed streaming processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data are partitioned and allocated to different streaming nodes for processing. Azure Stream Analytics has many metrics available to monitor job's health. Metrics can be split by dimensions, like Partition ID or Node name that helps troubleshoot performance issues with your job. To get the metrics full list, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+Azure Stream Analytics provides a serverless, distributed streaming processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data is partitioned and allocated to different streaming nodes for processing.
-## Stream Analytics metrics dimensions
-
-Azure Stream Analytics provides three important dimensions: "Logic Name", "Partition ID", and "Node Name" for metrics splitting and filtering.
+Stream Analytics has [many metrics](./stream-analytics-job-metrics.md) available to monitor a job's health. To troubleshoot performance problems with your job, you can split and filter metrics by using the following dimensions.
| Dimension | Definition | | - | - |
-| Logic Name | The input or output name for a given Azure Stream Analytics (ASA) job. |
-| Partition ID | The ID of the input data partition from input source, for example, if the input source is from event hub, the partition ID is the EH partition ID. For embarrassingly parallel job, the ΓÇ£Partition IDΓÇ¥ in output is the same as the input partition ID. |
-| Node Name | Identifier of a streaming node that is provisioned when your job runs. A streaming node represents amount of compute and memory resources allocated to your job. |
--
+| **Logical Name** | The input or output name for a Stream Analytics job. |
+| **Partition ID** | The ID of the input data partition from an input source. For example, if the input source is an event hub, the partition ID is the event hub's partition ID. For embarrassingly parallel jobs, **Partition ID** in the output is the same as it is in the input. |
+| **Node Name** | The identifier of a streaming node that's provisioned when your job runs. A streaming node represents the amount of compute and memory resources allocated to your job. |
-## "Logic Name" dimension
-The "Logic Name" is the input or output name for a given Azure Stream Analytics (ASA) job. For example: if an ASA job has four inputs and five outputs, you'll see the four individual logic inputs and five individual logical outputs when splitting input and output related metrics with this dimension. (for example, Input Events, Output Events, etc.)
+## Logical Name dimension
-<!--:::image type="content" source="./media/stream-analytics-job-metrics-dimensions/05-input-events-splitting-by-logic-name.png" alt-text="Diagram that shows the Input events metric splitting by Logic Name."::: -->
+**Logical Name** is the input or output name for a Stream Analytics job. For example, assume that a Stream Analytics job has four inputs and five outputs. You'll see the four individual logical inputs and five individual logical outputs when you split input-related and output-related metrics by this dimension.
+<!--:::image type="content" source="./media/stream-analytics-job-metrics-dimensions/05-input-events-splitting-by-logic-name.png" alt-text="Screenshot that shows splitting the Input Events metric by Logical Name."::: -->
-"Logic Name" dimension is available for the metrics below for filtering and splitting:
-- Backlogged Input Events -- Data Conversion Errors-- Early Input Events-- Input Deserialization Errors-- Input Event Bytes-- Input Events-- Input Source Received-- Late Input Events-- Out of order Events-- Output Events-- Watermark delay
-## "Node Name" dimension
+The **Logical Name** dimension is available for filtering and splitting the following metrics:
+- **Backlogged Input Events**
+- **Data Conversion Errors**
+- **Early Input Events**
+- **Input Deserialization Errors**
+- **Input Event Bytes**
+- **Input Events**
+- **Input Source Received**
+- **Late Input Events**
+- **Out-of-Order Events**
+- **Output Events**
+- **Watermark Delay**
-A streaming node represents a set of compute resources that is used to process your input data. Every six Streaming Units (SUs) translates to one node, which the service automatically manages on your behalf. For more information for the relationship between streaming unit and streaming node, see [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md).
+## Node Name dimension
-The "Node Name" is "Streaming Node" level dimension that could help you to drill down certain metrics to the specific streaming node level. For example, the CPU utilization metrics could be split into streaming node level to check the CPU utilization of an individual streaming node.
+A streaming node represents a set of compute resources that's used to process your input data. Every six streaming units (SUs) translates to one node, which the service automatically manages on your behalf. For more information about the relationship between streaming units and streaming nodes, see [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md).
+**Node Name** is a dimension at the streaming node level. It can help you to drill down certain metrics to the specific streaming node level. For example, you can split the **CPU % Utilization** metric by streaming node level to check the CPU utilization of an individual streaming node.
-"Node Name" dimension is available for the metrics below for filtering and splitting:
-- CPU % Utilization (Preview)-- SU % Utilization-- Input Events
-## "Partition ID" dimension
+The **Node Name** dimension is available for filtering and splitting the following metrics:
+- **CPU % Utilization** (preview)
+- **SU (Memory) % Utilization**
+- **Input Events**
-When streaming data is ingested into Azure Stream Analytics service for processing, the input data is distributed to streaming nodes according to the partitions in input source. The "Partition ID" is the ID of the input data partition from input source, for example, if the input source is from event hub, the partition ID is the EH partition ID. The "Partition ID" is the same as it in the output as well.
+## Partition ID dimension
+When streaming data is ingested into the Azure Stream Analytics service for processing, the input data is distributed to streaming nodes according to the partitions in the input source. The **Partition ID** dimension is the ID of the input data partition from the input source.
+For example, if the input source is an event hub, the partition ID is the event hub's partition ID. **Partition ID** in the input is the same as it is in the output.
-"Partition ID" dimension is available for the metrics below for filtering and splitting:
-- Backlogged Input Events-- Data Conversion Errors-- Early Input Events-- Input Deserialization Errors-- Input Event Bytes-- Input Events-- Input Source Received-- Late Input Events-- Output Events-- Watermark delay
+The **Partition ID** dimension is available for filtering and splitting the following metrics:
+- **Backlogged Input Events**
+- **Data Conversion Errors**
+- **Early Input Events**
+- **Input Deserialization Errors**
+- **Input Event Bytes**
+- **Input Events**
+- **Input Source Received**
+- **Late Input Events**
+- **Output Events**
+- **Watermark Delay**
## Next steps * [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
-* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
-* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
+* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
+* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics.md
Title: Azure Stream Analytics job metrics
-description: This article describes Azure Stream Analytics job metrics.
+description: This article describes job metrics in Azure Stream Analytics.
# Azure Stream Analytics job metrics
-Azure Stream Analytics provides plenty of metrics that can be used to monitor and troubleshoot your query and job performance. These metrics data can be viewed through Azure portal in the **Monitoring** section on the **Overview** page.
+Azure Stream Analytics provides plenty of metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics on the **Overview** page of the Azure portal, in the **Monitoring** section.
-You can also navigate to the **Monitoring** section and click **Metrics**. The metric page will be shown for adding the specific metric you'd like to check.
+If you want to check a specific metric, select **Metrics** in the **Monitoring** section. On the page that appears, select the metric.
## Metrics available for Stream Analytics
Azure Stream Analytics provides the following metrics for you to monitor your jo
| Metric | Definition | | - | - |
-| Backlogged Input Events | Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job. You can learn more by visiting [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md). |
-| Data Conversion Errors | Number of output events that couldn't be converted to the expected output schema. Error policy can be changed to 'Drop' to drop events that encounter this scenario. |
-| CPU % Utilization (preview) | The percentage of CPU utilized by your job. Even if this value is very high (90% or above), you shouldn't increase number of SUs based on this metric alone. If number of backlogged input events or watermark delay increases, you can then use this CPU% utilization metric to determine if CPU is the bottleneck. It's possible that this metric has spikes intermittently. It's recommended to do scale tests to determine upper bound of your job after which inputs get backlogged or watermark delay increases due to CPU bottleneck. |
-| Early Input Events | Events whose application timestamp is earlier than their arrival time by more than 5 minutes. |
-| Failed Function Requests | Number of failed Azure Machine Learning function calls (if present). |
-| Function Events | Number of events sent to the Azure Machine Learning function (if present). |
-| Function Requests | Number of calls to the Azure Machine Learning function (if present). |
-| Input Deserialization Errors | Number of input events that couldn't be deserialized. |
-| Input Event Bytes | Amount of data received by the Stream Analytics job, in bytes. This can be used to validate that events are being sent to the input source. |
-| Input Events | Number of records deserialized from the input events. This count doesn't include incoming events that result in deserialization errors. The same events can be ingested by Stream Analytics multiple times in scenarios such as internal recoveries and self joins. Therefore it is recommended not to expect Input Events and Output Events metrics to match if your job has a simple 'pass through' query. |
-| Input Sources Received | Number of messages received by the job. For Event Hub, a message is a single EventData. For Blob, a message is a single blob. Please note that Input Sources are counted before deserialization. If there are deserialization errors, input sources can be greater than input events. Otherwise, it can be less than or equal to input events since each message can contain multiple events. |
-| Late Input Events | Events that arrived later than the configured late arrival tolerance window. Learn more about [Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md) . |
-| Out-of-Order Events | Number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This can be impacted by the configuration of the Out of Order Tolerance Window setting. |
-| Output Events | Amount of data sent by the Stream Analytics job to the output target, in number of events. |
-| Runtime Errors | Total number of errors related to query processing (excluding errors found while ingesting events or outputting results) |
-| SU (Memory) % Utilization | The percentage of memory utilized by your job. If SU % utilization is consistently over 80%, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units. High utilization indicates that the job is using close to the maximum allocated resources. |
-| Watermark Delay | The maximum watermark delay across all partitions of all outputs in the job. |
+| **Backlogged Input Events** | Number of input events that are backlogged. A nonzero value for this metric implies that your job can't keep up with the number of incoming events. If this value is slowly increasing or is consistently nonzero, you should scale out your job. To learn more, see [Understand and adjust streaming units](stream-analytics-streaming-unit-consumption.md). |
+| **Data Conversion Errors** | Number of output events that couldn't be converted to the expected output schema. To drop events that encounter this scenario, you can change the error policy to **Drop**. |
+| **CPU % Utilization** (preview) | Percentage of CPU that your job utilizes. Even if this value is very high (90 percent or more), you shouldn't increase the number of SUs based on this metric alone. If the number of backlogged input events or watermark delays increases, you can then use this metric to determine if the CPU is the bottleneck. <br><br>This metric might have intermittent spikes. We recommend that you do scale tests to determine the upper bound of your job after which inputs are backlogged or watermark delays increase because of a CPU bottleneck. |
+| **Early Input Events** | Events whose application time stamp is earlier than their arrival time by more than 5 minutes. |
+| **Failed Function Requests** | Number of failed Azure Machine Learning function calls (if present). |
+| **Function Events** | Number of events sent to the Azure Machine Learning function (if present). |
+| **Function Requests** | Number of calls to the Azure Machine Learning function (if present). |
+| **Input Deserialization Errors** | Number of input events that couldn't be deserialized. |
+| **Input Event Bytes** | Amount of data that the Stream Analytics job receives, in bytes. You can use this metric to validate that events are being sent to the input source. |
+| **Input Events** | Number of records deserialized from the input events. This count doesn't include incoming events that result in deserialization errors. Stream Analytics can ingest the same events multiple times in scenarios like internal recoveries and self-joins. Don't expect **Input Events** and **Output Events** metrics to match if your job has a simple pass-through query. |
+| **Input Sources Received** | Number of messages that the job receives. For Azure Event Hubs, a message is a single `EventData` item. For Azure Blob Storage, a message is a single blob. <br><br>Note that input sources are counted before deserialization. If there are deserialization errors, input sources can be greater than input events. Otherwise, input sources can be less than or equal to input events because each message can contain multiple events. |
+| **Late Input Events** | Events that arrived later than the configured tolerance window for late arrivals. [Learn more about Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md). |
+| **Out-of-Order Events** | Number of events received out of order that were either dropped or given an adjusted time stamp, based on the event ordering policy. This metric can be affected by the configuration of the **Out-of-Order Tolerance Window** setting. |
+| **Output Events** | Amount of data that the Stream Analytics job sends to the output target, in number of events. |
+| **Runtime Errors** | Total number of errors related to query processing. It excludes errors found while ingesting events or outputting results. |
+| **SU (Memory) % Utilization** | Percentage of memory that your job utilizes. If this metric is consistently over 80 percent, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units (SUs). High utilization indicates that the job is using close to the maximum allocated resources. |
+| **Watermark Delay** | Maximum watermark delay across all partitions of all outputs in the job. |
## Scenarios to monitor
-|Metric|Condition|Time Aggregation|Threshold|Corrective Actions|
+|Metric|Condition|Time aggregation|Threshold|Corrective actions|
|-|-|-|-|-|
-|SU% Utilization|Greater than|Average|80|There are multiple factors that increase SU% Utilization. You can scale with query parallelization or increase the number of streaming units. For more information, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
-|CPU % Utilization|Greater than|Average|90|This likely means that there are some operations such as UDFs, UDAs or complex input deserialization which is requiring a lot of CPU cycles. This is usually overcome by increasing number of Streaming Units of the job.|
-|Runtime errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the inputs, query, or outputs.|
-|Watermark delay|Greater than|Average|When average value of this metric over the last 15 minutes is greater than late arrival tolerance (in seconds). If you have not modified the late arrival tolerance, the default is set to 5 seconds.|Try increasing the number of SUs or parallelizing your query. For more information on SUs, see [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md#how-many-sus-are-required-for-a-job). For more information on parallelizing your query, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
-|Input deserialization errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the input. For more information on resource logs, see [Troubleshoot Azure Stream Analytics using resource logs](stream-analytics-job-diagnostic-logs.md)|
+|**SU (Memory) % Utilization**|Greater than|Average|80|Multiple factors increase the utilization of SUs. You can scale with query parallelization or increase the number of SUs. For more information, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
+|**CPU % Utilization**|Greater than|Average|90|This likely means that some operations (such as user-defined functions, user-defined aggregates, or complex input deserialization) are requiring a lot of CPU cycles. You can usually overcome this problem by increasing the number of SUs for the job.|
+|**Runtime Errors**|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the inputs, query, or outputs.|
+|**Watermark Delay**|Greater than|Average|When the average value of this metric over the last 15 minutes is greater than the late arrival tolerance (in seconds). If you haven't modified the late arrival tolerance, the default is set to 5 seconds.|Try increasing the number of SUs or parallelizing your query. For more information on SUs, see [Understand and adjust streaming units](stream-analytics-streaming-unit-consumption.md#how-many-sus-are-required-for-a-job). For more information on parallelizing your query, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
+|**Input Deserialization Errors**|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the input. For more information on resource logs, see [Troubleshoot Azure Stream Analytics by using resource logs](stream-analytics-job-diagnostic-logs.md).|
## Get help
-For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html)
+For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html).
## Next steps * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
-* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
-* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
-* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
-* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
+* [Dimensions for Azure Stream Analytics metrics](./stream-analytics-job-metrics-dimensions.md)
+* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
+* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
+* [Get started with Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
+
+ Title: Spark Advisor
+description: Spark Advisor automatically analyzes commands and queries, and shows the appropriate advice when you run code or a query.
+ Last updated : 06/23/2022
+# Spark Advisor
+
+Spark Advisor is a system that automatically analyzes commands and queries, and shows the appropriate advice when you run code or a query. After you apply the advice, you can improve execution performance, decrease cost, and fix execution failures.
++
+## Advice provided
+
+### May return inconsistent results when using 'randomSplit'
+Inconsistent or inaccurate results may be returned when working with the results of the 'randomSplit' method. Use Apache Spark (RDD) caching before using the 'randomSplit' method.
+
+Method randomSplit() is equivalent to performing sample() on your data frame multiple times, with each sample refetching, partitioning, and sorting your data frame within partitions. The data distribution across partitions and sorting order is important for both randomSplit() and sample(). If either changes upon data refetch, there may be duplicates, or missing values across splits and the same sample using the same seed may produce different results.
+
+These inconsistencies might not happen on every run. To eliminate them completely, cache your data frame, repartition on one or more columns, or apply aggregate functions such as groupBy.
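+Here's a minimal PySpark sketch of the recommended caching pattern. The DataFrame contents, split weights, and seed are illustrative:
+
+```python
+# Cache (materialize) the DataFrame so that both splits are derived from
+# the same data instead of refetching and resorting it for each sample.
+df = spark.range(0, 1000)
+df.cache()
+df.count()  # action that populates the cache
+
+train, test = df.randomSplit([0.8, 0.2], seed=42)
+print(train.count(), test.count())  # consistent across reruns
+```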
+
+### Table/view name is already in use
+A view already exists with the same name as the created table, or a table already exists with the same name as the created view.
+When this name is used in queries or applications, only the view is returned, no matter which one was created first. To avoid conflicts, rename either the table or the view.
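+The following sketch reproduces the conflict with a hypothetical name `sales`. The temporary view shadows the table during name resolution:
+
+```python
+spark.sql("CREATE TABLE sales (id INT) USING PARQUET")
+spark.sql("CREATE OR REPLACE TEMP VIEW sales AS SELECT 1 AS id")
+
+# This query resolves to the view, regardless of which was created first.
+spark.sql("SELECT * FROM sales").show()
+```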
+
+## Hint-related advice
+### Unable to recognize a hint
+The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
+
+```scala
+spark.sql("SELECT /*+ unknownHint */ * FROM t1")
+```
+
+### Unable to find specified relation names
+Unable to find the relations specified in the hint. Verify that the relation names are spelled correctly and that the relations are accessible within the scope of the hint.
+
+```scala
+spark.sql("SELECT /*+ BROADCAST(unknownTable) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str")
+```
+
+### A hint in the query prevents another hint from being applied
+The selected query contains a hint that prevents another hint from being applied.
+
+```scala
+spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str")
+```
+
+## Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
+This query contains an expression with the Double type. We recommend that you enable the configuration 'spark.advise.divisionExprConvertRule.enable', which can help convert division expressions and reduce rounding error propagation.
+
+```text
+"t.a/t.b/t.c" convert into "t.a/(t.b * t.c)"
+```
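+Here's a sketch of enabling the rule for the current session, assuming this advisor configuration can be set through the standard `spark.conf` mechanism:
+
+```python
+# Assumption: the advisor rule is toggled like an ordinary Spark setting.
+spark.conf.set("spark.advise.divisionExprConvertRule.enable", "true")
+```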
+
+## Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
+This query contains a time-consuming join caused by an `OR` condition within the query. We recommend that you enable the configuration 'spark.advise.nonEqJoinConvertRule.enable', which can help convert the join triggered by the `OR` condition to a sort-merge join (SMJ) or broadcast hash join (BHJ) to accelerate the query.
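+As with the previous rule, a session-level sketch (same assumption about `spark.conf`):
+
+```python
+spark.conf.set("spark.advise.nonEqJoinConvertRule.enable", "true")
+```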
+
+## Next steps
+
+For more information on monitoring pipeline runs, see the [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md) article.
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md
Title: Introduction to file mount in Azure Synapse Analytics
-description: "Tutorial: How to use file mount/unmount API in Azure Synapse Analytics"
+ Title: Introduction to file APIs in Azure Synapse Analytics
+description: This tutorial describes how to use the file mount and file unmount APIs in Azure Synapse Analytics, for both Azure Data Lake Storage Gen2 and Azure Blob Storage.
Previously updated : 07/12/2022 Last updated : 07/27/2022
-# How to use file mount/unmount API in Synapse
+# Introduction to file mount/unmount APIs in Azure Synapse Analytics
-Synapse studio team built two new mount/unmount APIs in the Microsoft Spark Utilities (MSSparkUtils) package, you can use mount to attach remote storage (Azure Blob Storage, Azure Data Lake Storage (ADLS) Gen2) to all working nodes (driver node and worker nodes). Once in place, you can access data in storage as if they were one the local file system with local file API. For more information, see [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md).
+The Azure Synapse Studio team built two new mount/unmount APIs in the Microsoft Spark Utilities (`mssparkutils`) package. You can use these APIs to attach remote storage (Azure Blob Storage or Azure Data Lake Storage Gen2) to all working nodes (driver node and worker nodes). After the storage is in place, you can use the local file API to access data as if it's stored in the local file system. For more information, see [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md).
-The document will show you how to use mount/unmount API in your workspace, mainly includes below sections:
+This article shows you how to use mount/unmount APIs in your workspace. You'll learn:
-+ How to mount Azure Data Lake Storage (ADLS) Gen2 or Azure Blob Storage
-+ How to access files under mount point via local file system API
-+ How to access files under mount point using `mssparktuils` fs API
-+ How to access files under mount point using Spark Read API
-+ How to unmount the mount point
++ How to mount Data Lake Storage Gen2 or Blob Storage.
++ How to access files under the mount point via the local file system API.
++ How to access files under the mount point by using the `mssparkutils fs` API.
++ How to access files under the mount point by using the Spark read API.
++ How to unmount the mount point.
-> + Azure Fileshare mount is temporarily disabled, you can use ADLS Gen2/blob mount following the [How to mount Gen2/blob Storage](#How-to-mount-Gen2/blob-Storage).
+> Azure file-share mounting is temporarily disabled. You can use Data Lake Storage Gen2 or Azure Blob Storage mounting instead, as described in the next section.
>
-> + Azure Data Lake storage Gen1 storage is not supported, you can migrate to ADLS Gen2 following the [Migration gudiance](../../storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md) before using mount APIs.
+> Azure Data Lake Storage Gen1 storage is not supported. You can migrate to Data Lake Storage Gen2 by following the [Azure Data Lake Storage Gen1 to Gen2 migration guidance](../../storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md) before using the mount APIs.
<a id="How-to-mount-Gen2/blob-Storage"></a>
-## How to mount storage
+## Mount storage
-Here we will illustrate how to mount Azure Data Lake Storage (ADLS) Gen2 or Azure Blob Storage step by step as an example, mounting blob storage works similarly.
+This section illustrates how to mount Data Lake Storage Gen2 step by step as an example. Mounting Blob Storage works similarly.
-Assuming you have one ADLS Gen2 account named `storegen2` and the account has one container name `mycontainer`, and you want to mount the `mycontainer` to `/test` of your Spark pool.
+The example assumes that you have one Data Lake Storage Gen2 account named `storegen2`. The account has one container named `mycontainer` that you want to mount to `/test` in your Spark pool.
-![Screenshot of ADLS Gen2 storage account](./media/synapse-file-mount-api/gen2-storage-account.png)
+![Screenshot of a Data Lake Storage Gen2 storage account.](./media/synapse-file-mount-api/gen2-storage-account.png)
-To mount container `mycontainer`, `mssparkutils` need to check whether you have the permission to access the container at first, currently we support three authentication methods to trigger mount operation, **LinkedService**, **accountKey**, and **sastoken**.
+To mount the container called `mycontainer`, `mssparkutils` first needs to check whether you have the permission to access the container. Currently, Azure Synapse Analytics supports three authentication methods for the trigger mount operation: `linkedService`, `accountKey`, and `sastoken`.
-### Via Linked Service (recommend):
+### Mount by using a linked service (recommended)
-Trigger mount via linked Service is our recommend way, there won't have any security leak issue by this way, since `mssparkutils` doesn't store any secret/auth value itself, and it will always fetch auth value from linked service to request blob data from the remote storage.
+We recommend a trigger mount via linked service. This method avoids security leaks, because `mssparkutils` doesn't store any secret or authentication values itself. Instead, `mssparkutils` always fetches authentication values from the linked service to request blob data from remote storage.
-![Screenshot of link services](./media/synapse-file-mount-api/synapse-link-service.png)
+![Screenshot of linked services.](./media/synapse-file-mount-api/synapse-link-service.png)
-You can create linked service for ADLS Gen2 or blob storage. Currently, two authentication methods are supported when created linked service, one is using account key, another is using managed identity.
+You can create a linked service for Data Lake Storage Gen2 or Blob Storage. Currently, Azure Synapse Analytics supports two authentication methods when you create a linked service:
-+ **Create linked service using account key**
- ![Screenshot of link services using account key](./media/synapse-file-mount-api/synapse-link-service-using-account-key.png)
++ **Create a linked service by using an account key**
-+ **Create linked service using Managed Identity**
- ![Screenshot of link services using managed identity](./media/synapse-file-mount-api/synapse-link-service-using-managed-identity.png)
+ ![Screenshot of selections for creating a linked service by using an account key.](./media/synapse-file-mount-api/synapse-link-service-using-account-key.png)
+
++ **Create a linked service by using a managed identity**
+
+ ![Screenshot of selections for creating a linked service by using a managed identity.](./media/synapse-file-mount-api/synapse-link-service-using-managed-identity.png)
> [!NOTE]
-> + If you create linked service using managed identity as authentication method, please make sure that the workspace MSI has the Storage Blob Data Contributor role of the mounted container.
-> + Please always check the linked service connection to guarantee that the linked service is created successfully.
+> If you create a linked service by using a managed identity as the authentication method, make sure that the workspace MSI file has the Storage Blob Data Contributor role of the mounted container.
-After you create linked service successfully, you can easily mount the container to your Spark pool with below Python code.
+After you create the linked service successfully, you can easily mount the container to your Spark pool by using the following Python code:
```python mssparkutils.fs.mount(
mssparkutils.fs.mount(
) ```
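+For reference, a complete call might look like the following sketch. The storage account and container names come from the earlier example; the linked service name `mygen2account` is hypothetical.
+
+```python
+mssparkutils.fs.mount(
+    "abfss://mycontainer@storegen2.dfs.core.windows.net",
+    "/test",
+    {"linkedService": "mygen2account"}  # hypothetical linked service name
+)
+```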
-**Notice**:
-
-+ You may need to import `mssparkutils` if it not available.
-
- ```python
- From notebookutils import mssparkutils
- ```
-+ It's not recommended to mount a root folder, no matter which authentication method is used.
+> [!NOTE]
+> You might need to import `mssparkutils` if it's not available:
+> ```python
+> from notebookutils import mssparkutils
+> ```
+> We don't recommend that you mount a root folder, no matter which authentication method you use.
-### Via Shared Access Signature Token or Account Key
+### Mount via shared access signature token or account key
-In addition to mount with linked service, `mssparkutils` also support explicitly passing account key or [SAS (shared access signature)](/samples/azure-samples/storage-dotnet-sas-getting-started/storage-dotnet-sas-getting-started/) token as parameter to mount the target.
+In addition to mounting through a linked service, `mssparkutils` supports explicitly passing an account key or [shared access signature (SAS)](/samples/azure-samples/storage-dotnet-sas-getting-started/storage-dotnet-sas-getting-started/) token as a parameter to mount the target.
-For security reasons, it is recommended to store Account key or SAS token in Azure Key Vaults (as the below example figure shows), then retrieving them with `mssparkutil.credentials.getSecret` API. For more information, see [Manage storage account keys with Key Vault and the Azure CLI (legacy)](../../key-vault/secrets/overview-storage-keys.md).
+For security reasons, we recommend that you store account keys or SAS tokens in Azure Key Vault (as the following example screenshot shows). You can then retrieve them by using the `mssparkutil.credentials.getSecret` API. For more information, see [Manage storage account keys with Key Vault and the Azure CLI (legacy)](../../key-vault/secrets/overview-storage-keys.md).
-![Screenshot of key vaults](./media/synapse-file-mount-api/key-vaults.png)
+![Screenshot that shows a secret stored in a key vault.](./media/synapse-file-mount-api/key-vaults.png)
-Here is the sample code.
+Here's the sample code:
```python from notebookutils import mssparkutils
mssparkutils.fs.mount(
) ```
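+A fuller sketch, assuming the account key is stored in the key vault named `MountKV` under a hypothetical secret name `storegen2-key`:
+
+```python
+from notebookutils import mssparkutils
+
+# Retrieve the account key from Azure Key Vault instead of hard-coding it.
+account_key = mssparkutils.credentials.getSecret("MountKV", "storegen2-key")
+
+mssparkutils.fs.mount(
+    "abfss://mycontainer@storegen2.dfs.core.windows.net",
+    "/test",
+    {"accountKey": account_key}
+)
+```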
-> [!Note]
-> For security reasons, do not store credentials in code.
+> [!NOTE]
+> For security reasons, don't store credentials in code.
<!
-## How to mount Azure File Shares
+## Mount Azure file shares
> [!WARNING]
-> Fileshare mount is temporarily disable due to tech limitation issue. Please using blob/gen2 mount following above steps as workaround.
+> File-share mounting is temporarily disabled because of technical limitations. As a workaround, use a Data Lake Storage Gen2 or Blob Storage mount by following the preceding steps.
+
+The following example assumes that you have a Data Lake Storage Gen2 storage account named `storegen2`. The account has one file share named `myfileshare`. You want to mount `myfileshare` to `/test` for your Spark pool.
-Assuming you have a ADLS Gen2 storage account named `storegen2` and the account has one file share named `myfileshare`, and you want to mount the `myfileshare` to `/test` of your spark pool.
-![Screenshot of file share](./media/synapse-file-mount-api/file-share.png)
+![Screenshot of a file share.](./media/synapse-file-mount-api/file-share.png)
-Mount azure file share only supports the account key authentication method, below is the code sample to mount **myfileshare** to `/test` and we reuse the Azure Key Value settings of `MountKV` here:
+A mounted Azure file share supports only the account key authentication method. The following code example mounts `myfileshare` to `/test`. The example reuses the Azure Key Vault settings of `MountKV`.
```python from notebookutils import mssparkutils
mssparkutils.fs.mount(
) ```
-In the above example, we pre-defined the schema format of source URL for the file share to: `https://<filesharename>@<accountname>.file.core.windows.net`, and we stored the account key in AKV, and retrieving them with `mssparkutil.credentials.getSecret` API instead of explicitly passing it to the mount API.
+The example predefines the schema format of the source URL for the file share to `https://<filesharename>@<accountname>.file.core.windows.net`. We stored the account key in Key Vault. We retrieved the account key by using the `mssparkutil.credentials.getSecret` API instead of explicitly passing it to the mount API.
-
+## Access files under the mount point via the local file system API
-## How to access files under mount point via local file system API
+After the mount runs successfully, you can access the data via the local file system API. The mount point is always created under the `/synfs` folder of the node, and it's scoped to the job/session level.
-Once the mount run successfully, you can access data via local file system API, while currently we limit the mount point always be created under **/synfs** folder of node and it was scoped to job/session level.
+For example, if you mount `mycontainer` to the `/test` folder, the created local mount point is `/synfs/{jobid}/test`. If you want to access the mount point via local `fs` APIs after a successful mount, the local path should be `/synfs/{jobid}/test`.
-So, for example if you mount `mycontainer` to `/test` folder, the created local mount point is `/synfs/{jobid}/test`, that means if you want to access mount point via local fs APIs after a successful mount, the local path used should be `/synfs/{jobid}/test`
-
-Below is an example to show how it works.
+The following example shows how it works:
```python jobId = mssparkutils.env.getJobId()
f.close()
``` >
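+A minimal end-to-end sketch, with a hypothetical file name:
+
+```python
+jobId = mssparkutils.env.getJobId()
+
+# Write and then read back through the local mount point.
+f = open(f"/synfs/{jobId}/test/myFile.txt", "a")
+f.write("Hello, world.")
+f.close()
+
+with open(f"/synfs/{jobId}/test/myFile.txt", "r") as f:
+    print(f.read())
+```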
-## How to access files under mount point using mssparktuils fs API
-
-The main purpose of the mount operation is to let customer access the data stored in remote storage account using local file system API, you can also access data using `mssparkutils fs` API with mounted path as a parameter. The path format used here is a little different.
+## Access files under the mount point by using the mssparkutils fs API
-Assuming you mounted to the ADLS Gen2 container `mycontainer` to `/test` using mount API.
+The main purpose of the mount operation is to let customers access the data stored in a remote storage account by using a local file system API. You can also access the data by using the `mssparkutils fs` API with a mounted path as a parameter. The path format used here is a little different.
-When you access the data using local file system API, as above section shared, the path format is like
+Assume that you mounted the Data Lake Storage Gen2 container `mycontainer` to `/test` by using the mount API. When you access the data by using a local file system API, the path format is like this:
`/synfs/{jobId}/test/{filename}`
-While when you want to access the data with `mssparkutils fs` API, the path format is like:
+When you want to access the data by using the `mssparkutils fs` API, the path format is like this:
`synfs:/{jobId}/test/{filename}`
-You can see the `synfs` is used as schema in this case instead of a part of the mounted path.
+You can see that `synfs` is used as the schema in this case, instead of a part of the mounted path.
-Below are three examples to show how to access file with mount point path using `mssparkutils fs`, while **49** is a Spark job ID we got from calling `mssparkutils.env.getJobId()`.
+The following three examples show how to access a file with a mount point path by using `mssparkutils fs`. In the examples, `49` is a Spark job ID that we got from calling `mssparkutils.env.getJobId()`.
-+ List dirs:
++ List directories:

  ```python
  mssparkutils.fs.ls("synfs:/49/test")
  ```
++ Read file content:

  ```python
  mssparkutils.fs.head("synfs:/49/test/myFile.txt")
  ```
-+ Create directory:
++ Create a directory:

  ```python
  mssparkutils.fs.mkdirs("synfs:/49/test/newdir")
  ```
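A hedged companion example, not part of the original list: it assumes the same job ID `49` and uses the standard `mssparkutils.fs.put` helper to write a file through the mounted path.

```python
# Sketch under assumptions: job ID 49 and the directory created above
mssparkutils.fs.put("synfs:/49/test/newdir/myNewFile.txt", "Hello from the mount point.", True)
```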
-
-
-## How to access files under mount point using Spark Read API
+## Access files under the mount point by using the Spark read API
-You can also use Spark read API with mounted path as parameter to access the data after mount as well, the path format here is same with the format of using `mssparkutils fs` API:
+You can also use the Spark read API with a mounted path as a parameter to access the data after mounting. The path format here is the same as the format for the `mssparkutils fs` API:
-`synfs:/{jobId}/test/{filename} `
-
-Below are two code examples, one is for a mounted ADLS Gen2 storage, another for a mounted blob storage.
+`synfs:/{jobId}/test/{filename}`
<a id="read-file-from-a-mounted-gen2-storage-account"></a>
-### Read file from a mounted ADLS Gen2 storage account
+### Read a file from a mounted Data Lake Storage Gen2 storage account
-The below example assumes an ADLS Gen2 storage was already mounted then read file using mount path.
+The following example assumes that a Data Lake Storage Gen2 storage account was already mounted, and then you read the file by using a mount path:
```python
%%pyspark
-# Assume a ADLS Gen2 storage was already mounted then read file using mount path
-
df = spark.read.load("synfs:/49/test/myFile.csv", format='csv')
df.show()
```
-### Read file from a mounted blob storage account
+### Read a file from a mounted Blob Storage account
-Notice that if you mounted a blob storage account then want to access it using `mssparkutils` or Spark API, you need to explicitly configure the sas token via spark configuration at first before try to mount container using mount API.
+If you mounted a Blob Storage account and want to access it by using `mssparkutils` or the Spark API, you need to explicitly configure the SAS token via Spark configuration before you try to mount the container by using the mount API:
-1. Update Spark configuration as below code example if you want to access it using `mssparkutils` or Spark API after trigger mount, you can bypass this step if you only want to access it using local file api after mount:
+1. To access a Blob Storage account by using `mssparkutils` or the Spark API after you trigger the mount, update the Spark configuration as shown in the following code example. You can bypass this step if you want to access the account only by using the local file API after mounting.
```python
blob_sas_token = mssparkutils.credentials.getConnectionStringOrCreds("myblobstorageaccount")

spark.conf.set('fs.azure.sas.mycontainer.<blobStorageAccountName>.blob.core.windows.net', blob_sas_token)
```
-2. Create link service `myblobstorageaccount` and mount blob storage account with link service:
+2. Create the linked service `myblobstorageaccount`, and mount the Blob Storage account by using the linked service:
```python
%%spark
// Reconstructed sketch: the mount arguments are elided in the source; the values below are assumptions.
mssparkutils.fs.mount(
    "wasbs://mycontainer@<blobStorageAccountName>.blob.core.windows.net",
    "/test",
    Map("linkedService" -> "myblobstorageaccount")
)
```
-3. Mount the blob storage container and then read file using mount path through the local file API:
+3. Mount the Blob Storage container, and then read the file by using a mount path through the local file API:
```python
- # mount blob storage container and then read file using mount path
- with open("/synfs/64/test/myFile.txt") as f:
+ # mount the Blob Storage container, and then read the file by using a mount path
+ with open("/synfs/64/test/myFile.txt") as f:
    print(f.read())
```
-4. Read data from mounted blob storage through Spark read API:
+4. Read the data from the mounted Blob Storage container through the Spark read API:
```python
%%spark
// Reconstructed sketch: read the file written in the previous step through the synfs scheme.
val df = spark.read.text("synfs:/64/test/myFile.txt")
df.show()
```
-## How to unmount the mount point
+## Unmount the mount point
-Unmount with your mount point, `/test` in our example:
+Use the following code to unmount your mount point (`/test` in this example):
```python
mssparkutils.fs.unmount("/test")
```
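As a hedged usage sketch (assuming the `/test` mount and the file from the earlier examples), you can pair the mounted work with a `try`/`finally` block so the unmount runs even if the job body fails:

```python
from notebookutils import mssparkutils

try:
    # Work with the mounted data here (assumes /test was mounted earlier)
    jobId = mssparkutils.env.getJobId()
    with open(f"/synfs/{jobId}/test/myFile.txt") as f:
        print(f.read())
finally:
    # Explicitly release the mount point and its disk space
    mssparkutils.fs.unmount("/test")
```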
## Known limitations
-+ The `mssparkutils fs help` function hasn't added the description about mount/unmount part yet.
++ The `mssparkutils fs help` function doesn't yet include a description of the mount and unmount operations.
-+ In further, we will support auto unmount mechanism to remove the mount point when application run finished, currently it not implemented yet. If you want to unmount the mount point to release the disk space, you need to explicitly call unmount API in your code, otherwise, the mount point will still exist in the node even after the application run finished.
++ The unmount mechanism isn't automatic. To release the disk space when the application run finishes, you need to explicitly call an unmount API in your code. Otherwise, the mount point will still exist in the node after the application run finishes.
-+ Mounting ADLS Gen1 storage account is not supported for now.
++ Mounting a Data Lake Storage Gen1 storage account isn't currently supported.

## Next steps

-- [Get Started with Azure Synapse Analytics](../get-started.md)
-- [Monitor your Synapse Workspace](../get-started-monitor.md)
-- [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md)
+- [Get started with Azure Synapse Analytics](../get-started.md)
+- [Monitor your Synapse workspace](../get-started-monitor.md)
+- [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md)
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
To use Active Directory accounts for the share permissions of your file share, y
1. Tick the box to **Enable Azure Active Directory Domain Services (Azure AD DS) for this file share**, then select **Save**. An Organizational Unit (OU) called **AzureFilesConfig** will be created at the root of your domain and a computer account named the same as the storage account will be created in that OU.
-1. To verify the storage account has joined your domain, run the commands below and review the output, replacing the values for `$resourceGroupName` and `$storageAccountName` with your values:
-
- ```powershell
- $resourceGroupName = "resource-group-name"
- $storageAccountName = "storage-account-name"
-
- (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.DirectoryServiceOptions; (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
- ```
- ## Assign RBAC role to users
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
There are different automation and deployment options available depending on whi
There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their virtual desktops and remote apps while also giving them the best possible user experience.
-Users connecting to Azure Virtual Desktop use Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) on port 443, which securely establishes a reverse connection to the service. This means you don't need to open any inbound ports.
+Users connecting to Azure Virtual Desktop securely establish a reverse connection to the service, which means you don't need to open any inbound ports. Transmission Control Protocol (TCP) on port 443 is used by default. However, RDP Shortpath, which establishes a direct User Datagram Protocol (UDP)-based transport, can be used for [managed networks](shortpath.md) and [public networks](shortpath-public.md).
To successfully deploy Azure Virtual Desktop, you'll need to meet the following network requirements:
To successfully deploy Azure Virtual Desktop, you'll need to meet the following
- Make sure this virtual network can connect to your domain controllers and relevant DNS servers if you're using AD DS or Azure AD DS, since you'll need to join session hosts to the domain.

-- Your session hosts and users need to be able to connect to the Azure Virtual Desktop service. This connection also uses TCP on port 443 to a specific list of URLs. For more information, see [Required URL list](safe-url-list.md). You must make sure these URLs aren't blocked by network filtering or a firewall in order for your deployment to work properly and be supported. If your users need to access Microsoft 365, make sure your session hosts can connect to [Microsoft 365 endpoints](/microsoft-365/enterprise/microsoft-365-endpoints).
+- Your session hosts and users need to be able to connect to the Azure Virtual Desktop service. These connections also use TCP on port 443 to a specific list of URLs. For more information, see [Required URL list](safe-url-list.md). You must make sure these URLs aren't blocked by network filtering or a firewall in order for your deployment to work properly and be supported. If your users need to access Microsoft 365, make sure your session hosts can connect to [Microsoft 365 endpoints](/microsoft-365/enterprise/microsoft-365-endpoints).
Also consider the following:
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Below is the list of URLs your session host VMs need to access for Azure Virtual
| `*.wvd.microsoft.com` | 443 | Service traffic | WindowsVirtualDesktop |
| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic | AzureMonitor |
| `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend |
+| `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud |
| `kms.core.windows.net` | 1688 | Windows activation | Internet |
| `azkms.core.windows.net` | 1688 | Windows activation | Internet |
| `mrsglobalsteus2prod.blob.core.windows.net` | 443 | Agent and SXS stack updates | AzureCloud |
Below is the list of URLs your session host VMs need to access for Azure Virtual
>
> | Address | Outbound TCP port | Purpose | Service Tag |
> |--|--|--|--|
-> | `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud |
> | `production.diagnostics.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud |
> | `*xt.blob.core.windows.net` | 443 | Agent traffic | AzureCloud |
> | `*eh.servicebus.windows.net` | 443 | Agent traffic | AzureCloud |
The following table lists optional URLs that your session host virtual machines
|--|--|--|--|
| `*.wvd.azure.us` | 443 | Service traffic | WindowsVirtualDesktop |
| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic | AzureMonitor |
+| `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
| `kms.core.usgovcloudapi.net` | 1688 | Windows activation | Internet |
| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and SXS stack updates | AzureCloud |
| `wvdportalstorageblob.blob.core.usgovcloudapi.net` | 443 | Azure portal support | AzureCloud |
The following table lists optional URLs that your session host virtual machines
>
> | Address | Outbound TCP port | Purpose | Service Tag |
> |--|--|--|--|
-> | `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
> | `monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
> | `fairfax.warmpath.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
> | `*xt.blob.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
By restricting operating system capabilities, you can strengthen the security of
Trusted launch VMs are Gen2 Azure VMs with enhanced security features aimed at protecting against "bottom of the stack" threats through attack vectors such as rootkits, boot kits, and kernel-level malware. The following are the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
->[!NOTE]
->Bring your own custom image for Trusted Launch VM is not yet supported. When deploying an AVD Host Pool, you will be limited to the list of pre-canned OS images listed in the dropdown "Image" combo box.
### Secure Boot
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for Linux virtual machine scale sets
-description: Learn how Azure Hybrid Benefit can apply to virtual machine scale sets and save you money on Linux Virtual Machines in Azure.
+description: Learn how Azure Hybrid Benefit can apply to virtual machine scale sets and save you money on Linux virtual machines in Azure.
documentationcenter: ''
# Explore Azure Hybrid Benefit for Linux virtual machine scale sets
-> [!NOTE]
-> This article focuses on virtual machine scale sets running in Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+Azure Hybrid Benefit can reduce the cost of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) [virtual machine scale sets](./overview.md). Azure Hybrid Benefit for Linux virtual machine scale sets is generally available now. It's available for all RHEL and SLES pay-as-you-go images from Azure Marketplace.
-**Azure Hybrid Benefit for Linux virtual machine scale set is generally available now**. *Azure Hybrid Benefit (AHB)* can reduce the cost of running your *Red Hat Enterprise Linux (RHEL)* and *SUSE Linux Enterprise Server (SLES)* [virtual machine scale sets](./overview.md). AHB is available for all RHEL and SLES Marketplace pay-as-you-go (PAYG) images.
+When you enable Azure Hybrid Benefit, the only fee that you incur is the cost of your scale set infrastructure.
-When you enable AHB, the only fee that you incur is the cost of your scale set infrastructure.
+> [!NOTE]
+> This article focuses on virtual machine scale sets running in Uniform orchestration mode. We recommend using Flexible orchestration for new workloads. For more information, see [Orchestration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-## What is AHB for Linux virtual machine scale sets?
-AHB allows you to switch your virtual machine scale sets to *bring-your-own-subscription (BYOS)* billing. You can use your cloud access licenses from Red Hat or SUSE for this. You can also switch PAYG instances to BYOS without the need to redeploy.
+## What is Azure Hybrid Benefit for Linux virtual machine scale sets?
+Azure Hybrid Benefit allows you to switch your virtual machine scale sets to *bring-your-own-subscription (BYOS)* billing. You can use your cloud access licenses from Red Hat or SUSE for this. You can also switch pay-as-you-go instances to BYOS without the need to redeploy.
-A virtual machine scale set deployed from PAYG marketplace images is charged both infrastructure and software fees when AHB is enabled.
+A virtual machine scale set deployed from pay-as-you-go Azure Marketplace images is charged both infrastructure and software fees when Azure Hybrid Benefit is enabled.
-## Which Linux Virtual Machines can use AHB?
-AHB can be used on all RHEL and SLES PAYG images from Azure Marketplace. AHB isn't yet available for RHEL or SLES BYOS images or custom images from Azure Marketplace.
+## Which Linux virtual machines can use Azure Hybrid Benefit?
+Azure Hybrid Benefit can be used on all RHEL and SLES pay-as-you-go images from Azure Marketplace. Azure Hybrid Benefit isn't yet available for RHEL or SLES BYOS images or custom images from Azure Marketplace.
-Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for AHB if you're already using AHB with Linux Virtual Machines.
+Azure dedicated host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you're already using Azure Hybrid Benefit with Linux virtual machines.
## Get started
-### How to enable AHB for Red Hat virtual machine scale sets
+### Enable Azure Hybrid Benefit for Red Hat virtual machine scale sets
-AHB for RHEL is available to Red Hat customers who meet the following criteria:
+Azure Hybrid Benefit for RHEL is available to Red Hat customers who meet the following criteria:
- Have active or unused RHEL subscriptions that are eligible for use in Azure
-- Have enabled one or more subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
-
-> [!IMPORTANT]
-> Ensure the correct subscription has been enabled in the [cloud-access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program.
+- Have correctly enabled one or more subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
-To start using AHB for Red Hat:
+To start using Azure Hybrid Benefit for Red Hat:
1. Enable your eligible RHEL subscriptions in Azure by using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud).
- The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process are permitted to use AHB.
-1. Apply AHB to any of your new or existing RHEL PAYG virtual machine scale sets. You can use Azure portal or Azure *command-line interface (CLI)* to enable AHB.
-1. Configure update sources for your RHEL Virtual Machines and RHEL subscription compliance guidelines with the following, recommended [next steps](https://access.redhat.com/articles/5419341).
--
-### How to enable AHB for SUSE virtual machine scale sets
+ The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process are permitted to use Azure Hybrid Benefit.
+1. Apply Azure Hybrid Benefit to any of your new or existing RHEL pay-as-you-go virtual machine scale sets. You can use the Azure portal or the Azure CLI to enable Azure Hybrid Benefit.
+1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) to configure update sources for your RHEL virtual machines and for RHEL subscription compliance guidelines.
-To start using AHB for SUSE:
+### Enable Azure Hybrid Benefit for SUSE virtual machine scale sets
-1. Register with the SUSE Public Cloud Program.
-1. Apply AHB to your newly created or existing virtual machine scale set via Azure portal or Azure CLI.
-1. Register your Virtual Machines that are receiving AHB with a separate source of updates.
+To start using Azure Hybrid Benefit for SUSE:
+1. Register with the SUSE public cloud program.
+1. Apply Azure Hybrid Benefit to your newly created or existing virtual machine scale sets via the Azure portal or the Azure CLI.
+1. Register your virtual machines that are receiving Azure Hybrid Benefit with a separate source of updates.
-## How to enable and disable AHB in Azure portal
-### How to enable AHB during virtual machine scale set creation in Azure portal:
-1. Visit [Microsoft Azure portal](https://portal.azure.com/)
-1. Go to 'Create a virtual machine scale set' page on the portal.
- :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb.png" alt-text="Screenshot of the virtual machine scale set blade in the Azure portal.":::
-1. Click on the checkbox to enable AHB and to use cloud access licenses.
- :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-checkbox.png" alt-text="Screenshot of the check box associated with hybrid benefit during the virtual machine scale set create phase in the Azure portal.":::
-1. Create a virtual machine scale set following the next set of instructions
-1. Check the **Configuration** blade. You'll see the option enabled.
- :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png" alt-text="Screenshot of the virtual machine scale set create page in the Azure portal after the use select the hybrid benefit check box.":::
-### How to enable AHB in virtual machine scale sets in Azure portal:
-1. Visit [Microsoft Azure portal](https://portal.azure.com/)
-1. Open the 'virtual machine scale set' page on which you want to apply the conversion.
-1. Go the **Operating system** option on the left. You will see the Licensing section. To enable the AHB conversion, check the 'Yes' radio button and check the Confirmation checkbox.
-![AHB Configuration blade after creating](./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png)
+## Enable Azure Hybrid Benefit in the Azure portal
+### Enable Azure Hybrid Benefit during virtual machine scale set creation
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Go to **Create a virtual machine scale set**.
+
+ :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb.png" alt-text="Screenshot of the portal page for creating a virtual machine scale set.":::
+1. In the **Licensing** section, select the checkbox that asks if you want to use an existing RHEL subscription and the checkbox to confirm that your subscription is eligible.
+
+ :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-checkbox.png" alt-text="Screenshot of the Azure portal that shows checkboxes selected for licensing.":::
+1. Create a virtual machine scale set by following the next set of instructions.
+1. On the **Operating system** pane, confirm that the option is enabled.
+
+ :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png" alt-text="Screenshot of the Azure Hybrid Benefit pane for the operating system after you create a virtual machine.":::
+### Enable Azure Hybrid Benefit in an existing virtual machine scale set
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Open the page for the virtual machine scale set on which you want to apply the conversion.
+1. Go to **Operating system** > **Licensing**. To enable the Azure Hybrid Benefit conversion, select **Yes**, and then select the confirmation checkbox.
+![Screenshot of the Azure portal that shows the Licensing section of the pane for the operating system.](./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png)
-## How to enable and disable AHB using Azure CLI
+## Enable and disable Azure Hybrid Benefit by using the Azure CLI
-You can use the `az vmss update` command to update Virtual Machines. For RHEL Virtual Machines, run the command with a `--license-type` parameter of `RHEL_BYOS`. For SLES Virtual Machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
+In the Azure CLI, you can use the `az vmss update` command to enable Azure Hybrid Benefit. For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`. For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
-### How to enable AHB using a CLI
```azurecli
-# This will enable AHB on a RHEL virtual machine scale set
+# This will enable Azure Hybrid Benefit on a RHEL virtual machine scale set
az vmss update --resource-group myResourceGroup --name myVmName --license-type RHEL_BYOS
-# This will enable AHB on a SLES virtual machine scale set
+# This will enable Azure Hybrid Benefit on a SLES virtual machine scale set
az vmss update --resource-group myResourceGroup --name myVmName --license-type SLES_BYOS
```
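To confirm the change, here's a hedged check; the `virtualMachineProfile.licenseType` query path is an assumption, not from the original article:

```azurecli
# Hedged check: read back the license type from the scale set model
az vmss show --resource-group myResourceGroup --name myVmName --query virtualMachineProfile.licenseType --output tsv
```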
-### How to disable AHB using a CLI
-To disable AHB, use a `--license-type` value of `None`:
+
+To disable Azure Hybrid Benefit, use a `--license-type` value of `None`:
```azurecli
-# This will disable AHB on a Virtual Machine
+# This will disable Azure Hybrid Benefit on a virtual machine
az vmss update -g myResourceGroup -n myVmName --license-type None
```

>[!NOTE]
-> Scale sets have an ["upgrade policy"](./virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) that determine how Virtual Machine's are brought up-to-date with the latest scale set model.
-Hence, if your virtual machine scale set have 'Automatic' upgrade policy, AHB will be applied automatically as Virtual Machine instances get updated.
-If virtual machine scale set have 'Rolling' upgrade policy, based on the scheduled updates, AHB will be applied.
-In case of 'Manual' upgrade policy, you will have to perform a "manual upgrade" of your Virtual Machines.
-
-### How to upgrade virtual machine scale set instances in case of "Manual Upgrade" policy using a CLI
-```azurecli
-# This will bring virtual machine scale set instances up to date with latest virtual machine scale set model
-az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds}
-```
-
-## How to apply AHB at virtual machine scale set creation time
-In addition to applying AHB to pay-as-you-go virtual machine scale set, you can invoke it when you create virtual machine scale sets. The benefits of doing so are threefold:
-- You can provision both PAYG and BYOS virtual machine scale set instances by using the same image and process.
-- It enables future licensing mode changes, something not available with a BYOS-only image.
-- The virtual machine scale set instances will be connected to *Red Hat Update Infrastructure (RHUI)* by default, to ensure that it remains up to date and secure. You can change the updated mechanism after deployment at any time.
+> Scale sets have an [upgrade policy](./virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) that determines how virtual machines are brought up to date with the latest scale set model.
+>
+> If your scale sets have an **Automatic** upgrade policy, Azure Hybrid Benefit will be applied automatically as virtual machines are updated. If your scale sets have a **Rolling** upgrade policy, based on the scheduled updates, Azure Hybrid Benefit will be applied.
+>
+> If your scale sets have a **Manual** upgrade policy, you'll have to manually upgrade your virtual machines by using the Azure CLI:
+>
+> ```azurecli
+> # This will bring virtual machine scale set instances up to date with the latest virtual machine scale set model
+> az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds}
+> ```
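>
> A hedged helper for finding the instance IDs to pass to the command above (an assumption, not part of the original article):
>
> ```azurecli
> # List the instance IDs of the scale set (assumed helper)
> az vmss list-instances --resource-group myResourceGroup --name myScaleSet --query "[].instanceId" --output tsv
> ```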
+
+## Apply Azure Hybrid Benefit to virtual machine scale sets at creation time
+In addition to applying Azure Hybrid Benefit to existing pay-as-you-go virtual machine scale sets, you can invoke it when you create virtual machine scale sets. The benefits of doing so are threefold:
+
+- You can provision both pay-as-you-go and BYOS virtual machine scale sets by using the same image and process.
+- It enables future licensing mode changes. These changes aren't available with a BYOS-only image.
+- The virtual machine scale sets will be connected to Red Hat Update Infrastructure (RHUI) by default, to help keep them up to date and secure. You can change the update mechanism after deployment at any time.
+
+To apply Azure Hybrid Benefit to virtual machine scale sets at creation time by using the Azure CLI, use one of the following commands:
-### How to apply AHB at virtual machine scale set creation time using a CLI
```azurecli
-# This will enable AHB while creating RHEL virtual machine scale set
+# This will enable Azure Hybrid Benefit while creating a RHEL virtual machine scale set
az vmss create --name myVmName --resource-group myResourceGroup --vnet-name myVnet --subnet mySubnet --image myRedHatImageURN --admin-username myAdminUserName --admin-password myPassword --instance-count myInstanceCount --license-type RHEL_BYOS
-# This will enable AHB while creating RHEL virtual machine scale set
+# This will enable Azure Hybrid Benefit while creating a SLES virtual machine scale set
az vmss create --name myVmName --resource-group myResourceGroup --vnet-name myVnet --subnet mySubnet --image mySlesImageURN --admin-username myAdminUserName --admin-password myPassword --instance-count myInstanceCount --license-type SLES_BYOS
```

## Next steps
-* [Learn how to create and update Virtual Machines and add license types (RHEL_BYOS, SLES_BYOS) for AHB by using the Azure CLI](/cli/azure/vmss)
+* [Learn how to create and update virtual machines and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vmss)
virtual-machines Automatic Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-extension-upgrade.md
Automatic Extension Upgrade supports the following extensions (and more are adde
- [Guest Configuration Extension](./extensions/guest-configuration.md) – Linux and Windows
- Key Vault – [Linux](./extensions/key-vault-linux.md) and [Windows](./extensions/key-vault-windows.md)
- [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md)
+- [DSC extension for Linux](extensions/dsc-linux.md)
## Enabling Automatic Extension Upgrade
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-general-purpose-skus.md
The sizes and hardware types available for dedicated hosts vary by region. Refer
## Dadsv5
### Dadsv5-Type1
-The Dadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dadsv5-Type1 runs [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dadsv5-Type1 runs [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dadsv5-Type1 host.
The following packing configuration outlines the max packing of uniform VMs you
## Dasv5
### Dasv5-Type1
-The Dasv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv5-Type1 runs [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dasv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv5-Type1 runs [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv5-Type1 host.
The following packing configuration outlines the max packing of uniform VMs you
| | | | D64as v5 | 1 |
| | | | D96as v5 | 1 |
-## Ddsv5
-### Ddsv5-Type1
-
-The Ddsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ddsv5-Type1 runs [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv5-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--||-|-|
-| 64 | 119 | 768 GiB | D2ds v5 | 32 |
-| | | | D4ds v5 | 22 |
-| | | | D8ds v5 | 11 |
-| | | | D16ds v5 | 5 |
-| | | | D32ds v5 | 2 |
-| | | | D48ds v5 | 1 |
-| | | | D64ds v5 | 1 |
-| | | | D96ds v5 | 1 |
-
-## Dsv5
-### Dsv5-Type1
-
-The Dsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv5-Type1 runs [Dsv5-series](dv5-dsv5-series.md#dsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv5-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 64 | 119 | 768 GiB | D2s v5 | 32 |
-| | | | D4s v5 | 25 |
-| | | | D8s v5 | 12 |
-| | | | D16s v5 | 6 |
-| | | | D32s v5 | 3 |
-| | | | D48s v5 | 2 |
-| | | | D64s v5 | 1 |
-| | | | D96s v5 | 1 |
-
## Dasv4
### Dasv4-Type1
-The Dasv4-Type1 is a Dedicated Host SKU utilizing AMD's 2.35 GHz EPYC™ 7452 processor. It offers 64 physical cores, 96 vCPUs, and 672 GiB of RAM. The Dasv4-Type1 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dasv4-Type1 is a Dedicated Host SKU utilizing AMD's 2.35 GHz EPYC™ 7452 processor. It offers 64 physical cores, 96 vCPUs, and 672 GiB of RAM. The Dasv4-Type1 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv4-Type1 host.
You can also mix multiple VM sizes on the Dasv4-Type1. The following are sample
- 20 D4asv4 + 8 D2asv4 ### Dasv4-Type2
-The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv4-Type2 host.
The following packing configuration outlines the max packing of uniform VMs you
| | | | D64as v4 | 1 |
| | | | D96as v4 | 1 |
+## DCadsv5
+### DCadsv5-Type1
+
+The DCadsv5-Type1 is a Dedicated Host SKU utilizing the AMD 3rd Generation EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The DCadsv5-Type1 runs [DCadsv5-series](dcasv5-dcadsv5-series.md#dcadsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a DCadsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 112 | 768 GiB | DC2ads v5 | 32 |
+| | | | DC4ads v5 | 27 |
+| | | | DC8ads v5 | 14 |
+| | | | DC16ads v5 | 7 |
+| | | | DC32ads v5 | 3 |
+| | | | DC48ads v5 | 2 |
+| | | | DC64ads v5 | 1 |
+| | | | DC96ads v5 | 1 |
+
+## DCasv5
+### DCasv5-Type1
+
+The DCasv5-Type1 is a Dedicated Host SKU utilizing the AMD 3rd Generation EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The DCasv5-Type1 runs [DCasv5-series](dcasv5-dcadsv5-series.md#dcasv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a DCasv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 64 | 112 | 768 GiB | DC2as v5 | 32 |
+| | | | DC4as v5 | 28 |
+| | | | DC8as v5 | 14 |
+| | | | DC16as v5 | 7 |
+| | | | DC32as v5 | 3 |
+| | | | DC48as v5 | 2 |
+| | | | DC64as v5 | 1 |
+| | | | DC96as v5 | 1 |
+
+## DCsv2
+### DCsv2-Type1
+
+The DCsv2-Type1 is a Dedicated Host SKU utilizing the Intel® Coffee Lake (Xeon® E-2288G with SGX technology) processor. It offers 8 physical cores, 8 vCPUs, and 64 GiB of RAM. The DCsv2-Type1 runs [DCsv2-series](dcv2-series.md) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a DCsv2-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 8 | 8 | 64 GiB | DC8 v2 | 1 |
+
+## Ddsv5
+### Ddsv5-Type1
+
+The Ddsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ddsv5-Type1 runs [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 119 | 768 GiB | D2ds v5 | 32 |
+| | | | D4ds v5 | 22 |
+| | | | D8ds v5 | 11 |
+| | | | D16ds v5 | 5 |
+| | | | D32ds v5 | 2 |
+| | | | D48ds v5 | 1 |
+| | | | D64ds v5 | 1 |
+| | | | D96ds v5 | 1 |
+
## Ddsv4
### Ddsv4-Type1

The Ddsv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Ddsv4-Type1 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you
| | | | D48ds v4 | 1 |
| | | | D64ds v4 | 1 |
+## Dsv5
+### Dsv5-Type1
+
+The Dsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv5-Type1 runs [Dsv5-series](dv5-dsv5-series.md#dsv5-series) VMs. Refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 119 | 768 GiB | D2s v5 | 32 |
+| | | | D4s v5 | 25 |
+| | | | D8s v5 | 12 |
+| | | | D16s v5 | 6 |
+| | | | D32s v5 | 3 |
+| | | | D48s v5 | 2 |
+| | | | D64s v5 | 1 |
+| | | | D96s v5 | 1 |
+
## Dsv4
### Dsv4-Type1
The following packing configuration outlines the max packing of uniform VMs you
| | | | D48s v3 | 2 |
| | | | D64s v3 | 1 |
-## DCsv2
-### DCsv2-Type1
-
-The DCsv2-Type1 is a Dedicated Host SKU utilizing the Intel® Coffee Lake (Xeon® E-2288G with SGX technology) processor. It offers 8 physical cores, 8 vCPUs, and 64 GiB of RAM. The DCsv2-Type1 runs [DCsv2-series](dcv2-series.md) VMs.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto a DCsv2-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 8 | 8 | 64 GiB | DC8 v2 | 1 |
-
## Next steps

- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview.
-- There is sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There's a sample template, available at [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
The following packing configuration outlines the max packing of uniform VMs you
| | | | E64as v5 | 1 |
| | | | E96as v5 | 1 |
-## Edsv5
-### Edsv5-Type1
-
-The Edsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Edsv5-Type1 runs [Edsv5-series](edv5-edsv5-series.md#edsv5-series) VMs.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv5-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--||-|-|
-| 64 | 119 | 768 GiB | E2ds v5 | 32 |
-| | | | E4ds v5 | 21 |
-| | | | E8ds v5 | 10 |
-| | | | E16ds v5 | 5 |
-| | | | E20ds v5 | 4 |
-| | | | E32ds v5 | 2 |
-| | | | E48ds v5 | 1 |
-| | | | E64ds v5 | 1 |
-
-## Esv5
-### Esv5-Type1
-
-The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv5-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 64 | 119 | 768 GiB | E2s v5 | 32 |
-| | | | E4s v5 | 21 |
-| | | | E8s v5 | 10 |
-| | | | E16s v5 | 5 |
-| | | | E20s v5 | 4 |
-| | | | E32s v5 | 2 |
-| | | | E48s v5 | 1 |
-| | | | E64s v5 | 1 |
-
## Easv4
### Easv4-Type1
The following packing configuration outlines the max packing of uniform VMs you
| | | | E64as v4 | 1 |
| | | | E96as v4 | 1 |
+## Ebdsv5
+### Ebdsv5-Type1
+
+The Ebdsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ebdsv5-Type1 runs [Ebdsv5-series](ebdsv5-ebsv5-series.md#ebdsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Ebdsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 64 | 119 | 768 GiB | E2bds v5 | 8 |
+| | | | E4bds v5 | 8 |
+| | | | E8bds v5 | 6 |
+| | | | E16bds v5 | 3 |
+| | | | E32bds v5 | 1 |
+| | | | E48bds v5 | 1 |
+| | | | E64bds v5 | 1 |
+
+## Ebsv5
+### Ebsv5-Type1
+
+The Ebsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ebsv5-Type1 runs [Ebsv5-series](ebdsv5-ebsv5-series.md#ebsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Ebsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 119 | 768 GiB | E2bs v5 | 8 |
+| | | | E4bs v5 | 8 |
+| | | | E8bs v5 | 6 |
+| | | | E16bs v5 | 3 |
+| | | | E32bs v5 | 1 |
+| | | | E48bs v5 | 1 |
+| | | | E64bs v5 | 1 |
+
+## ECadsv5
+### ECadsv5-Type1
+
+The ECadsv5-Type1 is a Dedicated Host SKU utilizing the AMD 3rd Generation EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The ECadsv5-Type1 runs [ECadsv5-series](ecasv5-ecadsv5-series.md#ecadsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an ECadsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 112 | 768 GiB | EC2ads v5 | 32 |
+| | | | EC4ads v5 | 21 |
+| | | | EC8ads v5 | 10 |
+| | | | EC16ads v5 | 5 |
+| | | | EC20ads v5 | 4 |
+| | | | EC32ads v5 | 3 |
+| | | | EC48ads v5 | 1 |
+| | | | EC64ads v5 | 1 |
+| | | | EC96ads v5 | 1 |
+
+## ECasv5
+### ECasv5-Type1
+
+The ECasv5-Type1 is a Dedicated Host SKU utilizing the AMD 3rd Generation EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The ECasv5-Type1 runs [ECasv5-series](ecasv5-ecadsv5-series.md#ecasv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an ECasv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 64 | 112 | 768 GiB | EC2as v5 | 32 |
+| | | | EC4as v5 | 21 |
+| | | | EC8as v5 | 10 |
+| | | | EC16as v5 | 5 |
+| | | | EC20as v5 | 4 |
+| | | | EC32as v5 | 3 |
+| | | | EC48as v5 | 1 |
+| | | | EC64as v5 | 1 |
+| | | | EC96as v5 | 1 |
+
+## Edsv5
+### Edsv5-Type1
+
+The Edsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Edsv5-Type1 runs [Edsv5-series](edv5-edsv5-series.md#edsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 119 | 768 GiB | E2ds v5 | 32 |
+| | | | E4ds v5 | 21 |
+| | | | E8ds v5 | 10 |
+| | | | E16ds v5 | 5 |
+| | | | E20ds v5 | 4 |
+| | | | E32ds v5 | 2 |
+| | | | E48ds v5 | 1 |
+| | | | E64ds v5 | 1 |
## Edsv4
### Edsv4-Type1
The following packing configuration outlines the max packing of uniform VMs you
| | | | E48ds v4 | 1 |
| | | | E64ds v4 | 1 |
+## Esv5
+### Esv5-Type1
+
+The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 119 | 768 GiB | E2s v5 | 32 |
+| | | | E4s v5 | 21 |
+| | | | E8s v5 | 10 |
+| | | | E16s v5 | 5 |
+| | | | E20s v5 | 4 |
+| | | | E32s v5 | 2 |
+| | | | E48s v5 | 1 |
+| | | | E64s v5 | 1 |
+
## Esv4
### Esv4-Type1
The following packing configuration outlines the max packing of uniform VMs you
| | | | M128-32ms | 1 |
| | | | M128-64ms | 1 |
-## Mv2
-### Msmv2-Type1
-
-The Msm-Type1 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® Platinum 8180M) processor. It offers 224 physical cores, 416 vCPUs, and 11,400 GiB of RAM. The Msmv2-Type1 runs [Mv2-series](mv2-series.md) VMs, including Msv2 and Mmsv2 VMs.
-
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Msm-Type1 host.
-
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 224 | 416 | 11,400 GiB | M208ms v2 | 2 |
-| | | | M208s v2 | 2 |
-| | | | M416-208ms v2 | 1 |
-| | | | M416-208s v2 | 1 |
-| | | | M416ms v2 | 1 |
-| | | | M416s v2 | 1 |
-
-### Msv2-Type1
+## Mdsv2
+### Mdmsv2MedMem-Type1
+The Mdmsv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8280) processor. It offers 112 physical cores, 192 vCPUs, and 4,096 GiB of RAM. The Mdmsv2MedMem-Type1 runs [Msv2-series](msv2-mdsv2-series.md) VMs, including Mdsv2 and Mdmsv2 VMs.
-The Msv2-Type1 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® Platinum 8180M) processor. It offers 224 physical cores, 416 vCPUs, and 5,700 GiB of RAM. The Msv2-Type1 runs [Mv2-series](mv2-series.md) VMs, including Msv2 and Mmsv2 VMs.
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 112 | 192 | 4,096 GiB | M32dms v2 | 4 |
+| | | | M64ds v2 | 2 |
+| | | | M64dms v2 | 2 |
+| | | | M128ds v2 | 1 |
+| | | | M128dms v2 | 1 |
-The following packing configuration outlines the max packing of uniform VMs you can put onto a Msv2-Type1 host.
+### Mdsv2MedMem-Type1
+The Mdsv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8280) processor. It offers 112 physical cores, 192 vCPUs, and 2,048 GiB of RAM. The Mdsv2MedMem-Type1 runs [Msv2-series](msv2-mdsv2-series.md) VMs, including Mdsv2 and Mdmsv2 VMs.
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 224 | 416 | 5,700 GiB | M208ms v2 | 2 |
-| | | | M208s v2 | 1 |
-| | | | M416-208s v2 | 1 |
-| | | | M416s v2 | 1 |
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 112 | 192 | 2,048 GiB | M32dms v2 | 2 |
+| | | | M64ds v2 | 2 |
+| | | | M64dms v2 | 1 |
+| | | | M128ds v2 | 1 |
## Msv2
### Mmsv2MedMem-Type1
The Msv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake
| | | | M64ms v2 | 1 |
| | | | M128s v2 | 1 |
-## Mdsv2
-### Mdmsv2MedMem-Type1
-The Mdmsv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8280) processor. It offers 112 physical cores, 192 vCPUs, and 4,096 GiB of RAM. The Mdmsv2MedMem-Type1 runs [Msv2-series](msv2-mdsv2-series.md) VMs, including Mdsv2 and Mdmsv2 VMs.
+## Mv2
+### Msmv2-Type1
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--|||-|
-| 112 | 192 | 4,096 GiB | M32dms v2 | 4 |
-| | | | M64ds v2 | 2 |
-| | | | M64dms v2 | 2 |
-| | | | M128ds v2 | 1 |
-| | | | M128dms v2 | 1 |
+The Msmv2-Type1 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® Platinum 8180M) processor. It offers 224 physical cores, 416 vCPUs, and 11,400 GiB of RAM. The Msmv2-Type1 runs [Mv2-series](mv2-series.md) VMs, including Msv2 and Mmsv2 VMs.
-### Mdsv2MedMem-Type1
-The Mdsv2MedMem-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8280) processor. It offers 112 physical cores, 192 vCPUs, and 2,048 GiB of RAM. The Mdsv2MedMem-Type1 runs [Msv2-series](msv2-mdsv2-series.md) VMs, including Mdsv2 and Mdmsv2 VMs.
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Msmv2-Type1 host.
-| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
-|-|--||--|-|
-| 112 | 192 | 2,048 GiB | M32dms v2 | 2 |
-| | | | M64ds v2 | 2 |
-| | | | M64dms v2 | 1 |
-| | | | M128ds v2 | 1 |
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 224 | 416 | 11,400 GiB | M208ms v2 | 2 |
+| | | | M208s v2 | 2 |
+| | | | M416-208ms v2 | 1 |
+| | | | M416-208s v2 | 1 |
+| | | | M416ms v2 | 1 |
+| | | | M416s v2 | 1 |
+
+### Msv2-Type1
+
+The Msv2-Type1 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® Platinum 8180M) processor. It offers 224 physical cores, 416 vCPUs, and 5,700 GiB of RAM. The Msv2-Type1 runs [Mv2-series](mv2-series.md) VMs, including Msv2 and Mmsv2 VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Msv2-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 224 | 416 | 5,700 GiB | M208ms v2 | 2 |
+| | | | M208s v2 | 1 |
+| | | | M416-208s v2 | 1 |
+| | | | M416s v2 | 1 |
## Next steps
virtual-machines Dedicated Host Storage Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-storage-optimized-skus.md
This document goes through the hardware specifications and VM packings for all s
The sizes and hardware types available for dedicated hosts vary by region. Refer to the host [pricing page](https://aka.ms/ADHPricing) to learn more.
+## Lasv3
+### Lasv3-Type1
+
+The Lasv3-Type1 is a Dedicated Host SKU utilizing the AMD 3rd Generation EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 1024 GiB of RAM. The Lasv3-Type1 runs [Lasv3-series](lasv3-series.md) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Lasv3-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 112 | 1024 GiB | L8as v3 | 10 |
+| | | | L16as v3 | 5 |
+| | | | L32as v3 | 2 |
+| | | | L48as v3 | 1 |
+| | | | L64as v3 | 1 |
+| | | | L80as v3 | 1 |
+
+## Lsv3
+### Lsv3-Type1
+
+The Lsv3-Type1 is a Dedicated Host SKU utilizing the Intel® 3rd Generation Xeon® Platinum 8370C (Ice Lake) processor. It offers 64 physical cores, 119 vCPUs, and 1024 GiB of RAM. The Lsv3-Type1 runs [Lsv3-series](lsv3-series.md) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Lsv3-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 119 | 1024 GiB | L8s v3 | 10 |
+| | | | L16s v3 | 5 |
+| | | | L32s v3 | 2 |
+| | | | L48s v3 | 1 |
+| | | | L64s v3 | 1 |
+| | | | L80s v3 | 1 |
## Lsv2
### Lsv2-Type1
-The Lsv2-Type1 is a Dedicated Host SKU utilizing the AMD's 2.55 GHz EPYC™ 7551 processor. It offers 64 physical cores, 80 vCPUs, and 640 GiB of RAM. The Lsv2-Type1 runs [Lsv2-series](lsv2-series.md) VMs.
+The Lsv2-Type1 is a Dedicated Host SKU utilizing the AMD 2.55 GHz EPYC™ 7551 processor. It offers 64 physical cores, 80 vCPUs, and 640 GiB of RAM. The Lsv2-Type1 runs [Lsv2-series](lsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Lsv2-Type1 host.
The following packing configuration outlines the max packing of uniform VMs you
- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview.
-- There is sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There's a sample template, available at [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md) that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
Title: Azure Hybrid Benefit for BYOS Linux Virtual Machines
+ Title: Azure Hybrid Benefit for BYOS Linux virtual machines
description: Learn how Azure Hybrid Benefit can provide updates and support for Linux virtual machines.
# Explore Azure Hybrid Benefit for bring-your-own-subscription Linux virtual machines
->[!IMPORTANT]
->This article explores *Azure Hybrid Benefit (AHB)* for *bring-your-own-subscription (BYOS) virtual machines*. AHB lets you switch to custom image Virtual Machines, RHEL BYOS Virtual Machines, and SLES BYOS Virtual Machines. For steps to switch in the reverse from a BYOS Virtual Machine to a RHEL PAYG Virtual Machine or SLES PAYG Virtual Machine, refer to [Hybrid Benefit for PAYG Virtual Machines](./azure-hybrid-benefit-linux.md).
+Azure Hybrid Benefit provides software updates and integrated support directly from Azure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines. Azure Hybrid Benefit for bring-your-own-subscription (BYOS) virtual machines is a licensing benefit that's currently in public preview. It lets you switch RHEL and SLES BYOS virtual machines generated from custom on-premises images or from Azure Marketplace to pay-as-you-go billing.
->[!NOTE]
->AHB for BYOS Virtual Machines is in public preview at this time. To use this option on Azure, follow the steps in the [Getting Started](#get-started) section of this article.
+>[!IMPORTANT]
+> To do the reverse and switch from a RHEL pay-as-you-go virtual machine or SLES pay-as-you-go virtual machine to a BYOS virtual machine, see [Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines](./azure-hybrid-benefit-linux.md).
+## How does Azure Hybrid Benefit work?
- AHB for bring-your-own-subscription (BYOS) Virtual Machines is a licensing benefit. It is available to RHEL and SLES custom image Virtual Machines (Virtual Machines generated from on-premises images), and to RHEL and SLES Marketplace BYOS Virtual Machines. AHB provides software updates and integrated support directly from Azure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (Virtual Machines).
+Azure Hybrid Benefit converts BYOS billing to pay-as-you-go, so that you pay only the pay-as-you-go software fee. You don't have to restart a machine for Azure Hybrid Benefit to be applied.
-## How does AHB work?
- When you switch to AHB you get software updates and integrated support for Marketplace BYOS or on-premises migrated RHEL and SLES BYOS Virtual Machines. AHB converts BYOS billing to *pay-as-you-go (PAYG)*, so that you pay only PAYG software fees. You don't have to reboot for AHBs to be applied.
+## Which Linux virtual machines qualify for Azure Hybrid Benefit?
-## Which Linux Virtual Machines qualifies for AHB for BYOS Virtual Machines?
+Azure Hybrid Benefit for BYOS virtual machines is available to all RHEL and SLES virtual machines that come from a custom image. It's also available to all RHEL and SLES BYOS virtual machines that come from an Azure Marketplace image.
-**AHB for BYOS Virtual Machines** is available to all RHEL and SLES custom image Virtual Machines, as well as RHEL and SLES Marketplace BYOS Virtual Machines. Azure Dedicated Host instances and SQL hybrid benefits aren't eligible for AHB if you're already using it with Linux Virtual Machines. Virtual Machine Scale Sets are Reserved Instances (RIs) and can't use AHB BYOS.
+Azure Dedicated Host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you're already using Azure Hybrid Benefit with Linux virtual machines. Virtual machine scale sets are reserved instances, so they also can't use Azure Hybrid Benefit for BYOS virtual machines.
## Get started
-### AHB for Red Hat customers
+### Azure Hybrid Benefit for Red Hat customers
-To start using AHB for Red Hat:
+To start using Azure Hybrid Benefit for Red Hat:
-1. Install the 'AHBForRHEL' extension on the virtual machine on which you wish to apply the AHB BYOS benefit. You can do this installation via Azure command-line interface (CLI) or PowerShell.
+1. Install the `AHBForRHEL` extension on the virtual machine on which you want to apply the Azure Hybrid Benefit BYOS benefit. You can do this installation via the Azure CLI or PowerShell.
+1. Depending on the software updates that you want, change the license type to a relevant value. Here are the available license type values and the software updates associated with them:
-1. Depending on the software updates you want, change the license type to relevant value. Here are the available license type values and the software updates associated with them:
-
- | License Type | Software Updates | Allowed Virtual Machines|
+ | License type | Software updates | Allowed virtual machines|
||||
- | RHEL_BASE | Installs Red Hat regular/base repositories into your virtual machine. | RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
- | RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories into your virtual machine. | RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
- | RHEL_SAPAPPS | Installs RHEL for SAP Business Apps repositories into your virtual machine. | RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
- | RHEL_SAPHA | Installs RHEL for SAP with HA repositories into your virtual machine. | RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
- | RHEL_BASESAPAPPS | Installs RHEL regular/base SAP Business Apps repositories into your virtual machine. | RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
- | RHEL_BASESAPHA | Installs regular/base RHEL for SAP with HA repositories into your virtual machine.| RHEL BYOS Virtual Machines, RHEL custom image Virtual Machines|
-
-1. Wait for one hour for the extension to read the license type value and install the repositories.
+ | RHEL_BASE | Installs Red Hat regular/base repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
+ | RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
+ | RHEL_SAPAPPS | Installs RHEL for SAP Business Apps repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
+ | RHEL_SAPHA | Installs RHEL for SAP with High Availability (HA) repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
+ | RHEL_BASESAPAPPS | Installs RHEL regular/base SAP Business Apps repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
+ | RHEL_BASESAPHA | Installs regular/base RHEL for SAP with HA repositories on your virtual machine.| RHEL BYOS virtual machines, RHEL custom image virtual machines|
-1. You should now be connected to Azure Red Hat Update and the relevant repositories will be installed in your machine.
+1. Wait one hour for the extension to read the license type value and install the repositories.
-1. In case the extension isn't running by itself, you can run it on demand as well.
+ > [!NOTE]
+ > If the extension isn't running by itself, you can run it on demand.
-1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This action will remove all RHUI repositories from your virtual machine and stop the billing.
+1. You should now be connected to Red Hat Update Infrastructure. The relevant repositories will be installed on your machine.
->[!Note]
-> In the unlikely event that the extension is unable to install repositories or there are any other issues, switch the license type back to empty and reach out to Support. This ensures that you don't get billed for software updates.
+1. If you want to switch back to the bring-your-own-subscription model, just change the license type to `None` and run the extension. This action will remove all Red Hat Update Infrastructure (RHUI) repositories from your virtual machine and stop the billing.
+> [!NOTE]
+> In the unlikely event that the extension can't install repositories or there are any other issues, switch the license type back to empty and reach out to Microsoft support. This ensures that you don't get billed for software updates.
-### AHB for SUSE customers
+### Azure Hybrid Benefit for SUSE customers
-To start using AHB for SLES Virtual Machines:
+To start using Azure Hybrid Benefit for SLES virtual machines:
-1. Install the AHB for BYOS Virtual Machines extension on the Virtual Machine that will use it.
-1. Change the license type to the value relevant to the software updates you want. Here are the available license type values and the software updates associated with them:
+1. Install the `AHBForSLES` extension on the virtual machine that will use it.
+1. Change the license type to the value that reflects the software updates you want. Here are the available license type values and the software updates associated with them:
- | License Type | Software Updates | Allowed Virtual Machines|
+ | License type | Software updates | Allowed virtual machines|
||||
- | SLES | Installs SLES standard repositories into your virtual machine. | SLES BYOS Virtual Machines, SLES custom image Virtual Machines|
- | SLES_SAP | Installs SLES SAP repositories into your virtual machine. | SLES SAP BYOS Virtual Machines, SLES custom image Virtual Machines|
- | SLES_HPC | Installs SLES High Performance Compute related repositories into your virtual machine. | SLES HPC BYOS Virtual Machines, SLES custom image Virtual Machines|
+ | SLES | Installs SLES Standard repositories on your virtual machine. | SLES BYOS virtual machines, SLES custom image virtual machines|
+ | SLES_SAP | Installs SLES SAP repositories on your virtual machine. | SLES SAP BYOS virtual machines, SLES custom image virtual machines|
+ | SLES_HPC | Installs SLES High Performance Computing repositories on your virtual machine. | SLES HPC BYOS virtual machines, SLES custom image virtual machines|
-1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+1. Wait five minutes for the extension to read the license type value and install the repositories.
-1. You should now be connected to the SUSE Public Cloud Update Infrastructure on Azure and the relevant repositories will be installed in your machine.
+ > [!NOTE]
+ > If the extension isn't running by itself, you can run it on demand.
-1. In case the extension isn't running by itself, you can run it on demand as well.
+1. You should now be connected to the SUSE public cloud update infrastructure on Azure. The relevant repositories will be installed on your machine.
-1. In case you want to switch back to the bring-your-own-subscription model, just change the license type to 'None' and run the extension. This action will remove all repositories from your virtual machine and stop the billing.
+1. If you want to switch back to the bring-your-own-subscription model, just change the license type to `None` and run the extension. This action will remove all repositories from your virtual machine and stop the billing.
-## How to enable and disable AHB for RHEL
+## Enable Azure Hybrid Benefit for RHEL
-You can install the `AHBForRHEL` extension. After successfully installing the extension, you can use the `az vm update` command to update your existing license type on your running Virtual Machines. For SLES Virtual Machines, run the command and set `--license-type` parameter to one of the following license types: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS` or `RHEL_BASESAPHA`.
+After you successfully install the `AHBForRHEL` extension, you can use the `az vm update` command to update the existing license type on your running virtual machines. For RHEL virtual machines, run the command and set the `--license-type` parameter to one of the following license types: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS`, or `RHEL_BASESAPHA`.
-
-### How to enable AHB for RHEL using a CLI
-1. Install the AHB extension on a Virtual Machine that is up and running. You can use Azure portal or the following command via Azure CLI:
+### Enable Azure Hybrid Benefit for RHEL by using the Azure CLI
+1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:
```azurecli
az vm extension set -n AHBForRHEL --publisher Microsoft.Azure.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
```
-1. Once, the extension is installed successfully, change the license type based on what you require:
+1. After the extension is installed successfully, change the license type based on what you need:
```azurecli
- # This will enable AHB to fetch software updates for RHEL base/regular repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL base/regular repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASE
- # This will enable AHB to fetch software updates for RHEL EUS repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL EUS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_EUS
- # This will enable AHB to fetch software updates for RHEL SAP APPS repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL SAP APPS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPAPPS
- # This will enable AHB to fetch software updates for RHEL SAP HA repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL SAP HA repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPHA
- # This will enable AHB to fetch software updates for RHEL BASE SAP APPS repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL BASE SAP APPS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPAPPS
- # This will enable AHB to fetch software updates for RHEL BASE SAP HA repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL BASE SAP HA repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPHA
```
-1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+1. Wait five minutes for the extension to read the license type value and install the repositories.
-1. You should now be connected to Azure Red Hat Update Infrastructure and the relevant repositories will be installed in your machine. You can validate the same by performing the command below on your Virtual Machine:
+1. You should now be connected to Red Hat Update Infrastructure. The relevant repositories will be installed on your machine. You can validate the installation by running the following command on your virtual machine:
```bash
yum repolist
```
- 1. In case the extension isn't running by itself, you can try the below command on the Virtual Machine:
+1. If the extension isn't running by itself, you can try the following command on the virtual machine:
```bash
- systemctl start azure-hybrid-benefit.service
+ systemctl start azure-hybrid-benefit.service
```
- 1. You can use the below command in your RHEL Virtual Machine to get the current status of the service:
+1. You can use the following command in your RHEL virtual machine to get the current status of the service:
```bash
- ahb-service -status
+ ahb-service -status
```
-## How to enable and disable AHB for SLES
+## Enable and disable Azure Hybrid Benefit for SLES
+
+After you successfully install the `AHBForSLES` extension, you can use the `az vm update` command to update the existing license type on your running virtual machines. For SLES virtual machines, run the command and set the `--license-type` parameter to one of the following license types: `SLES_STANDARD`, `SLES_SAP`, or `SLES_HPC`.
-You can install the `AHBForSLES` extension. After successfully installing the extension,
-you can use the `az vm update` command to update existing license type on running Virtual Machines. For SLES Virtual Machines, run the command and set `--license-type` parameter to one of the following license types: `SLES_STANDARD`, `SLES_SAP` or `SLES_HPC`.
+### Enable Azure Hybrid Benefit for SLES by using the Azure CLI
+1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:
-### How to enable AHB for SLES using a CLI
-1. Install the AHB extension on a Virtual Machine that is up and running, with the portal, or via Azure CLI using the command below:
```azurecli
az vm extension set -n AHBForSLES --publisher SUSE.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
```
-1. Once, the extension is installed successfully, change the license type based on what you require:
+1. After the extension is installed successfully, change the license type based on what you need:
```azurecli
- # This will enable AHB to fetch software updates for SLES STANDARD repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for SLES Standard repositories
az vm update -g myResourceGroup -n myVmName --license-type SLES
- # This will enable AHB to fetch software updates for SLES SAP repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for SLES SAP repositories
az vm update -g myResourceGroup -n myVmName --license-type SLES_SAP
- # This will enable AHB to fetch software updates for SLES HPC repositories
+ # This will enable Azure Hybrid Benefit to fetch software updates for SLES HPC repositories
az vm update -g myResourceGroup -n myVmName --license-type SLES_HPC
```
-1. Wait for 5 minutes for the extension to read the license type value and install the repositories.
+1. Wait five minutes for the extension to read the license type value and install the repositories.
+
+1. You should now be connected to the SUSE public cloud update infrastructure on Azure. The relevant repositories will be installed on your machine. You can verify this change by running the following command to list SUSE repositories on your machine:
-1. You should now be connected to the SUSE Public Cloud Update Infrastructure on Azure and the relevant repositories will be installed in your machine. You can verify this change by performing the command below on your Virtual Machine, which lists SUSE repositories on your Virtual Machine:
```bash
zypper repos
```
-### How to disable AHB using a CLI
-1. Ensure that the AHB extension is installed on your Virtual Machine.
-1. To disable AHB, follow below command:
+### Disable Azure Hybrid Benefit by using the Azure CLI
+1. Ensure that the Azure Hybrid Benefit extension is installed on your virtual machine.
+1. To disable Azure Hybrid Benefit, use the following command:
```azurecli
- # This will disable AHB on a Virtual Machine
+ # This will disable Azure Hybrid Benefit on a virtual machine
az vm update -g myResourceGroup -n myVmName --license-type None
```
-## How to check the AHB status of a Virtual Machine
-To check the AHB status of a Virtual Machine, do the following:
-1. Ensure that the AHB extension is installed:
-1. You can view the AHB status of a Virtual Machine by using the Azure CLI or by using Azure Instance Metadata Service.
-
- You can use the following command for this purpose. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is one of the following, your Virtual Machine has AHB enabled:
- `RHEL_BASE`, `RHEL_EUS`, `RHEL_BASESAPAPPS`, `RHEL_SAPHA`, `RHEL_BASESAPAPPS`, `RHEL_BASESAPHA`, `SLES`, `SLES_SAP`, `SLES_HPC`.
+## Check the Azure Hybrid Benefit status of a virtual machine
+1. Ensure that the Azure Hybrid Benefit extension is installed.
+1. Run the following command in the Azure CLI. (You can also view the Azure Hybrid Benefit status by using Azure Instance Metadata Service.)
```azurecli
az vm get-instance-view -g MyResourceGroup -n MyVm
```
+1. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is one of the following, your virtual machine has Azure Hybrid Benefit enabled:
+
+ `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPAPPS`, `RHEL_SAPHA`, `RHEL_BASESAPAPPS`, `RHEL_BASESAPHA`, `SLES`, `SLES_SAP`, `SLES_HPC`
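If you only need the value itself, a JMESPath `--query` filter can trim the response to that single field. This is a convenience sketch rather than part of the documented steps:

```azurecli
# Print just the license type (for example, RHEL_BASE); output is empty if none is set
az vm get-instance-view -g MyResourceGroup -n MyVm --query licenseType -o tsv
```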
+
## Compliance

### Red Hat compliance
-Customers who use AHB for BYOS Virtual Machines for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
+Customers who use Azure Hybrid Benefit for BYOS virtual machines for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
-### Explore AHB for SUSE
+### SUSE compliance
-Customers who use AHB for BYOS Virtual Machines for SLES and want more for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and AHB](https://aka.ms/suse-ahb).
+If you use Azure Hybrid Benefit for BYOS virtual machines for SLES and want more information about moving from SLES pay-as-you-go to BYOS, or moving from SLES BYOS to pay-as-you-go, see [Azure Hybrid Benefit Support](https://aka.ms/suse-ahb) on the SUSE website.
## Frequently asked questions
-*Q: What is the licensing cost I pay with AHB for BYOS Virtual Machines?*
+*Q: What is the licensing cost I pay with Azure Hybrid Benefit for BYOS virtual machines?*
-A: On using AHB for BYOS Virtual Machines, you'll essentially convert bring-your-own-subscription (BYOS) billing model to pay-as-you-go (PAYG) billing model. Hence, you'll be paying similar to PAYG Virtual Machines for software subscription cost. The table below maps the PAYG flavors available on Azure and links to pricing page to help you understand the cost associated with AHB for BYOS Virtual Machines.
+A: When you start using Azure Hybrid Benefit for BYOS virtual machines, you'll essentially convert the bring-your-own-subscription billing model to a pay-as-you-go billing model. What you pay will be similar to a software subscription cost for pay-as-you-go virtual machines.
-| License type | Relevant PAYG Virtual Machine image & Pricing Link (Keep the AHB for PAYG filter off) |
+The following table maps the pay-as-you-go options on Azure and links to pricing information to help you understand the cost associated with Azure Hybrid Benefit for BYOS virtual machines. When you go to the pricing pages, keep the Azure Hybrid Benefit for pay-as-you-go filter off.
+
+| License type | Relevant pay-as-you-go virtual machine image and pricing link |
|||
| RHEL_BASE | [Red Hat Enterprise Linux](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) |
| RHEL_SAPAPPS | [RHEL for SAP Business Applications](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-business/) |
| RHEL_BASESAPAPPS | [RHEL for SAP Business Applications](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-business/) |
| RHEL_BASESAPHA | [RHEL for SAP with HA](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-ha/) |
| RHEL_EUS | [Red Hat Enterprise Linux](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) |
-| SLES | [SLES Standard](https://azure.microsoft.com/pricing/details/virtual-machines/sles-standard/) |
-| SLES_SAP | [SLES SAP](https://azure.microsoft.com/pricing/details/virtual-machines/sles-sap/) |
-| SLES_HPC | [SLES HPC](https://azure.microsoft.com/pricing/details/virtual-machines/sles-hpc-standard/) |
+| SLES | [SLES](https://azure.microsoft.com/pricing/details/virtual-machines/sles-standard/) |
+| SLES_SAP | [SLES for SAP](https://azure.microsoft.com/pricing/details/virtual-machines/sles-sap/) |
+| SLES_HPC | [SLE HPC](https://azure.microsoft.com/pricing/details/virtual-machines/sles-hpc-standard/) |
*Q: Can I use a license type designated for RHEL (such as `RHEL_BASE`) with a SLES image, or vice versa?*
-A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your Virtual Machine will fail and you might end up getting billed incorrectly. However, if you accidentally enter the wrong license type, either changing the license type to empty will remove the billing or updating your Virtual Machine again to the correct license type will still enable AHB.
+A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your virtual machine will fail, and you might end up getting billed incorrectly.
-*Q: What are the supported versions for RHEL with AHB for BYOS Virtual Machines?*
+If you accidentally enter the wrong license type, remove the billing by changing the license type to empty. Then update your virtual machine to the correct license type to enable Azure Hybrid Benefit.
-A: RHEL versions greater than 7.4 are supported with AHB for BYOS Virtual Machines.
+*Q: What are the supported versions for RHEL with Azure Hybrid Benefit for BYOS virtual machines?*
-*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to PAYG?*
+A: Azure Hybrid Benefit for BYOS virtual machines supports RHEL versions later than 7.4.
-A: Yes, this capability supports image from on-premises to Azure. Please [follow steps shared here](#get-started).
+*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to pay-as-you-go?*
-*Q: Can I use AHB for BYOS Virtual Machines on RHEL and SLES PAYG Marketplace Virtual Machines?*
+A: Yes, this capability supports images uploaded from on-premises to Azure. Follow the steps in the [Get started](#get-started) section earlier in this article.
-A: No, as these Virtual Machines are already pay-as-you-go (PAYG). However, with AHB v1 and v2 you can use the license type of `RHEL_BYOS` for RHEL Virtual Machines and `SLES_BYOS` for conversions of RHEL and SLES PAYG Marketplace Virtual Machines. You can read more on [Hybrid Benefit for PAYG Virtual Machines here.](./azure-hybrid-benefit-linux.md)
+*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on RHEL and SLES pay-as-you-go Azure Marketplace virtual machines?*
-*Q: Can I use AHB for BYOS Virtual Machines on virtual machine scale sets for RHEL and SLES?*
+A: No, because these virtual machines are already pay-as-you-go. However, with Azure Hybrid Benefit, you can use the license type `RHEL_BYOS` for RHEL virtual machines and `SLES_BYOS` for SLES virtual machines to convert pay-as-you-go Azure Marketplace virtual machines to BYOS billing. For more information, see [Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines](./azure-hybrid-benefit-linux.md).
-A: No, Hybrid Benefit for BYOS Virtual Machines isn't available for virtual machine scale sets currently.
+*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on virtual machine scale sets for RHEL and SLES?*
-*Q: Can I use AHB for BYOS Virtual Machines on a virtual machine deployed for SQL Server on RHEL images?*
+A: No. Hybrid Benefit for BYOS virtual machines isn't currently available for virtual machine scale sets.
-A: No, you can't. There's no plan for supporting these virtual machines.
+*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on a virtual machine deployed for SQL Server on RHEL images?*
-*Q: Can I use AHB for BYOS Virtual Machines on my RHEL Virtual Data Center subscription?*
+A: No, you can't. There's no plan for supporting these virtual machines.
-A: No, you can't. VDC isn't supported on Azure at all, including AHB.
+*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on my RHEL for Virtual Datacenters subscription?*
+A: No. RHEL for Virtual Datacenters isn't supported on Azure at all, including Azure Hybrid Benefit.
## Next steps
-* [Learn how to convert RHEL and SLES PAYG Virtual Machines to BYOS using Hybrid Benefit for PAYG Virtual Machines](./azure-hybrid-benefit-linux.md)
+* [Learn how to convert RHEL and SLES pay-as-you-go virtual machines to BYOS by using Azure Hybrid Benefit](./azure-hybrid-benefit-linux.md)
-* [Learn how to create and update Virtual Machines and add license types (RHEL_BYOS, SLES_BYOS) for Hybrid Benefit by using the Azure CLI](/cli/azure/vm)
+* [Learn how to create and update virtual machines and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vm)
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for PAYG Linux Virtual Machines
+ Title: Azure Hybrid Benefit for pay-as-you-go Linux virtual machines
description: Learn how Azure Hybrid Benefit can save you money on Linux virtual machines.
# Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines
->[!IMPORTANT]
->This article explores *Azure Hybrid Benefit (AHB)* for *pay-as-you-go (PAYG)* *virtual machines or virtual machine scale sets (Flexible orchestration mode only)*. It explores how to switch your Virtual Machines to *Red Hat Enterprise Linux (RHEL)* PAYG and *SUSE Linux Enterprise Server (SLES)* PAYG billing. To do the reverse and switch to *bring-your-own-subscription (BYOS)* billing, visit [Azure Hybrid Benefit for BYOS Virtual Machines](./azure-hybrid-benefit-byos-linux.md).
+Azure Hybrid Benefit for pay-as-you-go virtual machines or virtual machine scale sets (Flexible orchestration mode only) is an optional licensing benefit. It significantly reduces the cost of running Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines in the cloud.
+
+This article explores how to use Azure Hybrid Benefit to switch your virtual machines or virtual machine scale sets (Flexible orchestration mode only) to RHEL bring-your-own-subscription (BYOS) and SLES BYOS billing. With this benefit, your RHEL or SLES subscription covers your software fee. So you pay only infrastructure costs for your virtual machine.
-AHB for pay-as-you-go (PAYG) virtual machines or virtual machine scale sets (Flexible orchestration mode only) is an optional licensing benefit. It significantly reduces the cost of running Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) Virtual Machines in the cloud. With this benefit, your RHEL or SLES subscription covers your software fee. So you only pay infrastructure costs for your Virtual Machine. This benefit is available for all RHEL and SLES Marketplace PAYG images.
+>[!IMPORTANT]
+>To do the reverse and switch from BYOS to pay-as-you-go billing, see [Explore Azure Hybrid Benefit for bring-your-own-subscription Linux virtual machines](./azure-hybrid-benefit-byos-linux.md).
-## How does AHB work?
+## How does Azure Hybrid Benefit work?
-You can convert existing RHEL and SLES PAYG Virtual Machines to bring-your-own-subscription (BYOS) billing using AHB. You can switch PAYG Virtual Machines to BYOS billing without having to redeploy. Virtual Machines deployed from PAYG images on Azure pay both an infrastructure fee and a software fee. When you apply AHB, you pay only infrastructure costs for your Virtual Machine.
+Virtual machines deployed from pay-as-you-go images on Azure incur both an infrastructure fee and a software fee. You can convert existing RHEL and SLES pay-as-you-go virtual machines to BYOS billing by using Azure Hybrid Benefit without having to redeploy.
-After you apply AHB to your RHEL or SLES Virtual Machine, you are no longer charged a software fee. Your Virtual Machine is charged a BYOS fee instead. You can also switch back from AHB to PAYG billing at any time.
+After you apply Azure Hybrid Benefit to your RHEL or SLES virtual machine, you're no longer charged a software fee. Your virtual machine is charged a BYOS fee instead. You can use Azure Hybrid Benefit to switch back to pay-as-you-go billing at any time.
-## How to apply AHB to your PAYG Virtual Machines
+## Which Linux virtual machines qualify for Azure Hybrid Benefit?
-**AHB for PAYG Virtual Machines** is available for all RHEL and SLES PAYG images in Azure Marketplace.
+Azure Hybrid Benefit for pay-as-you-go virtual machines is available for all RHEL and SLES pay-as-you-go images in Azure Marketplace.
-Azure Dedicated Host instances, and SQL hybrid benefits are not eligible for AHB if you already use AHB with Linux Virtual Machines.
+Azure Dedicated Host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines.
## Get started
-### How to apply AHB to Red Hat
+### Apply Azure Hybrid Benefit to Red Hat
-AHB for PAYG Virtual Machines for RHEL is available to Red Hat customers who meet the following criteria:
+Azure Hybrid Benefit for pay-as-you-go virtual machines for RHEL is available to Red Hat customers who meet the following criteria:
- Have active or unused RHEL subscriptions that are eligible for use in Azure
-- Have enabled one or more of their subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
-
-> [!IMPORTANT]
-> Ensure the correct subscription has been enabled on the [cloud-access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program.
+- Have correctly enabled one or more of their subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
-To start using AHB for Red Hat:
+To start using Azure Hybrid Benefit for Red Hat:
1. Enable one or more of your eligible RHEL subscriptions for use in Azure by using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud).
- The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process will then be permitted to use AHB.
-1. Apply AHB for PAYG Virtual Machines to any RHEL PAYG Virtual Machines that you deploy in Azure Marketplace PAYG images. You can use Azure portal or Azure command-line interface (CLI) to enable AHB.
-1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) for configuring update sources for your RHEL Virtual Machines and for RHEL subscription compliance guidelines.
+ The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process will then be permitted to use Azure Hybrid Benefit.
+1. Apply Azure Hybrid Benefit to any RHEL pay-as-you-go virtual machines that you deploy in Azure Marketplace pay-as-you-go images. You can use the Azure portal or the Azure CLI to enable Azure Hybrid Benefit.
+1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) to configure update sources for your RHEL virtual machines and for RHEL subscription compliance guidelines.
-### How to apply AHB to SUSE
+### Apply Azure Hybrid Benefit to SUSE
-AHB for PAYG Virtual Machines for SUSE is available to customers who have:
+Azure Hybrid Benefit for pay-as-you-go virtual machines for SUSE is available to customers who have:
- Unused SUSE subscriptions that are eligible to use in Azure.
- One or more active SUSE subscriptions to use on-premises that should be moved to Azure.
> [!IMPORTANT]
> Ensure that you select the correct subscription to use in Azure.
-To start using AHB for SUSE:
+To start using Azure Hybrid Benefit for SUSE:
1. Register the subscription that you purchased from SUSE or a SUSE distributor with the [SUSE Customer Center](https://scc.suse.com).
2. Activate the subscription in the SUSE Customer Center.
-3. Register your Virtual Machines that are receiving AHB with the SUSE Customer Center to get the updates from the SUSE Customer Center.
+3. Register your virtual machines that are receiving Azure Hybrid Benefit with the SUSE Customer Center to get the updates from the SUSE Customer Center.
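For example, registering a machine with the SUSE Customer Center is typically done on the VM itself with `SUSEConnect`. This is a hedged sketch; the registration code and email are placeholders:

```bash
# Register this SLES system with the SUSE Customer Center
sudo SUSEConnect --regcode <your-registration-code> --email <your-email>
```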
-## How to enable and disable AHB in Azure portal
+## Enable Azure Hybrid Benefit in the Azure portal
-In Azure portal, you can enable AHB on existing Virtual Machines or on new Virtual Machines at the time that you create them.
+In the Azure portal, you can enable Azure Hybrid Benefit on existing virtual machines or on new virtual machines at the time that you create them.
-### How to enable AHB on an existing Virtual Machine in Azure portal
+### Enable Azure Hybrid Benefit on an existing virtual machine in the Azure portal
-To enable AHB on an existing Virtual Machine:
+To enable Azure Hybrid Benefit on an existing virtual machine:
-1. Got to [Azure portal](https://portal.azure.com/).
-1. Open the Virtual Machine page on which you want to apply the conversion.
-1. Go the **Configuration** option on the left. You will see the Licensing section. To enable the AHB conversion, check the 'Yes' radio button and check the Confirmation checkbox.
-![AHB Configuration blade after creating](./media/azure-hybrid-benefit/create-configuration-blade.png)
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Open the virtual machine page on which you want to apply the conversion.
+1. Go to **Configuration** > **Licensing**. To enable the Azure Hybrid Benefit conversion, select **Yes**, and then select the confirmation checkbox.
-### How to enable AHB when you create a Virtual Machine in Azure portal
+![Screenshot of the Azure portal that shows the Licensing section of the configuration page for Azure Hybrid Benefit.](./media/azure-hybrid-benefit/create-configuration-blade.png)
-To enable AHB when you create a Virtual Machine (the SUSE workflow is the same as the RHEL example shown here):
+### Enable Azure Hybrid Benefit when you create a virtual machine in the Azure portal
-1. Go to [Azure portal](https://portal.azure.com/).
-1. Go to 'Create a Virtual Machine' page in the portal.
- ![AHB while creating a Virtual Machine](./media/azure-hybrid-benefit/create-vm-ahb.png)
-1. Click on the checkbox to enable AHB conversion and use cloud access licenses.
- ![AHB while creating a Virtual Machine Checkbox](./media/azure-hybrid-benefit/create-vm-ahb-checkbox.png)
-1. Create a Virtual Machine following the next set of instructions.
-1. Check the **Configuration** blade and you will see the option enabled.
-![AHB Configuration blade after creating](./media/azure-hybrid-benefit/create-configuration-blade.png)
+To enable Azure Hybrid Benefit when you create a virtual machine, use the following procedure. (The SUSE workflow is the same as the RHEL example shown here.)
-## How to enable and disable AHB using Azure CLI
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Go to **Create a virtual machine**.
+
+ ![Screenshot of the portal page for creating a virtual machine.](./media/azure-hybrid-benefit/create-vm-ahb.png)
+1. In the **Licensing** section, select the checkbox that asks if you want to use an existing RHEL subscription and the checkbox to confirm that your subscription is eligible.
+
+ ![Screenshot of the Azure portal that shows checkboxes selected for licensing.](./media/azure-hybrid-benefit/create-vm-ahb-checkbox.png)
+1. Create a virtual machine by following the next set of instructions.
+1. On the **Configuration** pane, confirm that the option is enabled.
+
+ ![Screenshot of the Azure Hybrid Benefit configuration pane after you create a virtual machine.](./media/azure-hybrid-benefit/create-configuration-blade.png)
-You can use the `az vm update` command to update existing Virtual Machines. For RHEL Virtual Machines, run the command with a `--license-type` parameter of `RHEL_BYOS`. For SLES Virtual Machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
+## Enable and disable Azure Hybrid Benefit by using the Azure CLI
-### How to enable AHB using a CLI
+You can use the `az vm update` command to update existing virtual machines. For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`. For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
+
+### Enable Azure Hybrid Benefit by using the Azure CLI
```azurecli
-# This will enable AHB on a RHEL Virtual Machine
+# This will enable Azure Hybrid Benefit on a RHEL virtual machine
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
-# This will enable AHB on a SLES Virtual Machine
+# This will enable Azure Hybrid Benefit on a SLES virtual machine
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
```
-### How to disable AHB using a CLI
-To disable AHB, use a `--license-type` value of `None`:
+### Disable Azure Hybrid Benefit by using the Azure CLI
+To disable Azure Hybrid Benefit, use a `--license-type` value of `None`:
```azurecli
-# This will disable AHB on a Virtual Machine
+# This will disable Azure Hybrid Benefit on a virtual machine
az vm update -g myResourceGroup -n myVmName --license-type None
```
-### How to enable AHB on a large number of Virtual Machines using a CLI
-To enable AHB on a large number of Virtual Machines, you can use the `--ids` parameter in the Azure CLI:
+### Enable Azure Hybrid Benefit on a large number of virtual machines by using the Azure CLI
+To enable Azure Hybrid Benefit on a large number of virtual machines, you can use the `--ids` parameter in the Azure CLI:
```azurecli
-# This will enable AHB on a RHEL Virtual Machine. In this example, ids.txt is an
+# This will enable Azure Hybrid Benefit on a RHEL virtual machine. In this example, ids.txt is an
# existing text file that contains a delimited list of resource IDs corresponding
-# to the Virtual Machines using AHB
+# to the virtual machines using Azure Hybrid Benefit
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS --ids $(cat ids.txt)
```
The following examples show two methods of getting a list of resource IDs: one at the resource group level and one at the subscription level.

```azurecli
# To get a list of all the resource IDs in a resource group:
$(az vm list -g MyResourceGroup --query "[].id" -o tsv)
-# To get a list of all the resource IDs of Virtual Machines in a subscription:
+# To get a list of all the resource IDs of virtual machines in a subscription:
az vm list -o json | jq '.[] | {VMName: .name, ResourceID: .id}'
```
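Assuming the listings above, the two steps can be collapsed into one command. In this sketch, the resource group and license type are placeholders; it simply feeds the resource-group listing into the bulk update:

```azurecli
# Enable Azure Hybrid Benefit for every virtual machine in one resource group
az vm update --license-type RHEL_BYOS \
  --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
```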
-## How to apply AHB to PAYG Virtual Machines at creation time
-In addition to applying the AHB for PAYG Virtual Machines to existing pay-as-you-go Virtual Machines, you can invoke it at the time of Virtual Machine creation. AHBs of doing so are threefold:
-- You can provision both PAYG and BYOS Virtual Machines by using the same image and process.
-- It enables future licensing mode changes, something not available with a BYOS-only image or if you bring your own Virtual Machine.
-- The Virtual Machine will be connected to Red Hat Update Infrastructure (RHUI) by default, to ensure that it remains up to date and secure. You can change the updated mechanism after deployment at any time.
+## Apply Azure Hybrid Benefit to pay-as-you-go virtual machines at creation time
+In addition to applying Azure Hybrid Benefit to existing pay-as-you-go virtual machines, you can invoke it at the time of virtual machine creation, as the sketch after this list shows. Benefits of doing so are threefold:
+- You can provision both pay-as-you-go and BYOS virtual machines by using the same image and process.
+- It enables future licensing mode changes. These changes aren't available with a BYOS-only image or if you bring your own virtual machine.
+- The virtual machine will be connected to Red Hat Update Infrastructure (RHUI) by default, to help keep it up to date and secure. You can change the update mechanism after deployment at any time.
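As a hedged illustration (the image URN and resource names below are placeholder assumptions, not values from this article), invoking the benefit at creation time with the Azure CLI looks like this:

```azurecli
# Create a VM from a pay-as-you-go RHEL Marketplace image and apply BYOS billing in one step
az vm create -g myResourceGroup -n myVmName \
  --image RedHat:RHEL:8-lvm-gen2:latest \
  --license-type RHEL_BYOS
```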
-## How to check the AHB status of a Virtual Machine
-You can view the AHB for PAYG Virtual Machines status of a Virtual Machine by using the Azure CLI or by using Azure Instance Metadata Service.
+## Check the Azure Hybrid Benefit status of a virtual machine
+You can view the Azure Hybrid Benefit status of a virtual machine by using the Azure CLI or by using Azure Instance Metadata Service.
-### A status check using Azure CLI
+### Check status by using the Azure CLI
-You can use the `az vm get-instance-view` command for this purpose. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is `RHEL_BYOS` or `SLES_BYOS`, your Virtual Machine has AHB enabled.
+You can use the `az vm get-instance-view` command to check the status. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is `RHEL_BYOS` or `SLES_BYOS`, your virtual machine has Azure Hybrid Benefit enabled.
```azurecli
az vm get-instance-view -g MyResourceGroup -n MyVm
```
-### A status check using Azure Instance Metadata Service
+### Check status by using Azure Instance Metadata Service
-From within the Virtual Machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the Virtual Machine's `licenseType` value. A `licenseType` value of `RHEL_BYOS` or `SLES_BYOS` will indicate that your Virtual Machine has AHB enabled. [Learn more about attested metadata](./instance-metadata-service.md#attested-data).
+From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's `licenseType` value. A `licenseType` value of `RHEL_BYOS` or `SLES_BYOS` indicates that your virtual machine has Azure Hybrid Benefit enabled. [Learn more about attested metadata](./instance-metadata-service.md#attested-data).
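As a sketch, you can read this value from inside the VM with a plain HTTP call. The `api-version` below is an assumption, and for brevity the example uses the non-attested instance endpoint, which also surfaces `licenseType`:

```bash
# Query Azure Instance Metadata Service from within the VM;
# prints, for example, RHEL_BYOS when Azure Hybrid Benefit is enabled
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute/licenseType?api-version=2021-02-01&format=text"
```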
## Compliance

### Red Hat compliance
-Customers who use AHB for PAYG RHEL Virtual Machines agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offers.
+Customers who use Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offers.
-Customers who use AHB for PAYG RHEL Virtual Machines have three options for providing software updates and patches to those Virtual Machines:
+Customers who use Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines have three options for providing software updates and patches to those virtual machines:
- [Red Hat Update Infrastructure](../workloads/redhat/redhat-rhui.md) (default option)
- Red Hat Satellite Server
- Red Hat Subscription Manager
-Customers who choose the RHUI option can continue to use RHUI as the main update source for their AHB for PAYG RHEL Virtual Machines without attaching RHEL subscriptions to those Virtual Machines. Customers who choose the RHUI option are responsible for ensuring RHEL subscription compliance.
+Customers who choose the RHUI option can continue to use RHUI as the main update source for Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines without attaching RHEL subscriptions to those virtual machines. Customers who choose the RHUI option are responsible for ensuring RHEL subscription compliance.
-Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a Cloud Access enabled RHEL subscription to their AHB for PAYG RHEL Virtual Machines.
+Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a cloud-access-enabled RHEL subscription to Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines.
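As an illustrative sketch (not a verbatim procedure from this article), attaching a Cloud Access-enabled subscription on the VM itself typically uses Red Hat's subscription-manager after the RHUI configuration has been removed:

```bash
# Register the VM with Red Hat and automatically attach an eligible subscription
sudo subscription-manager register --auto-attach
```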
-For more information about Red Hat subscription compliance, software updates, and sources for AHB for PAYG RHEL Virtual Machines, see the [Red Hat article about using RHEL subscriptions with AHB](https://access.redhat.com/articles/5419341).
+For more information about Red Hat subscription compliance, software updates, and sources for Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines, see the [Red Hat article about using RHEL subscriptions with Azure Hybrid Benefit](https://access.redhat.com/articles/5419341).
### SUSE compliance
-To use AHB for PAYG SLES Virtual Machines, and for information about moving from SLES PAYG to BYOS or moving from SLES BYOS to PAYG, see [SUSE Linux Enterprise and AHB](https://aka.ms/suse-ahb).
+To use Azure Hybrid Benefit for pay-as-you-go SLES virtual machines, and to get information about moving from SLES pay-as-you-go to BYOS or moving from SLES BYOS to pay-as-you-go, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://aka.ms/suse-ahb).
-Customers who use AHB for PAYG SLES Virtual Machines need to move the Cloud Update Infrastructure to one of three options that provide software updates and patches to those Virtual Machines:
+Customers who use Azure Hybrid Benefit for pay-as-you-go SLES virtual machines need to move the cloud update infrastructure to one of three options that provide software updates and patches to those virtual machines:
- [SUSE Customer Center](https://scc.suse.com)
- SUSE Manager
-- SUSE Repository Mirroring Tool (RMT)
+- SUSE Repository Mirroring Tool
+## Apply Azure Hybrid Benefit for pay-as-you-go virtual machines on reserved instances
-## AHB for PAYG Virtual Machines on Reserved Instances
+[Azure reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md) (Azure Reserved Virtual Machine Instances) help you save money by committing to one-year or three-year plans for multiple products. Azure Hybrid Benefit for pay-as-you-go virtual machines is available for reserved instances.
-Azure Reservations (Azure Reserved Virtual Machine Instances) help you save money by committing to one-year or three-year plans for multiple products. You can learn more about [Reserved instances here](../../cost-management-billing/reservations/save-compute-costs-reservations.md). The AHB for PAYG Virtual Machines is available for [Reserved Virtual Machine Instance(RIs)](../../cost-management-billing/reservations/save-compute-costs-reservations.md#charges-covered-by-reservation).
+This means that if you've purchased compute costs at a discounted rate by using reserved instances, you can apply Azure Hybrid Benefit on the licensing costs for RHEL and SUSE on top of it. The steps to apply Azure Hybrid Benefit for a reserved instance remain exactly the same as they are for a regular virtual machine.
-This means that if you have purchased compute costs at a discounted rate using RI, you can apply AHB benefit on the licensing costs for RHEL and SUSE on top of it. The steps to apply AHB benefit for an RI instance remains exactly same as it is for a regular Virtual Machine.
-![AHB for RIs](./media/azure-hybrid-benefit/reserved-instances.png)
+![Screenshot of the interface for purchasing reservations for virtual machines.](./media/azure-hybrid-benefit/reserved-instances.png)
>[!NOTE]
->If you have already purchased reservations for RHEL or SUSE PAYG software on Azure Marketplace, please wait for the reservation tenure to complete before using the AHB for PAYG Virtual Machines.
-
+>If you've already purchased reservations for RHEL or SUSE pay-as-you-go software on Azure Marketplace, please wait for the reservation tenure to finish before using Azure Hybrid Benefit for pay-as-you-go virtual machines.
## Frequently asked questions

*Q: Can I use a license type of `RHEL_BYOS` with a SLES image, or vice versa?*
-A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your Virtual Machine will not update any billing metadata. But if you accidentally enter the wrong license type, updating your Virtual Machine again to the correct license type will still enable AHB.
+A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your virtual machine will not update any billing metadata. But if you accidentally enter the wrong license type, updating your virtual machine again to the correct license type will still enable Azure Hybrid Benefit.
-*Q: I've registered with Red Hat Cloud Access but still can't enable AHB on my RHEL Virtual Machines. What should I do?*
+*Q: I've registered with Red Hat Cloud Access but still can't enable Azure Hybrid Benefit on my RHEL virtual machines. What should I do?*
A: It might take some time for your Red Hat Cloud Access subscription registration to propagate from Red Hat to Azure. If you still see the error after one business day, contact Microsoft support.
-*Q: I've deployed a Virtual Machine by using RHEL BYOS "golden image." Can I convert the billing on these images from BYOS to PAYG?*
+*Q: I've deployed a virtual machine by using a RHEL BYOS "golden image." Can I convert the billing on this image from BYOS to pay-as-you-go?*
-A: Yes, you can use the AHB for BYOS Virtual Machines capability to do this. You can [learn more about this capability here.](./azure-hybrid-benefit-byos-linux.md)
+A: Yes, you can use Azure Hybrid Benefit for BYOS virtual machines to do this. [Learn more about this capability](./azure-hybrid-benefit-byos-linux.md).
-*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to PAYG?*
+*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to pay-as-you-go?*
-A: Yes, you can use the AHB for BYOS Virtual Machines capability to do this. You can [learn more about this capability here.](./azure-hybrid-benefit-byos-linux.md)
+A: Yes, you can use Azure Hybrid Benefit for BYOS virtual machines to do this. [Learn more about this capability](./azure-hybrid-benefit-byos-linux.md).
-*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Do I need to do anything to benefit from AHB?*
+*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Do I need to do anything to benefit from Azure Hybrid Benefit?*
A: No, you don't. RHEL or SLES images that you upload are already considered BYOS, and you're charged only for Azure infrastructure costs. You're responsible for RHEL subscription costs, just as you are for your on-premises environments.
-*Q: Can I use AHB for PAYG Virtual Machines for Azure Marketplace RHEL and SLES SAP images?*
+*Q: Can I use Azure Hybrid Benefit for pay-as-you-go virtual machines for Azure Marketplace RHEL and SLES SAP images?*
-A: Yes, you can. You can use the license type of `RHEL_BYOS` for RHEL Virtual Machines and `SLES_BYOS` for conversions of Virtual Machines deployed from Azure Marketplace RHEL and SLES SAP images.
+A: Yes. You can use the license type of `RHEL_BYOS` for RHEL virtual machines and `SLES_BYOS` for conversions of virtual machines deployed from Azure Marketplace RHEL and SLES SAP images.
-*Q: Can I use AHB for PAYG Virtual Machines on virtual machine scale sets for RHEL and SLES?*
+*Q: Can I use Azure Hybrid Benefit for pay-as-you-go virtual machines on virtual machine scale sets for RHEL and SLES?*
-A: Yes, AHB on virtual machine scale sets for RHEL and SLES is available to all users. You can [learn more about this benefit and how to use it here](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md).
+A: Yes. Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES is available to all users. [Learn more about this benefit and how to use it](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md).
-*Q: Can I use AHB for PAYG Virtual Machines on reserved instances for RHEL and SLES?*
+*Q: Can I use Azure Hybrid Benefit for pay-as-you-go virtual machines on reserved instances for RHEL and SLES?*
-A: Yes, AHB for PAYG Virtual Machines on reserved instance for RHEL and SLES is available to all users.
+A: Yes. Azure Hybrid Benefit for pay-as-you-go virtual machines on reserved instances for RHEL and SLES is available to all users.
-*Q: Can I use AHB for PAYG Virtual Machines on a virtual machine deployed for SQL Server on RHEL images?*
+*Q: Can I use Azure Hybrid Benefit for pay-as-you-go virtual machines on a virtual machine deployed for SQL Server on RHEL images?*
-A: No, you can't. There is no plan for supporting these virtual machines.
+A: No, you can't. There's no plan for supporting these virtual machines.
-*Q: Can I use AHB on my RHEL Virtual Data Center subscription?*
+*Q: Can I use Azure Hybrid Benefit on my RHEL for Virtual Datacenters subscription?*
-A: No, you cannot. VDC is not supported on Azure at all, including AHB.
+A: No. RHEL for Virtual Datacenters isn't supported on Azure at all, including Azure Hybrid Benefit.
## Common problems
This section lists common problems that you might encounter and steps for mitigating them.
| Error | Mitigation |
| -- | - |
-| "The action could not be completed because our records show that you have not successfully enabled Red Hat Cloud Access on your Azure subscription." | To use AHB with RHEL Virtual Machines, you must first [register your Azure subscriptions with Red Hat Cloud Access](https://access.redhat.com/management/cloud).
+| "The action could not be completed because our records show that you have not successfully enabled Red Hat Cloud Access on your Azure subscription." | To use Azure Hybrid Benefit with RHEL virtual machines, you must first [register your Azure subscriptions with Red Hat Cloud Access](https://access.redhat.com/management/cloud).
## Next steps
-* [Learn how to create and update Virtual Machines and add license types (RHEL_BYOS, SLES_BYOS) for AHB by using the Azure CLI](/cli/azure/vm)
-* AHB on virtual machine scale sets for RHEL and SLES is available to all users. You can [learn more about this benefit and how to use it here](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md).
+* [Learn how to create and update virtual machines and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vm)
+* [Learn about Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES and how to use it](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md)
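For instance, here's a hedged sketch of the CLI approach from the first item above; the resource group, VM name, and RHEL image URN are illustrative placeholders, not prescribed values:

```azurecli
# Placeholder names and an illustrative RHEL image URN; creates a VM
# with the RHEL_BYOS license type so Azure Hybrid Benefit applies at creation.
az vm create \
  --resource-group myResourceGroup \
  --name myRhelVm \
  --image RedHat:RHEL:8-lvm-gen2:latest \
  --license-type RHEL_BYOS \
  --admin-username azureuser \
  --generate-ssh-keys
```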
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
If Image Builder did not create the staging resource group, but it did create re
## Properties: source
-The `source` section contains information about the source image that will be used by Image Builder. Image Builder currently only natively supports creating Hyper-V generation (Gen1) 1 images to the Azure Compute Gallery (SIG) or managed image. If you want to create Gen2 images, then you need to use a source Gen2 image, and distribute to VHD. After, you will then need to create a managed image from the VHD, and inject it into the SIG as a Gen2 image.
+The `source` section contains information about the source image that will be used by Image Builder.
The API requires a `SourceType` that defines the source for the image build. Currently there are three types:
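For example, a `PlatformImage` source looks like the following sketch; the publisher, offer, SKU, and version values are illustrative, not prescriptive:

```json
"source": {
    "type": "PlatformImage",
    "publisher": "Canonical",
    "offer": "UbuntuServer",
    "sku": "18.04-LTS",
    "version": "latest"
}
```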
The shell customizer supports running PowerShell scripts and inline commands, the
"type": "PowerShell", "name": "<name>", "inline": "<PowerShell syntax to run>",
- "validExitCodes": "<exit code>",
+ "validExitCodes": <exit code>,
"runElevated": <true or false> } ],
How to use the `validate` property to validate Windows images
"inline": [ "<command to run inline>" ],
- "validExitCodes": "<exit code>",
+ "validExitCodes": <exit code>,
"runElevated": <true or false>, "runAsSystem": <true or false> },
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Previously updated : 03/29/2022 Last updated : 08/01/2022
Sign in to the [Azure portal](https://portal.azure.com).
Create an SSH connection with the VM.
-1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a PowerShell prompt.
+1. If you are on a Mac or Linux machine, open a Bash prompt and set read-only permission on the .pem file using `chmod 400 ~/Downloads/myKey.pem`. If you are on a Windows machine, open a PowerShell prompt.
1. At your prompt, open an SSH connection to your virtual machine. Replace the IP address with the one from your VM, and replace the path to the `.pem` with the path to where the key file was downloaded.

```console
-ssh -i .\Downloads\myKey.pem azureuser@10.111.12.123
+ssh -i ~/Downloads/myKey.pem azureuser@10.111.12.123
```

> [!TIP]
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
These VMs are ideal for real-world Applied AI workloads, such as:
- GPU-accelerated analytics and databases
- Batch inferencing with heavy pre- and post-processing
-- Autonomous driving reinforcement learning
+- Autonomy model training
- Oil and gas reservoir simulation
- Machine learning (ML) development
- Video processing
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
virtual-network Virtual Network Scenario Udr Gw Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-scenario-udr-gw-nva.md
A common scenario among larger Azure customers is the need to provide a two-tiered
* All traffic going to the application server must go through a firewall virtual appliance. This virtual appliance will be used for access to the back-end server, and for access coming in from the on-premises network via a VPN Gateway.
* Administrators must be able to manage the firewall virtual appliances from their on-premises computers, by using a third firewall virtual appliance used exclusively for management purposes.
-This is a standard perimeter network (also knowns as DMZ) scenario with a DMZ and a protected network. Such scenario can be constructed in Azure by using NSGs,firewall virtual appliances, or a combination of both. The table below shows some of the pros and cons between NSGs and firewall virtual appliances.
+This is a standard perimeter network (also known as DMZ) scenario with a DMZ and a protected network. Such a scenario can be constructed in Azure by using NSGs, firewall virtual appliances, or a combination of both. The table below shows some of the pros and cons between NSGs and firewall virtual appliances.
| | Pros | Cons |
| --- | --- | --- |