Updates from: 02/24/2024 02:11:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-captcha.md
+
+ Title: Enable CAPTCHA in Azure Active Directory B2C
+description: How to enable CAPTCHA for user flows and custom policies in Azure Active Directory B2C.
++++ Last updated : 01/17/2024+++
+zone_pivot_groups: b2c-policy-type
+
+#Customer intent: As a developer, I want to enable CAPTCHA in consumer-facing application that is secured by Azure Active Directory B2C, so that I can protect my sign-in and sign-up flows from automated attacks.
+++
+# Enable CAPTCHA in Azure Active Directory B2C
++
+Azure Active Directory B2C (Azure AD B2C) allows you to enable CAPTCHA to help prevent automated attacks on your consumer-facing applications. Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges. You can enable this security feature in both sign-up and sign-in flows for your local accounts. CAPTCHA isn't applicable to social identity providers' sign-in.
+
+> [!NOTE]
+> This feature is in public preview.
+
+## Prerequisites
++
+## Enable CAPTCHA
++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
+
+1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
+
+1. Select **User flows**.
+
+1. Select the user flow for which you want to enable CAPTCHA. For example, *B2C_1_signinsignup*.
+
+1. Select **Properties**.
+
+1. Under **CAPTCHA (Preview)**, select the flow for which to enable CAPTCHA, such as **Enable CAPTCHA - Sign Up**.
+
+1. Select **Save**.
+
+## Test the user flow
+
+Use the steps in [Test the user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow#test-the-user-flow-1) to test and confirm that CAPTCHA is enabled for your chosen flow. You should be prompted to enter the characters you see or hear, depending on the CAPTCHA type (visual or audio) that you choose.
++++
+To enable CAPTCHA in your custom policy, you need to update your existing custom policy files. If you don't have any existing custom policy files, [download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`. In this article, we update the XML files in the */Display Controls Starterpack/LocalAccounts/* folder.
+
+### Declare claims
+
+You need to declare additional claims to enable CAPTCHA in your custom policy:
+
+1. In VS Code, open the *TrustFrameworkBase.XML* file.
+
+1. In the `ClaimsSchema` section, declare claims by using the following code:
+
+ ```xml
+ <!--<ClaimsSchema>-->
+ ...
+ <ClaimType Id="inputSolution">
+ <DataType>string</DataType>
+ </ClaimType>
+
+ <ClaimType Id="solved">
+ <DataType>boolean</DataType>
+ </ClaimType>
+
+ <ClaimType Id="reason">
+ <DataType>string</DataType>
+ </ClaimType>
+
+ <ClaimType Id="azureregion">
+ <DataType>string</DataType>
+ </ClaimType>
+
+ <ClaimType Id="challengeId">
+ <DisplayName>The ID of the generated captcha</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Captcha challenge identifier</UserHelpText>
+ <UserInputType>Paragraph</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="challengeType">
+ <DisplayName>Type of captcha (visual / audio)</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Captcha challenge type</UserHelpText>
+ <UserInputType>Paragraph</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="challengeString">
+ <DisplayName>Captcha challenge code</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Captcha challenge code</UserHelpText>
+ <UserInputType>Paragraph</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="captchaEntered">
+ <DisplayName>Captcha entered by the user</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Enter the characters you see</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="isCaptchaSolved">
+ <DisplayName>Flag indicating that the captcha was successfully solved</DisplayName>
+ <DataType>boolean</DataType>
+ </ClaimType>
+ ...
+ <!--</ClaimsSchema>-->
+ ```
+
+### Configure a display control
+
+To enable CAPTCHA for your custom policy, you use a [CAPTCHA display Control](display-control-captcha.md). The CAPTCHA display control generates and renders the CAPTCHA image.
+
+In the *TrustFrameworkBase.XML* file, locate the `DisplayControls` element, then add the following display control as a child element. If you don't already have a `DisplayControls` element, add one.
+
+```xml
+<!--<DisplayControls>-->
+...
+<DisplayControl Id="captchaControlChallengeCode" UserInterfaceControlType="CaptchaControl" DisplayName="Help us beat the bots">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" />
+ <InputClaim ClaimTypeReferenceId="challengeId" />
+ </InputClaims>
+
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="challengeType" ControlClaimType="ChallengeType" />
+ <DisplayClaim ClaimTypeReferenceId="challengeId" ControlClaimType="ChallengeId" />
+ <DisplayClaim ClaimTypeReferenceId="challengeString" ControlClaimType="ChallengeString" />
+ <DisplayClaim ClaimTypeReferenceId="captchaEntered" ControlClaimType="CaptchaEntered" />
+ </DisplayClaims>
+
+ <Actions>
+ <Action Id="GetChallenge">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile
+ TechnicalProfileReferenceId="HIP-GetChallenge" />
+ </ValidationClaimsExchange>
+ </Action>
+
+ <Action Id="VerifyChallenge">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile
+ TechnicalProfileReferenceId="HIP-VerifyChallenge" />
+ </ValidationClaimsExchange>
+ </Action>
+ </Actions>
+</DisplayControl>
+...
+<!--</DisplayControls>-->
+```
+
+### Configure a CAPTCHA technical profile
+
+The Azure AD B2C [CAPTCHA technical profile](captcha-technical-profile.md) verifies the CAPTCHA challenge. This technical profile can generate a CAPTCHA code or verify it, depending on how you configure it.
+
+In the *TrustFrameworkBase.XML* file, locate the `ClaimsProviders` element and add the claims provider by using the following code:
+
+```xml
+<!--<ClaimsProviders>-->
+...
+<ClaimsProvider>
+
+ <DisplayName>HIPChallenge</DisplayName>
+
+ <TechnicalProfiles>
+
+ <TechnicalProfile Id="HIP-GetChallenge">
+ <DisplayName>GetChallenge</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Operation">GetChallenge</Item>
+ <Item Key="Brand">HIP</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" />
+ </InputClaims>
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="challengeString" />
+ </DisplayClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="challengeId" />
+ <OutputClaim ClaimTypeReferenceId="challengeString" PartnerClaimType="ChallengeString" />
+ <OutputClaim ClaimTypeReferenceId="azureregion" />
+ </OutputClaims>
+ </TechnicalProfile>
+ <TechnicalProfile Id="HIP-VerifyChallenge">
+ <DisplayName>Verify Code</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Brand">HIP</Item>
+ <Item Key="Operation">VerifyChallenge</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" DefaultValue="Visual" />
+ <InputClaim ClaimTypeReferenceId="challengeId" />
+ <InputClaim ClaimTypeReferenceId="captchaEntered" PartnerClaimType="inputSolution" Required="true" />
+ <InputClaim ClaimTypeReferenceId="azureregion" />
+ </InputClaims>
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="captchaEntered" />
+ </DisplayClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="challengeId" />
+ <OutputClaim ClaimTypeReferenceId="isCaptchaSolved" PartnerClaimType="solved" />
+ <OutputClaim ClaimTypeReferenceId="reason" PartnerClaimType="reason" />
+ </OutputClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+</ClaimsProvider>
+...
+<!--</ClaimsProviders>-->
+```
+
+The CAPTCHA technical profile that you configure with the *GetChallenge* operation generates and displays the CAPTCHA challenge string. The CAPTCHA technical profile that you configure with the *VerifyChallenge* operation verifies the challenge string that the user inputs.
+
+### Update content definition's page layouts
+
+For the various page layouts, use the following page layout versions:
+
+|Page layout |Page layout version range |
+| --- | --- |
+| Selfasserted | >=2.1.29 |
+| Unifiedssp | >=2.1.17 |
+| Multifactor | >=1.2.15 |
+
+**Example:**
+
+In the *TrustFrameworkBase.XML* file, under the `ContentDefinitions` element, locate the content definition with *Id="api.localaccountsignup"*, then update its *DataUri* as shown in the following code:
+
+```xml
+<!--<ContentDefinitions>-->
+...
+<ContentDefinition Id="api.localaccountsignup">
+ ...
+ <!--Update this DataUri-->
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.29</DataUri>
+ ...
+</ContentDefinition>
+...
+<!--</ContentDefinitions>-->
+```
+We specify the selfasserted page layout version as *2.1.29*, which is in the range that supports CAPTCHA.
+
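+If you also enable CAPTCHA for sign-in, update the content definition that your sign-in page uses in the same way. The following minimal sketch assumes your sign-in page uses the *api.signuporsignin* content definition with a unifiedssp page layout:
+
+```xml
+<!--<ContentDefinitions>-->
+...
+<ContentDefinition Id="api.signuporsignin">
+ ...
+ <!--Update this DataUri to a unifiedssp version in the range shown in the preceding table-->
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.17</DataUri>
+ ...
+</ContentDefinition>
+...
+<!--</ContentDefinitions>-->
+```
+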
+Once you configure your technical profiles and display controls, you can specify the flow for which you want to enable CAPTCHA.
+
+### Enable CAPTCHA for sign-up or sign-in flow
+
+To enable CAPTCHA for your sign-up or sign-in flow, use the following steps:
+
+1. Inspect your sign-up or sign-in user journey, such as *SignUpOrSignIn*, to identify the self-asserted technical profile that displays your sign-up or sign-in experience.
+
+1. In the technical profile, such as *LocalAccountSignUpWithLogonEmail*, add a metadata key and a display claim entry as shown in the following code:
+
+```xml
+<TechnicalProfile Id="LocalAccountSignUpWithLogonEmail">
+ ...
+ <Metadata>
+ ...
+ <!--Add this metadata entry. Set value to true to activate CAPTCHA-->
+ <Item Key="setting.enableCaptchaChallenge">true</Item>
+ ...
+ </Metadata>
+ ...
+ <DisplayClaims>
+ ...
+ <!--Add this display claim, which is a reference to the captcha display control-->
+ <DisplayClaim DisplayControlReferenceId="captchaControlChallengeCode" />
+ ...
+ </DisplayClaims>
+ ...
+</TechnicalProfile>
+```
+The display claim entry references the display control that you configured earlier.
+
+### Enable CAPTCHA in MFA flow
+
+To enable CAPTCHA in the MFA flow, you need to update two technical profiles, the self-asserted technical profile and the [phone factor technical profile](phone-factor-technical-profile.md):
+
+1. Inspect your sign-up or sign-in user journey, such as *SignUpOrSignIn*, to identify the self-asserted and phone factor technical profiles that are responsible for your sign-up or sign-in flow.
+
+1. In both of the technical profiles, add a metadata key and a display claim entry. The following code shows the phone factor technical profile; a sketch for the self-asserted technical profile follows it:
+
+```xml
+<TechnicalProfile Id="PhoneFactor-InputOrVerify">
+ ...
+ <Metadata>
+ ...
+ <!--Add this metadata entry. Value set to true-->
+ <Item Key="setting.enableCaptchaChallenge">true</Item>
+ ...
+ </Metadata>
+ ...
+ <DisplayClaims>
+ ...
+ <!--Add this display claim-->
+ <DisplayClaim DisplayControlReferenceId="captchaControlChallengeCode" />
+ ...
+ </DisplayClaims>
+ ...
+</TechnicalProfile>
+```
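+
+The self-asserted technical profile in your MFA-enabled journey takes the same two additions. The following minimal sketch assumes the journey uses the starter pack's *SelfAsserted-LocalAccountSignin-Email* technical profile; adjust the technical profile ID to match your own policy:
+
+```xml
+<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+  ...
+  <Metadata>
+    ...
+    <!--Add this metadata entry. Set value to true to activate CAPTCHA-->
+    <Item Key="setting.enableCaptchaChallenge">true</Item>
+    ...
+  </Metadata>
+  ...
+  <DisplayClaims>
+    ...
+    <!--Add this display claim, which references the CAPTCHA display control-->
+    <DisplayClaim DisplayControlReferenceId="captchaControlChallengeCode" />
+    ...
+  </DisplayClaims>
+  ...
+</TechnicalProfile>
+```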
+
+> [!NOTE]
+> - You can't add CAPTCHA to an MFA step in a sign-up only user flow.
+> - In an MFA flow, CAPTCHA is applicable when the MFA method you select is SMS or phone call, SMS only, or Phone call only.
+
+## Upload the custom policy files
+
+Use the steps in [Upload the policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy#upload-the-policies) to upload your custom policy files.
+
+## Test the custom policy
+
+Use the steps in [Test the custom policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#test-the-custom-policy) to test and confirm that CAPTCHA is enabled for your chosen flow. You should be prompted to enter the characters you see or hear, depending on the CAPTCHA type (visual or audio) that you choose.
+
+## Next steps
+
+- Learn how to [Define a CAPTCHA technical profile](captcha-technical-profile.md).
+- Learn how to [Configure CAPTCHA display control](display-control-captcha.md).
active-directory-b2c Captcha Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/captcha-technical-profile.md
+
+ Title: Define a CAPTCHA technical profile in a custom policy
+
+description: Define a CAPTCHA technical profile custom policy in Azure Active Directory B2C.
+++++++ Last updated : 01/17/2024+++
+#Customer intent: As a developer integrating a customer-facing application with Azure AD B2C, I want to define a CAPTCHA technical profile, so that I can secure sign-up and sign-in flows from automated attacks.
++
+# Define a CAPTCHA technical profile in an Azure Active Directory B2C custom policy
++
+A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technical profile enables Azure Active Directory B2C (Azure AD B2C) to prevent automated attacks. Azure AD B2C's CAPTCHA technical profile supports both audio and visual CAPTCHA challenge types.
+
+## Protocol
+
+The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **Handler** attribute must contain the fully qualified name of the protocol handler assembly that Azure AD B2C uses for CAPTCHA:
+`Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null`
+
+> [!NOTE]
+> This feature is in public preview.
+
+The following example shows the **Protocol** element of a CAPTCHA technical profile:
+
+```xml
+<TechnicalProfile Id="HIP-GetChallenge">
+ <DisplayName>GetChallenge</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+```
+## CAPTCHA technical profile operations
+
+The CAPTCHA technical profile supports two operations:
+
+- **Get challenge operation** generates the CAPTCHA code string, then displays it on the user interface by using a [CAPTCHA display control](display-control-captcha.md). The display includes an input textbox. This operation directs the user to input the characters they see or hear into the input textbox. The user can switch between visual and audio challenge types as needed.
+
+- **Verify code operation** verifies the characters input by the user.
+
+## Get challenge
+
+The first operation generates the CAPTCHA code string, then displays it on the user interface.
+
+### Input claims
+
+The **InputClaims** element contains a list of claims to send to Azure AD B2C's CAPTCHA service.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| challengeType | No | The CAPTCHA challenge type, Audio or Visual (default).|
+| azureregion | Yes | The service region that serves the CAPTCHA challenge request. |
+
+### Display claims
+
+The **DisplayClaims** element contains a list of claims to be presented on the screen for the user to see. For example, the user is presented with the CAPTCHA challenge code to read.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| challengeString | Yes | The CAPTCHA challenge code.|
++
+### Output claims
+
+The **OutputClaims** element contains a list of claims returned by the CAPTCHA technical profile.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| challengeId | Yes | A unique identifier for CAPTCHA challenge code.|
+| challengeString | Yes | The CAPTCHA challenge code.|
+| azureregion | Yes | The service region that serves the CAPTCHA challenge request.|
++
+### Metadata
+
+| Attribute | Required | Description |
+| --- | --- | --- |
+| Operation | Yes | Value must be *GetChallenge*.|
+| Brand | Yes | Value must be *HIP*.|
+
+### Example: Generate CAPTCHA code
+
+The following example shows a CAPTCHA technical profile that you use to generate a code:
+
+```xml
+<TechnicalProfile Id="HIP-GetChallenge">
+ <DisplayName>GetChallenge</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+
+ <Metadata>
+ <Item Key="Operation">GetChallenge</Item>
+ <Item Key="Brand">HIP</Item>
+ </Metadata>
+
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" />
+ </InputClaims>
+
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="challengeString" />
+ </DisplayClaims>
+
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="challengeId" />
+ <OutputClaim ClaimTypeReferenceId="challengeString" PartnerClaimType="ChallengeString" />
+ <OutputClaim ClaimTypeReferenceId="azureregion" />
+ </OutputClaims>
+
+</TechnicalProfile>
+```
++
+## Verify challenge
+
+The second operation verifies the CAPTCHA challenge.
+
+### Input claims
+
+The **InputClaims** element contains a list of claims to send to Azure AD B2C's CAPTCHA service.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| challengeType | No | The CAPTCHA challenge type, Audio or Visual (default).|
+|challengeId| Yes | A unique identifier for CAPTCHA used for session verification. Populated from the *GetChallenge* call. |
+|captchaEntered| Yes | The challenge code that the user inputs into the challenge textbox on the user interface. |
+|azureregion| Yes | The service region that serves the CAPTCHA challenge request. Populated from the *GetChallenge* call.|
++
+### Display claims
+
+The **DisplayClaims** element contains a list of claims to be presented on the screen for collecting an input from the user.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| captchaEntered | Yes | The CAPTCHA challenge code entered by the user.|
+
+### Output claims
+
+The **OutputClaims** element contains a list of claims returned by the CAPTCHA technical profile.
+
+| ClaimReferenceId | Required | Description |
+| --- | --- | --- |
+| challengeId | Yes | A unique identifier for CAPTCHA used for session verification.|
+| isCaptchaSolved | Yes | A flag indicating whether the CAPTCHA challenge is successfully solved.|
+| reason | Yes | Used to communicate to the user whether the attempt to solve the challenge is successful or not. |
+
+### Metadata
+
+| Attribute | Required | Description |
+| --- | --- | --- |
+| Operation | Yes | Value must be **VerifyChallenge**.|
+| Brand | Yes | Value must be **HIP**.|
+
+### Example: Verify CAPTCHA code
+
+The following example shows a CAPTCHA technical profile that you use to verify a CAPTCHA code:
+
+```xml
+ <TechnicalProfile Id="HIP-VerifyChallenge">
+ <DisplayName>Verify Code</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.CaptchaProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Brand">HIP</Item>
+ <Item Key="Operation">VerifyChallenge</Item>
+ </Metadata>
+
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" DefaultValue="Visual" />
+ <InputClaim ClaimTypeReferenceId="challengeId" />
+ <InputClaim ClaimTypeReferenceId="captchaEntered" PartnerClaimType="inputSolution" Required="true" />
+ <InputClaim ClaimTypeReferenceId="azureregion" />
+ </InputClaims>
+
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="captchaEntered" />
+ </DisplayClaims>
+
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="challengeId" />
+ <OutputClaim ClaimTypeReferenceId="isCaptchaSolved" PartnerClaimType="solved" />
+ <OutputClaim ClaimTypeReferenceId="reason" PartnerClaimType="reason" />
+ </OutputClaims>
+
+ </TechnicalProfile>
+```
+
+## Next steps
+
+- [Enable CAPTCHA in Azure Active Directory B2C](add-captcha.md).
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 01/11/2024 Last updated : 02/24/2024
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
| [Phone sign-up and sign-in](phone-authentication-user-flows.md) | GA | GA | | | [Conditional Access and Identity Protection](conditional-access-user-flow.md) | GA | GA | Not available for SAML applications | | [Smart lockout](threat-management.md) | GA | GA | |
+| [CAPTCHA](add-captcha.md) | Preview | Preview | You can enable it during sign-up or sign-in for Local accounts. |
## OAuth 2.0 application authorization flows
The following table summarizes the Security Assertion Markup Language (SAML) app
|[Amazon](identity-provider-amazon.md) | GA | GA | | |[Apple](identity-provider-apple-id.md) | GA | GA | | |[Microsoft Entra ID (Single-tenant)](identity-provider-azure-ad-single-tenant.md) | GA | GA | |
-|[Microsoft Entra ID (Multi-tenant)](identity-provider-azure-ad-multi-tenant.md) | NA | GA | |
+|[Microsoft Entra ID (multitenant)](identity-provider-azure-ad-multi-tenant.md) | NA | GA | |
|[Azure AD B2C](identity-provider-azure-ad-b2c.md) | GA | GA | | |[eBay](identity-provider-ebay.md) | NA | Preview | | |[Facebook](identity-provider-facebook.md) | GA | GA | |
active-directory-b2c Display Control Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-captcha.md
+
+ Title: Verify CAPTCHA code using CAPTCHA display controls
+
+description: Learn how to define a CAPTCHA display controls custom policy in Azure AD B2C.
+++++++ Last updated : 01/17/2024+++
+#Customer intent: As a developer integrating customer-facing apps with Azure AD B2C, I want to learn how to define a CAPTCHA display control for Azure AD B2C's custom policies so that I can protect my authentication flows from automated attacks.
++
+# Verify CAPTCHA challenge string using CAPTCHA display control
+
+Use CAPTCHA display controls to generate a CAPTCHA challenge string, then verify it by asking the user to enter what they see or hear. To display a CAPTCHA display control, you reference it from a [self-asserted technical profile](self-asserted-technical-profile.md), and you must set the self-asserted technical profile's `setting.enableCaptchaChallenge` metadata value to *true*.
+
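+For example, a minimal sketch of a self-asserted technical profile that references this display control might look like the following (the technical profile ID is illustrative; use the IDs defined in your own policy):
+
+```xml
+<TechnicalProfile Id="LocalAccountSignUpWithLogonEmail">
+  ...
+  <Metadata>
+    <!--Activates the CAPTCHA challenge for this self-asserted page-->
+    <Item Key="setting.enableCaptchaChallenge">true</Item>
+  </Metadata>
+  ...
+  <DisplayClaims>
+    <!--References the CAPTCHA display control-->
+    <DisplayClaim DisplayControlReferenceId="captchaControlChallengeCode" />
+  </DisplayClaims>
+  ...
+</TechnicalProfile>
+```
+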
+The following screenshot shows the CAPTCHA display control on a sign-up page:
++
+The sign-up page loads with the CAPTCHA display control. The user then inputs the characters they see or hear. The **Send verification code** button sends a verification code to the user's email. It isn't a CAPTCHA display control element, but selecting it causes the CAPTCHA challenge string to be verified.
+
+## CAPTCHA display control elements
+
+This table summarizes the elements that a CAPTCHA display control contains.
+
+| Element | Required | Description |
+| --- | --- | --- |
+| UserInterfaceControlType | Yes | Value must be *CaptchaControl*.|
+| InputClaims | Yes | One or more claims required as input to specify the CAPTCHA challenge type and to uniquely identify the challenge. |
+| DisplayClaims | Yes | The claims to be shown to the user, such as the CAPTCHA challenge code, or collected from the user, such as the code the user inputs. |
+| OutputClaim | No | Any claim to be returned to the self-asserted page after the user completes the CAPTCHA code verification process. |
+| Actions | Yes | The CAPTCHA display control contains two actions, *GetChallenge* and *VerifyChallenge*. <br> The *GetChallenge* action generates, then displays, a CAPTCHA challenge code on the user interface. <br> The *VerifyChallenge* action verifies the CAPTCHA challenge code that the user inputs. |
+
+The following XML snippet shows an example of a CAPTCHA display control:
+
+```xml
+<DisplayControls>
+ ...
+ <DisplayControl Id="captchaControlChallengeCode" UserInterfaceControlType="CaptchaControl" DisplayName="Help us beat the bots">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="challengeType" />
+ <InputClaim ClaimTypeReferenceId="challengeId" />
+ </InputClaims>
+
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="challengeType" ControlClaimType="ChallengeType" />
+ <DisplayClaim ClaimTypeReferenceId="challengeId" ControlClaimType="ChallengeId" />
+ <DisplayClaim ClaimTypeReferenceId="challengeString" ControlClaimType="ChallengeString" />
+ <DisplayClaim ClaimTypeReferenceId="captchaEntered" ControlClaimType="CaptchaEntered" />
+ </DisplayClaims>
+
+ <Actions>
+ <Action Id="GetChallenge">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile
+ TechnicalProfileReferenceId="HIP-GetChallenge" />
+ </ValidationClaimsExchange>
+ </Action>
+
+ <Action Id="VerifyChallenge">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile
+ TechnicalProfileReferenceId="HIP-VerifyChallenge" />
+ </ValidationClaimsExchange>
+ </Action>
+ </Actions>
+ </DisplayControl>
+ ...
+</DisplayControls>
+```
+
+## Next steps
+
+- [Enable CAPTCHA in Azure Active Directory B2C](add-captcha.md).
active-directory-b2c Display Control Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-verification.md
Last updated 01/11/2024+
active-directory-b2c Display Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-controls.md
The **DisplayControl** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- | | `Id` | Yes | An identifier that's used for the display control. It can be [referenced](#referencing-display-controls). |
-| `UserInterfaceControlType` | Yes | The type of the display control. Currently supported is [VerificationControl](display-control-verification.md), and [TOTP controls](display-control-time-based-one-time-password.md). |
+| `UserInterfaceControlType` | Yes | The type of the display control. Currently supported is [VerificationControl](display-control-verification.md), [TOTP controls](display-control-time-based-one-time-password.md), and [CAPTCHA controls](display-control-captcha.md). |
### Verification control
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
Title: Localization string IDs - Azure Active Directory B2C
-description: Specify the IDs for a content definition with an ID of api.signuporsignin in a custom policy in Azure Active Directory B2C.
+description: Specify the IDs for a content definition with an ID of api.signuporsignin in a custom policy in Azure AD B2C.
Previously updated : 01/11/2024 Last updated : 02/24/2024
-#Customer intent: As a developer implementing user interface localization in Azure Active Directory B2C, I want to access the list of localization string IDs, so that I can use them in my policy to support multiple locales or languages in the user journeys.
+#Customer intent: As a developer implementing user interface localization in Azure AD B2C, I want to access the list of localization string IDs, so that I can use them in my policy to support multiple locales or languages in the user journeys.
The following IDs are used for a content definition with an ID of `api.signupors
| `button_signin` | Sign in | `All` | | `social_intro` | Sign in with your social account | `All` | | `remember_me` |Keep me signed in. | `All` |
-| `unknown_error` | We are having trouble signing you in. Please try again later. | `All` |
+| `unknown_error` | We're having trouble signing you in. Please try again later. | `All` |
| `divider_title` | OR | `All` | | `local_intro_email` | Sign in with your existing account | `< 2.0.0` | | `logonIdentifier_email` | Email Address | `< 2.0.0` |
The following IDs are used for a content definition with an ID of `api.signupors
| `requiredField_password` | Please enter your password | `< 2.0.0` | | `createaccount_link` | Sign up now | `< 2.0.0` | | `cancel_message` | The user has forgotten their password | `< 2.0.0` |
-| `invalid_password` | The password you entered is not in the expected format. | `< 2.0.0` |
+| `invalid_password` | The password you entered isn't in the expected format. | `< 2.0.0` |
| `createaccount_one_link` | Sign up now | `>= 2.0.0` | | `createaccount_two_links` | Sign up with {0} or {1} | `>= 2.0.0` | | `createaccount_three_links` | Sign up with {0}, {1}, or {2} | `>= 2.0.0` |
The following IDs are used for a content definition having an ID of `api.localac
| `month` | Month | | `ver_success_msg` | E-mail address verified. You can now continue. | | `months` | January, February, March, April, May, June, July, August, September, October, November, December |
-| `ver_fail_server` | We are having trouble verifying your email address. Please enter a valid email address and try again. |
+| `ver_fail_server` | We're having trouble verifying your email address. Please enter a valid email address and try again. |
| `error_requiredFieldMissing` | A required field is missing. Please fill out all required fields and try again. | | `heading` | User Details | | `initial_intro` | Please provide the following details. |
The following IDs are used for a content definition having an ID of `api.localac
### Sign-up and self-asserted pages disclaimer links
-The following `UxElement` string IDs will display disclaimer link(s) at the bottom of the self-asserted page. These links are not displayed by default unless specified in the localized strings.
+The following `UxElement` string IDs display disclaimer links at the bottom of the self-asserted page. These links aren't displayed by default unless specified in the localized strings.
| ID | Example value | | | - |
The following example shows the use of some of the user interface elements in th
![Sign-up page with its UI element names labeled](./media/localization-string-ids/localization-sign-up.png)
-The following example shows the use of some of the user interface elements in the sign-up page, after user clicks on send verification code button:
+The following example shows the use of some of the user interface elements in the sign-up page, after the user selects the send verification code button:
![Sign-up page email verification UX elements](./media/localization-string-ids/localization-email-verification.png)
The following IDs are used for [Microsoft Entra ID SSPR technical profile](aad-s
## One-time password error messages
-The following IDs are used for a [one-time password technical profile](one-time-password-technical-profile.md) error messages
+The following IDs are used for a [one-time password technical profile](one-time-password-technical-profile.md) error messages.
| ID | Default value | Description | | | - | -- |
-| `UserMessageIfSessionDoesNotExist` | No | The message to display to the user if the code verification session has expired. It is either the code has expired or the code has never been generated for a given identifier. |
-| `UserMessageIfMaxRetryAttempted` | No | The message to display to the user if they've exceeded the maximum allowed verification attempts. |
+| `UserMessageIfSessionDoesNotExist` | No | The message to display to the user if the code verification session has expired, either because the code expired or because a code was never generated for the given identifier. |
+| `UserMessageIfMaxRetryAttempted` | No | The message to display to the user if they exceed the maximum allowed verification attempts. |
| `UserMessageIfMaxNumberOfCodeGenerated` | No | The message to display to the user if the code generation has exceeded the maximum allowed number of attempts. |
-| `UserMessageIfInvalidCode` | No | The message to display to the user if they've provided an invalid code. |
-| `UserMessageIfVerificationFailedRetryAllowed` | No | The message to display to the user if they've provided an invalid code, and user is allowed to provide the correct code. |
-| `UserMessageIfSessionConflict` | No | The message to display to the user if the code cannot be verified.|
+| `UserMessageIfInvalidCode` | No | The message to display to the user if they enter an invalid code. |
+| `UserMessageIfVerificationFailedRetryAllowed` | No | The message to display to the user if they enter an invalid code, and the user is allowed to provide the correct code. |
+| `UserMessageIfSessionConflict` | No | The message to display to the user if the code can't be verified.|
### One time password example
The following IDs are used for claims transformations error messages:
| `UserMessageIfClaimsTransformationStringsAreNotEqual` |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".| ### Claims transformations example 1:
-This example shows localized messages for local account signup.
+This example shows localized messages for local account sign-up.
```xml <LocalizedResources Id="api.localaccountsignup.en">
This example shows localized messages for local account password reset.
</LocalizedResources> ```
+## CAPTCHA display control user interface elements
+
+The following IDs are used for a [CAPTCHA display control](display-control-captcha.md):
+
+| ID | Default value | Description |
+| --- | --- | --- |
+| `newCaptcha_arialabel` | Create new CAPTCHA | The tooltip message to display to the user when they move the mouse pointer over the CAPTCHA replay icon. |
+| `switchCaptchaType_title` | Switch CAPTCHA type to {0} | The tooltip message to display to the user when they move the mouse pointer over the CAPTCHA audio or image icon. |
+| `captchatype_visual_help` | Enter the characters you see | The placeholder text in the input box where the user inputs the CAPTCHA code if the user is in visual mode. |
+| `captchatype_audio_title` | Press audio button to play the challenge | The tooltip message to display to the user when they move the mouse pointer over the CAPTCHA speaker icon if the user switches to audio mode. |
+| `captchatype_audio_help` | Enter the characters you hear | The placeholder text in the input box where the user inputs the CAPTCHA code if the user switches to audio mode. |
+| `charsnotmatched_error` | The characters did not match for CAPTCHA challenge. Please try again | The message to display to the user if they enter a wrong CAPTCHA code. |
+| `api_error` | API error on CAPTCHA control | The message to display to the user if an error occurs while Azure AD B2C attempts to validate the CAPTCHA code. |
+| `captcha_resolved` | Success! | The message to display to the user if they enter a correct CAPTCHA code. |
+|`DisplayName`| Help us beat the bots. | The CAPTCHA display control's display name. |
+
+### CAPTCHA display control example
+
+This example shows localized messages for CAPTCHA display control.
+
+```xml
+ <LocalizedResources Id="api.localaccountsignup.en">
+ <LocalizedStrings>
+ <LocalizedString ElementType="UxElement" StringId="newCaptcha_arialabel">Create new CAPTCHA</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="switchCaptchaType_title">Switch CAPTCHA type to {0}</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="captchatype_visual_help">Enter the characters you see</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="captchatype_audio_title">Press audio button to play the challenge</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="captchatype_audio_help"> Enter the characters you hear</LocalizedString>
+ <LocalizedString ElementType="ErrorMessage" StringId="charsnotmatched_error"> The characters did not match for CAPTCHA challenge. Please try again</LocalizedString>
+ <LocalizedString ElementType="ErrorMessage" StringId="api_error"> Api error on CAPTCHA control</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="captcha_resolved"> Success!</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="captchaControlChallengeCode" StringId="DisplayName">Help us beat the bots</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedResources>
+```
+ ## Next steps See the following articles for localization examples: -- [Language customization with custom policy in Azure Active Directory B2C](language-customization.md)-- [Language customization with user flows in Azure Active Directory B2C](language-customization.md)
+- [Language customization with custom policy in Azure AD B2C](language-customization.md)
+- [Language customization with user flows in Azure AD B2C](language-customization.md)
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
+**2.1.29**
+
+- Add CAPTCHA
+ **2.1.26** -- Replaced `Keypress` to `Key Down` event and avoid `Asterisk` for non-required in classic mode.
+- Replaced `Keypress` with `Key Down` event and avoided `Asterisk` for nonrequired fields in classic mode.
**2.1.25** - Fixed content security policy (CSP) violation and remove additional request header X-Aspnetmvc-Version. -- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_).- **2.1.24** - Fixed accessibility bugs.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**2.1.21** -- Additional sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow).
+- More sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow).
**2.1.20** - Fixed Enter event trigger on MFA. - CSS changes rendering page text/control in vertical manner for small screens **2.1.19**-- Fixed accessibility bugs.-- Handled Undefined Error message for existing user sign up.-- Moved Password mismatch error to Inline instead of page level.-- Accessibility changes related to High Contrast button display and anchor focus improvements
+- Fix accessibility bugs.
+- Handle Undefined Error message for existing user sign-up.
+- Move Password mismatch error to Inline instead of page level.
**2.1.18** - Add asterisk for required fields-- TOTP Store Icons position fixes for Classic Template
+- Fix TOTP Store Icons position for Classic Template
- Activate input items only when verification code is verified - Add Alt Text for Background Image - Added customization for server errors by TOTP verification
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Add descriptive error message and fixed forgotPassword link - Make checkbox as group - Enforce Validation Error Update on control change and enable continue on email verified-- Added additional field to error code to validation failure response
+- Add more field to error code to validation failure response
**2.1.16**-- Fixed "Claims for verification control have not been verified" bug while verifying code.
+- Fixed "Claims for verification control haven't been verified" bug while verifying code.
- Hide error message on validation succeeds and send code to verify **2.1.15**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**1.2.0** -- The username/email and password fields now use the `form` HTML element to allow Edge and Internet Explorer (IE) to properly save this information.
+- The username/email and password fields now use the `form` HTML element to allow Microsoft Edge and Internet Explorer (IE) to properly save this information.
- Added a configurable user input validation delay for improved user experience. - Accessibility fixes-- Fixed an accessibility issue so that error messages are now read by Narrator.
+- Fix an accessibility issue so that error messages are read by Narrator.
- Focus is now placed on the password field after the email is verified. - Removed `autofocus` from the checkbox control. - Added support for a display control for phone number verification. - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files. - Control the order in which your `script` tags are fetched and executed before the page load.-- Email field is now `type=email` and mobile keyboards will provide the correct suggestions.-- Support for Chrome translate.
+- Email field is now `type=email` and mobile keyboards provide the correct suggestions.
+- Support for Chrome translate.
- Added support for company branding in user flow pages. **1.1.0**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
> [!TIP] > If you localize your page to support multiple locales, or languages in a user flow. The [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+**2.1.17**
+
+- Add CAPTCHA.
+ **2.1.14** - Replaced `Keypress` to `Key Down` event. **2.1.13** -- Fixed content security policy (CSP) violation and remove additional request header X-Aspnetmvc-Version--- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_)
+- Fixed content security policy (CSP) violation and removed the extra request header X-Aspnetmvc-Version
**2.1.12**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**1.2.0** -- The username/email and password fields now use the `form` HTML element to allow Edge and Internet Explorer (IE) to properly save this information.
+- The username/email and password fields now use the `form` HTML element to allow Microsoft Edge and Internet Explorer (IE) to properly save this information.
- Accessibility fixes - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files. - Control the order in which your `script` tags are fetched and executed before the page load.-- Email field is now `type=email` and mobile keyboards will provide the correct suggestions.-- Support for Chrome translate.
+- Email field is now `type=email` and mobile keyboards provide the correct suggestions.
+- Support for Chrome translate.
- Added support for tenant branding in user flow pages. **1.1.0**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## MFA page (multifactor)
+**1.2.15**
+
+- Add CAPTCHA to MFA page.
+ **1.2.12** - Replaced `KeyPress` to `KeyDown` event.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**1.2.9** -- Fixed `Enter` event trigger on MFA.
+- Fix `Enter` event trigger on MFA.
- CSS changes render page text/control in vertical manner for small screens -- Fixed Multifactor tab navigation bug.
+- Fix Multifactor tab navigation bug.
**1.2.8**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Minor bug fixes. **1.2.2**-- Fixed an issue with auto-filling the verification code when using iOS.
+- Fixed an issue with autofilling the verification code when using iOS.
- Fixed an issue with redirecting a token to the relying party from Android Webview. -- Added a UXString `heading` in addition to `intro` to display on the page as a title. This messages is hidden by default.
+- Added a UXString `heading` in addition to `intro` to display on the page as a title. This message is hidden by default.
- Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray). **1.2.1**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files. - Control the order in which your `script` tags are fetched and executed before the page load.-- Email field is now `type=email` and mobile keyboards will provide the correct suggestions-- Support for Chrome translate.
+- Email field is now `type=email` and mobile keyboards provide the correct suggestions
+- Support for Chrome translate.
- Added support for tenant branding in user flow pages. **1.1.0**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files. - Control the order in which your `script` tags are fetched and executed before the page load.-- Email field is now `type=email` and mobile keyboards will provide the correct suggestions-- Support for Chrome translate
+- Email field is now `type=email` and mobile keyboards provide the correct suggestions
+- Support for Chrome translate
**1.1.0**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files. - Control the order in which your `script` tags are fetched and executed before the page load.-- Email field is now `type=email` and mobile keyboards will provide the correct suggestions-- Support for Chrome translate
+- Email field is now `type=email` and mobile keyboards provide the correct suggestions
+- Support for Chrome translate
**1.0.0**
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
- Title: Configure Asignio with Azure Active Directory B2C for multifactor authentication-
-description: Configure Azure Active Directory B2C with Asignio for multifactor authentication
---- Previously updated : 01/26/2024---
-zone_pivot_groups: b2c-policy-type
-
-# Customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
--
-# Configure Asignio with Azure Active Directory B2C for multifactor authentication
-
-Learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Asignio](https://www.web.asignio.com/). With this integration, you can provide a passwordless, soft biometric, and multifactor authentication experience to customers. Asignio uses patented Asignio Signature and live facial verification for user authentication. The changeable biometric signature helps to reduce passwords, fraud, phishing, and credential reuse through omni-channel authentication.
-
-## Before you begin
-
-Choose a policy type selector to indicate the policy type setup. Azure AD B2C has two methods to define how users interact with your applications:
-
-* Predefined user flows
-* Configurable custom policies
-
-The steps in this article differ for each method.
-
-Learn more:
-
-* [User flows and custom policies overview](user-flow-overview.md)
-* [Azure AD B2C custom policy overview](custom-policy-overview.md)
--
-## Prerequisites
-
-* An Azure subscription.
-
-* If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/).
-- An Azure AD B2C tenant linked to the Azure subscription. See [Tutorial: Create an Azure Active Directory B2C tenant](./tutorial-create-tenant.md).
-- An Asignio Client ID and Client Secret issued by Asignio. These tokens are obtained by registering your mobile or web applications with Asignio.
-### For custom policies
-
-Complete [Tutorial: Create user flows and custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
-
-## Scenario description
-
-This integration includes the following components:
-
-* **Azure AD B2C** - authorization server that verifies user credentials
-* **Web or mobile applications** - to secure with Asignio MFA
-* **Asignio web application** - signature biometric collection on the user touch device
-
-The following diagram illustrates the implementation.
-
- ![Diagram showing the implementation architecture.](./media/partner-asignio/partner-asignio-architecture-diagram.png)
--
-1. The user opens the Azure AD B2C sign-in page on their mobile or web application, and then signs in or signs up.
-2. Azure AD B2C redirects the user to Asignio using an OpenID Connect (OIDC) request.
-3. The user is redirected to the Asignio web application for biometric sign in. If the user hasn't registered their Asignio Signature, they can use an SMS One-Time-Password (OTP) to authenticate. After authentication, user receives a registration link to create their Asignio Signature.
-4. The user authenticates with Asignio Signature and facial verification, or voice and facial verification.
-5. The challenge response goes to Asignio.
-6. Asignio returns the OIDC response to the Azure AD B2C sign-in.
-7. Azure AD B2C sends an authentication verification request to Asignio to confirm receipt of the authentication data.
-8. The user is granted or denied access to the application.
-
-## Configure an application with Asignio
-
-You configure an application with Asignio by using the Asignio Partner Administration site.
-
-1. Go to the [Asignio Partner Administration](https://partner.asignio.com) page to request access for your organization.
-2. With your credentials, sign in to Asignio Partner Administration.
-3. Create a record for the Azure AD B2C application using your Azure AD B2C tenant. When you use Azure AD B2C with Asignio, Azure AD B2C manages connected applications. Asignio apps represent apps in the Azure portal.
-4. In the Asignio Partner Administration site, generate a Client ID and Client Secret.
-5. Note and store Client ID and Client Secret. You'll use them later. Asignio doesn't store Client Secrets.
-6. Enter the redirect URI in your site that the user is returned to after authentication. Use the following URI pattern:
-
-`https://<your-b2c-domain>.b2clogin.com/<your-b2c-domain>.onmicrosoft.com/oauth2/authresp`
-
-7. Upload a company logo. It appears on Asignio authentication when users sign in.
-
-## Register a web application in Azure AD B2C
-
-Register applications in a tenant you manage, then they can interact with Azure AD B2C.
-
-Learn more: [Application types that can be used in Active Directory B2C](application-types.md)
-
-For this tutorial, you're registering `https://jwt.ms`, a Microsoft web application with decoded token contents that don't leave your browser.
-
-### Register a web application and enable ID token implicit grant
-
-Complete [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md?tabs=app-reg-ga)
-
-## Configure Asignio as an identity provider in Azure AD B2C
-
-For the following instructions, use the Microsoft Entra tenant with the Azure subscription.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the Global Administrator of the Azure AD B2C tenant.
-2. In the Azure portal toolbar, select **Directories + subscriptions**.
-3. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Microsoft Entra directory.
-4. Select **Switch**.
-5. In the top-left corner of the Azure portal, select **All services**.
-6. Search for and select **Azure AD B2C**.
-7. In the Azure portal, search for and select **Azure AD B2C**.
-8. In the left menu, select **Identity providers**.
-9. Select **New OpenID Connect Provider**.
-10. Select **Identity provider type** > **OpenID Connect**.
-11. For **Name**, enter *Asignio sign-in*, or a name you choose.
-12. For **Metadata URL**, enter `https://authorization.asignio.com/.well-known/openid-configuration`.
-13. For **Client ID**, enter the Client ID you generated.
-14. For **Client Secret**, enter the Client Secret you generated.
-15. For **Scope**, use **openid email profile**.
-16. For **Response type**, use **code**.
-17. For **Response mode**, use **query**.
-18. For **Domain hint**, use `https://asignio.com`.
-19. Select **OK**.
-20. Select **Map this identity provider's claims**.
-21. For **User ID**, use **sub**.
-22. For **Display Name**, use **name**.
-23. For **Given Name**, use **given_name**.
-24. For **Surname**, use **family_name**.
-25. For **Email**, use **email**.
-26. Select **Save**.
-
-## Create a user flow policy
-
-1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
-2. Select **New user flow**.
-3. Select **Sign up and sign in** user flow type.
-4. Select **Version Recommended**.
-5. Select **Create**.
-6. Enter a user flow **Name**, such as `AsignioSignupSignin`.
-7. Under **Identity providers**, for **Local Accounts**, select **None**. This action disables email and password authentication.
-8. For **Custom identity providers**, select the created Asignio Identity provider.
-9. Select **Create**.
-
-## Test your user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-2. Select the created user flow.
-3. For **Application**, select the web application you registered. The **Reply URL** is `https://jwt.ms`.
-4. Select **Run user flow**.
-5. The browser is redirected to the Asignio sign-in page.
-6. A sign-in screen appears.
-7. At the bottom, select **Asignio** authentication.
-
-If you have an Asignio Signature, complete the prompt to authenticate. If not, supply the device phone number to authenticate via SMS OTP. Use the link to register your Asignio Signature.
-
-8. The browser is redirected to `https://jwt.ms`. The token contents returned by Azure AD B2C appear.
-
-## Create Asignio policy key
-
-1. Store the generated Client Secret in the Azure AD B2C tenant.
-2. Sign in to the [Azure portal](https://portal.azure.com/).
-3. In the portal toolbar, select **Directories + subscriptions**.
-4. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Azure AD B2C directory.
-5. Select **Switch**.
-6. In the top-left corner of the Azure portal, select **All services**.
-7. Search for and select **Azure AD B2C**.
-8. On the Overview page, select **Identity Experience Framework**.
-9. Select **Policy Keys**.
-10. Select **Add**.
-11. For **Options**, select **Manual**.
-12. Enter a **Name** for the policy key. The prefix `B2C_1A_` is appended to the key name.
-13. In **Secret**, enter the Client Secret that you noted.
-14. For **Key usage**, select **Signature**.
-15. Select **Create**.
-
-## Configure Asignio as an Identity provider
-
->[!TIP]
->Before you begin, ensure the Azure AD B2C policy is configured. If not, follow the instructions in [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
-
-For users to sign in with Asignio, define Asignio as a claims provider that Azure AD B2C communicates with through an endpoint. The endpoint provides claims that Azure AD B2C uses to verify user authentication using a digital ID on the device.
-
-### Add Asignio as a claims provider
-
-Get the custom policy starter packs from GitHub, then update the XML files in the LocalAccounts starter pack with your Azure AD B2C tenant name:
-
-1. Download the zip [active-directory-b2c-custom-policy-starterpack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
-
- ```
- git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
- ```
-
-2. In the files in the **LocalAccounts** directory, replace the string `yourtenant` with the Azure AD B2C tenant name.
-3. Open **LocalAccounts/TrustFrameworkExtensions.xml**.
-4. Find the **ClaimsProviders** element. If there isn't one, add it under the root element, `TrustFrameworkPolicy`.
-5. Add a new **ClaimsProvider** similar to the following example:
-
- ```xml
- <ClaimsProvider>
- <Domain>contoso.com</Domain>
- <DisplayName>Asignio</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="Asignio-Oauth2">
- <DisplayName>Asignio</DisplayName>
- <Description>Login with your Asignio account</Description>
- <Protocol Name="OAuth2" />
- <Metadata>
- <Item Key="ProviderName">authorization.asignio.com</Item>
- <Item Key="authorization_endpoint">https://authorization.asignio.com/authorize</Item>
- <Item Key="AccessTokenEndpoint">https://authorization.asignio.com/token</Item>
- <Item Key="ClaimsEndpoint">https://authorization.asignio.com/userinfo</Item>
- <Item Key="ClaimsEndpointAccessTokenName">access_token</Item>
- <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
- <Item Key="HttpBinding">POST</Item>
- <Item Key="scope">openid profile email</Item>
- <Item Key="UsePolicyInRedirectUri">0</Item>
- <!-- Update the Client ID below to the Asignio Application ID -->
- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
- <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item>
--
- <!-- trying to add additional claim-->
- <!--Insert b2c-extensions-app application ID here, for example: 11111111-1111-1111-1111-111111111111-->
- <Item Key="11111111-1111-1111-1111-111111111111"></Item>
- <!--Insert b2c-extensions-app application ObjectId here, for example: 22222222-2222-2222-2222-222222222222-->
- <Item Key="22222222-2222-2222-2222-222222222222"></Item>
- <!-- The key below allows you to specify each of the Azure AD tenants that can be used to sign in. Update the GUIDs below for each tenant. -->
- <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111</Item>-->
- <!-- The commented key below specifies that users from any tenant can sign-in. Uncomment if you would like anyone with an Azure AD account to be able to sign in. -->
- <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/</Item>
- </Metadata>
- <CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_AsignioSecret" />
- </CryptographicKeys>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
- <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" DefaultValue="https://authorization.asignio.com" />
- <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
- <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
- <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
- </OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
- ```
-
-6. Set **client_id** with the Asignio Application ID you noted.
-7. Update **client_secret** section with the policy key you created. For example, `B2C_1A_AsignioSecret`:
-
- ```xml
- <Key Id="client_secret" StorageReferenceId="B2C_1A_AsignioSecret" />
- ```
-
-8. Save the changes.
-
-## Add a user journey
-
-The identity provider isn't in the sign in pages.
-
-1. If you have a custom user journey continue to **Configure the relying party policy**, otherwise, copy a template user journey:
-2. From the starter pack, open the **LocalAccounts/ TrustFrameworkBase.xml**.
-3. Locate and copy the contents of the **UserJourney** element that include `Id=SignUpOrSignIn`.
-4. Open the **LocalAccounts/ TrustFrameworkExtensions.xml**.
-5. Locate the **UserJourneys** element. If there isn't one, add one.
-6. Paste the UserJourney element contents as a child of the UserJourneys element.]
-7. Rename the user journey **ID**. For example, `Id=AsignioSUSI`.
-
-Learn more: [User journeys](custom-policy-overview.md#user-journeys)
-
-## Add the identity provider to a user journey
-
-Add the new identity provider to the user journey.
-
-1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element has an identity provider list that users sign in with. The order of the elements controls the order of the sign in buttons.
-2. Add a **ClaimsProviderSelection** XML element.
-3. Set the value of **TargetClaimsExchangeId** to a friendly name.
-4. Add a **ClaimsExchange** element.
-5. Set the **Id** to the value of the target claims exchange ID.
-6. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created.
-
-The following XML demonstrates user journey orchestration with the identity provider.
-
-```xml
- <UserJourney Id="AsignioSUSI">
- <OrchestrationSteps>
- <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
- <ClaimsProviderSelections>
- <ClaimsProviderSelection TargetClaimsExchangeId="AsignioExchange" />
- <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange" />
- </ClaimsProviderSelections>
- <ClaimsExchanges>
- <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <!-- Check if the user has selected to sign in using one of the social providers -->
- <OrchestrationStep Order="2" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
- <Value>objectId</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="AsignioExchange" TechnicalProfileReferenceId="Asignio-Oauth2" />
- <ClaimsExchange Id="SignUpWithLogonEmailExchange" TechnicalProfileReferenceId="LocalAccountSignUpWithLogonEmail" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="3" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
- <Value>authenticationSource</Value>
- <Value>localAccountAuthentication</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="AADUserReadUsingAlternativeSecurityId" TechnicalProfileReferenceId="AAD-UserReadUsingAlternativeSecurityId-NoError" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <!-- Show self-asserted page only if the directory does not have the user account already (i.e. we do not have an objectId). This can only happen when authentication happened using a social IDP. If local account was created or authentication done using ESTS in step 2, then an user account must exist in the directory by this time. -->
- <OrchestrationStep Order="4" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
- <Value>objectId</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <!-- This step reads any user attributes that we may not have received when authenticating using ESTS so they can be sent in the token. -->
- <OrchestrationStep Order="5" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
- <Value>authenticationSource</Value>
- <Value>socialIdpAuthentication</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <!-- The previous step (SelfAsserted-Social) could have been skipped if there were no attributes to collect from the user. So, in that case, create the user in the directory if one does not already exist (verified using objectId which would be set from the last step if account was created in the directory. -->
- <OrchestrationStep Order="6" Type="ClaimsExchange">
- <Preconditions>
- <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
- <Value>objectId</Value>
- <Action>SkipThisOrchestrationStep</Action>
- </Precondition>
- </Preconditions>
- <ClaimsExchanges>
- <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
- </OrchestrationSteps>
- <ClientDefinition ReferenceId="DefaultWeb" />
- </UserJourney>
-```
-
-## Configure the relying party policy
-
-The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/main/LocalAccounts/SignUpOrSignin.xml), specifies the user journey Azure AD B2C executes.
-
-1. In the relying party, locate the **DefaultUserJourney** element.
-2. Update the **ReferenceId** to match the user journey ID, in which you added the identity provider.
-
-In the following example, for the `AsignioSUSI` user journey, the **ReferenceId** is set to `AsignioSUSI`:
-
-```xml
- <RelyingParty>
- <DefaultUserJourney ReferenceId="AsignioSUSI" />
- <TechnicalProfile Id="PolicyProfile">
- <DisplayName>PolicyProfile</DisplayName>
- <Protocol Name="OpenIdConnect" />
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="displayName" />
- <OutputClaim ClaimTypeReferenceId="givenName" />
- <OutputClaim ClaimTypeReferenceId="surname" />
- <OutputClaim ClaimTypeReferenceId="email" />
- <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
- <OutputClaim ClaimTypeReferenceId="identityProvider" />
- <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
- <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
- </OutputClaims>
- <SubjectNamingInfo ClaimType="sub" />
- </TechnicalProfile>
- </RelyingParty>
-
-```
-
-## Upload the custom policy
-
-1. Sign in to the [Azure portal](https://portal.azure.com/#home).
-2. In the portal toolbar, select the **Directories + subscriptions**.
-3. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Azure AD B2C directory.
-4. Select **Switch**.
-5. In the Azure portal, search for and select **Azure AD B2C**.
-6. Under Policies, select **Identity Experience Framework**.
-7. Select **Upload Custom Policy**.
-8. Upload the two policy files you changed in the following order:
-
- * Extension policy, for example `TrustFrameworkExtensions.xml`
- * Relying party policy, such as `SignUpOrSignin.xml`
-
-## Test your custom policy
-
-1. In your Azure AD B2C tenant, and under **Policies**, select **Identity Experience Framework**.
-2. Under **Custom policies**, select **AsignioSUSI**.
-3. For **Application**, select the web application that you registered. The **Reply URL** is `https://jwt.ms`.
-4. Select **Run now**.
-5. The browser is redirected to the Asignio sign in page.
-6. A sign in screen appears.
-7. At the bottom, select **Asignio** authentication.
-
-If you have an Asignio Signature, you're prompted to authenticate with your Asignio Signature. If not, supply the device phone number to authenticate via SMS OTP. Use the link to register your Asignio Signature.
-
-8. The browser is redirected to `https://jwt.ms`. The token contents returned by Azure AD B2C appear.
-
-## Next steps
-
-* [Solutions and Training for Azure Active Directory B2C](solution-articles.md)
-* Ask questions on [Stackoverflow](https://stackoverflow.com/questions/tagged/azure-ad-b2c)
-* [Azure AD B2C Samples](https://stackoverflow.com/questions/tagged/azure-ad-b2c)
-* YouTube: [Identity Azure AD B2C Series](https://www.youtube.com/playlist?list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0)
-* [Azure AD B2C custom policy overview](custom-policy-overview.md)
-* [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
+ Title: Configure Asignio with Azure Active Directory B2C for multifactor authentication
+
+description: Learn how to configure Azure Active Directory B2C with Asignio for multifactor authentication
++++ Last updated : 01/26/2024+++
+zone_pivot_groups: b2c-policy-type
+
+# Customer intent: As a developer integrating Asignio with Azure AD B2C for multifactor authentication, I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
++
+# Configure Asignio with Azure Active Directory B2C for multifactor authentication
+
+Learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Asignio](https://www.web.asignio.com/). With this integration, you can provide a passwordless, soft biometric, and multifactor authentication experience to your customers. Asignio uses its patented Asignio Signature and live facial verification for user authentication. The changeable biometric signature helps to reduce passwords, fraud, phishing, and credential reuse through omni-channel authentication.
+
+## Before you begin
+
+Use the policy type selector at the top of this article to indicate the type of policy you're setting up. Azure AD B2C offers two methods to define how users interact with your applications:
+
+* Predefined user flows
+* Configurable custom policies
+
+The steps in this article differ for each method.
+
+Learn more:
+
+* [User flows and custom policies overview](user-flow-overview.md)
+* [Azure AD B2C custom policy overview](custom-policy-overview.md)
++
+## Prerequisites
+
+* An Azure subscription. If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/).
+* An Azure AD B2C tenant linked to the Azure subscription. See [Tutorial: Create an Azure Active Directory B2C tenant](./tutorial-create-tenant.md).
+* An Asignio Client ID and Client Secret issued by Asignio. You obtain these credentials by registering your mobile or web applications with Asignio.
+
+### For custom policies
+
+Complete [Tutorial: Create user flows and custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
+## Scenario description
+
+This integration includes the following components:
+
+* **Azure AD B2C** - the authorization server that verifies user credentials
+* **Web or mobile applications** - the applications you secure with Asignio MFA
+* **Asignio web application** - collects the user's signature biometric on their touch device
+
+The following diagram illustrates the implementation.
+
+ ![Diagram showing the implementation architecture.](./media/partner-asignio/partner-asignio-architecture-diagram.png)
++
+1. The user opens the Azure AD B2C sign-in page in their mobile or web application, and then signs in or signs up.
+2. Azure AD B2C redirects the user to Asignio by using an OpenID Connect (OIDC) request (see the sketch after these steps for the shape of this request).
+3. The user is redirected to the Asignio web application for biometric sign-in. If the user hasn't registered their Asignio Signature, they can authenticate with an SMS one-time password (OTP). After authentication, the user receives a registration link to create their Asignio Signature.
+4. The user authenticates with Asignio Signature and facial verification, or voice and facial verification.
+5. The challenge response goes to Asignio.
+6. Asignio returns the OIDC response to Azure AD B2C.
+7. Azure AD B2C sends an authentication verification request to Asignio to confirm receipt of the authentication data.
+8. The user is granted or denied access to the application.
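+
+To make the redirect in step 2 concrete, the following minimal sketch builds the kind of OpenID Connect authorization request that Azure AD B2C sends to Asignio. It only illustrates the request shape: the endpoint and scope values come from the identity provider configuration later in this article, while the client ID, tenant name (`contoso`), and state value are placeholders.
+
+```python
+from urllib.parse import urlencode
+
+# Values below are illustrative; the endpoint and scope come from the Asignio
+# identity provider configuration shown later in this article.
+authorize_endpoint = "https://authorization.asignio.com/authorize"
+params = {
+    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder Asignio Client ID
+    "response_type": "code",
+    "response_mode": "query",
+    "scope": "openid profile email",
+    # Redirect URI registered with Asignio (hypothetical tenant name 'contoso')
+    "redirect_uri": "https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp",
+    "state": "opaque-state-value",  # placeholder
+}
+print(f"{authorize_endpoint}?{urlencode(params)}")
+```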
+
+## Configure an application with Asignio
+
+Configure an application with Asignio by using the Asignio Partner Administration site.
+
+1. Go to the [Asignio Partner Administration](https://partner.asignio.com) page to request access for your organization.
+2. With your credentials, sign in to the Asignio Partner Administration site.
+3. Create a record for the Azure AD B2C application by using your Azure AD B2C tenant. When you use Azure AD B2C with Asignio, Azure AD B2C manages your connected applications; Asignio apps represent the apps in the Azure portal.
+4. In the Asignio Partner Administration site, generate a Client ID and Client Secret.
+5. Note and store the Client ID and Client Secret; you use them later. Asignio doesn't store Client Secrets.
+6. Enter the redirect URI in your site that the user is returned to after authentication. Use the following URI pattern. For example, for a tenant named *contoso*, the redirect URI is `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+
+   `https://<your-b2c-domain>.b2clogin.com/<your-b2c-domain>.onmicrosoft.com/oauth2/authresp`
+
+7. Upload a company logo. It appears on Asignio authentication when users sign in.
+
+## Register a web application in Azure AD B2C
+
+Before your applications can interact with Azure AD B2C, register them in a tenant you manage.
+
+Learn more: [Application types that can be used in Active Directory B2C](application-types.md)
+
+For this tutorial, you register `https://jwt.ms`, a Microsoft web application that displays the decoded contents of a token. The token contents never leave your browser.
+
+### Register a web application and enable ID token implicit grant
+
+Complete [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md?tabs=app-reg-ga)
+
+## Configure Asignio as an identity provider in Azure AD B2C
+
+For the following instructions, use the directory that contains your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the Global Administrator of the Azure AD B2C tenant.
+2. In the Azure portal toolbar, select **Directories + subscriptions**.
+3. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Azure AD B2C directory.
+4. Select **Switch**.
+5. In the top-left corner of the Azure portal, select **All services**.
+6. Search for and select **Azure AD B2C**.
+7. In the left menu, select **Identity providers**.
+8. Select **New OpenID Connect Provider**.
+9. For **Identity provider type**, select **OpenID Connect**.
+10. For **Name**, enter *Asignio sign in*, or a name of your choice.
+11. For **Metadata URL**, enter `https://authorization.asignio.com/.well-known/openid-configuration`. (The sketch after these steps shows one way to inspect this document.)
+12. For **Client ID**, enter the Client ID you generated.
+13. For **Client Secret**, enter the Client Secret you generated.
+14. For **Scope**, use **openid email profile**.
+15. For **Response type**, use **code**.
+16. For **Response mode**, use **query**.
+17. For **Domain hint**, use `https://asignio.com`.
+18. Select **OK**.
+19. Select **Map this identity provider's claims**.
+20. For **User ID**, use **sub**.
+21. For **Display Name**, use **name**.
+22. For **Given Name**, use **given_name**.
+23. For **Surname**, use **family_name**.
+24. For **Email**, use **email**.
+25. Select **Save**.
+
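+If you want to confirm what the metadata document from step 11 contains, a minimal sketch like the following fetches it and prints a few standard OpenID Connect discovery fields. It assumes outbound internet access and uses only the Python standard library.
+
+```python
+import json
+import urllib.request
+
+# Asignio OIDC discovery document referenced in the steps above.
+metadata_url = "https://authorization.asignio.com/.well-known/openid-configuration"
+
+with urllib.request.urlopen(metadata_url) as response:
+    metadata = json.load(response)
+
+# Print a few standard discovery fields, if present.
+for field in ("issuer", "authorization_endpoint", "token_endpoint", "jwks_uri"):
+    print(field, "=>", metadata.get(field))
+```
+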
+## Create a user flow policy
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+2. Select **New user flow**.
+3. Select **Sign up and sign in** user flow type.
+4. Select **Version Recommended**.
+5. Select **Create**.
+6. Enter a user flow **Name**, such as `AsignioSignupSignin`.
+7. Under **Identity providers**, for **Local Accounts**, select **None**. This action disables email and password authentication.
+8. For **Custom identity providers**, select the Asignio identity provider you created.
+9. Select **Create**.
+
+## Test your user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+2. Select the created user flow.
+3. For **Application**, select the web application you registered. The **Reply URL** is `https://jwt.ms`.
+4. Select **Run user flow**.
+5. The browser is redirected to the Asignio sign in page.
+6. A sign in screen appears.
+7. At the bottom, select **Asignio** authentication.
+
+If you have an Asignio Signature, complete the prompt to authenticate. If not, supply the device phone number to authenticate via SMS OTP. Use the link to register your Asignio Signature.
+
+8. The browser is redirected to `https://jwt.ms`. The token contents returned by Azure AD B2C appear.
+
+## Create Asignio policy key
+
+Store the Client Secret that you generated in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the portal toolbar, select **Directories + subscriptions**.
+3. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Azure AD B2C directory.
+4. Select **Switch**.
+5. In the top-left corner of the Azure portal, select **All services**.
+6. Search for and select **Azure AD B2C**.
+7. On the Overview page, select **Identity Experience Framework**.
+8. Select **Policy Keys**.
+9. Select **Add**.
+10. For **Options**, select **Manual**.
+11. Enter a **Name** for the policy key. The prefix `B2C_1A_` is appended to the key name.
+12. In **Secret**, enter the Client Secret that you noted.
+13. For **Key usage**, select **Signature**.
+14. Select **Create**.
+
+## Configure Asignio as an Identity provider
+
+>[!TIP]
+>Before you begin, ensure the Azure AD B2C policy is configured. If not, follow the instructions in [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+
+For users to sign in with Asignio, define Asignio as a claims provider that Azure AD B2C communicates with through an endpoint. The endpoint provides a set of claims that Azure AD B2C uses to verify that a user authenticated by using a digital ID available on their device.
+
+### Add Asignio as a claims provider
+
+Get the custom policy starter packs from GitHub, then update the XML files in the LocalAccounts starter pack with your Azure AD B2C tenant name:
+
+1. Download the zip [active-directory-b2c-custom-policy-starterpack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
+
+ ```
+ git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
+ ```
+
+2. In the files in the **LocalAccounts** directory, replace the string `yourtenant` with your Azure AD B2C tenant name (the sketch after these steps shows one way to do this in bulk).
+3. Open **LocalAccounts/TrustFrameworkExtensions.xml**.
+4. Find the **ClaimsProviders** element. If there isn't one, add it under the root element, `TrustFrameworkPolicy`.
+5. Add a new **ClaimsProvider** similar to the following example:
+
+ ```xml
+ <ClaimsProvider>
+ <Domain>contoso.com</Domain>
+ <DisplayName>Asignio</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Asignio-Oauth2">
+ <DisplayName>Asignio</DisplayName>
+ <Description>Login with your Asignio account</Description>
+ <Protocol Name="OAuth2" />
+ <Metadata>
+ <Item Key="ProviderName">authorization.asignio.com</Item>
+ <Item Key="authorization_endpoint">https://authorization.asignio.com/authorize</Item>
+ <Item Key="AccessTokenEndpoint">https://authorization.asignio.com/token</Item>
+ <Item Key="ClaimsEndpoint">https://authorization.asignio.com/userinfo</Item>
+ <Item Key="ClaimsEndpointAccessTokenName">access_token</Item>
+ <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="scope">openid profile email</Item>
+ <Item Key="UsePolicyInRedirectUri">0</Item>
+ <!-- Update the Client ID below to the Asignio Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item>
++
+ <!-- trying to add additional claim-->
+ <!--Insert b2c-extensions-app application ID here, for example: 11111111-1111-1111-1111-111111111111-->
+ <Item Key="11111111-1111-1111-1111-111111111111"></Item>
+ <!--Insert b2c-extensions-app application ObjectId here, for example: 22222222-2222-2222-2222-222222222222-->
+ <Item Key="22222222-2222-2222-2222-222222222222"></Item>
+ <!-- The key below allows you to specify each of the Azure AD tenants that can be used to sign in. Update the GUIDs below for each tenant. -->
+ <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111</Item>-->
+ <!-- The commented key below specifies that users from any tenant can sign-in. Uncomment if you would like anyone with an Azure AD account to be able to sign in. -->
+ <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_AsignioSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" DefaultValue="https://authorization.asignio.com" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+6. Set **client_id** to the Asignio Application ID that you noted.
+7. Update the **client_secret** section with the policy key you created. For example, `B2C_1A_AsignioSecret`:
+
+ ```xml
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_AsignioSecret" />
+ ```
+
+8. Save the changes.
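+
+The following sketch shows one way to do the bulk `yourtenant` replacement from step 2. It assumes you run it from the root of the starter pack and that your tenant name is `contoso`; adjust both as needed.
+
+```python
+from pathlib import Path
+
+# Hypothetical tenant name; replace it with your Azure AD B2C tenant name.
+tenant_name = "contoso"
+
+# Replace the 'yourtenant' placeholder in every XML policy file under LocalAccounts.
+for policy_file in Path("LocalAccounts").glob("*.xml"):
+    text = policy_file.read_text(encoding="utf-8")
+    policy_file.write_text(text.replace("yourtenant", tenant_name), encoding="utf-8")
+    print(f"Updated {policy_file}")
+```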
+
+## Add a user journey
+
+At this point, the identity provider has been set up, but it's not yet available on any of the sign-in pages.
+
+1. If you have a custom user journey, continue to **Configure the relying party policy**. Otherwise, copy a template user journey:
+2. From the starter pack, open **LocalAccounts/TrustFrameworkBase.xml**.
+3. Locate and copy the contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+4. Open **LocalAccounts/TrustFrameworkExtensions.xml**.
+5. Locate the **UserJourneys** element. If there isn't one, add one.
+6. Paste the **UserJourney** element contents as a child of the **UserJourneys** element.
+7. Rename the user journey **Id**. For example, `Id="AsignioSUSI"`.
+
+Learn more: [User journeys](custom-policy-overview.md#user-journeys)
+
+## Add the identity provider to a user journey
+
+Add the new identity provider to the user journey.
+
+1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element has an identity provider list that users sign in with. The order of the elements controls the order of the sign in buttons.
+2. Add a **ClaimsProviderSelection** XML element.
+3. Set the value of **TargetClaimsExchangeId** to a friendly name.
+4. Add a **ClaimsExchange** element.
+5. Set the **Id** to the value of the target claims exchange ID.
+6. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created.
+
+The following XML demonstrates user journey orchestration with the identity provider.
+
+```xml
+ <UserJourney Id="AsignioSUSI">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection TargetClaimsExchangeId="AsignioExchange" />
+ <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange" />
+ </ClaimsProviderSelections>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- Check if the user has selected to sign in using one of the social providers -->
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AsignioExchange" TechnicalProfileReferenceId="Asignio-Oauth2" />
+ <ClaimsExchange Id="SignUpWithLogonEmailExchange" TechnicalProfileReferenceId="LocalAccountSignUpWithLogonEmail" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Value>authenticationSource</Value>
+ <Value>localAccountAuthentication</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadUsingAlternativeSecurityId" TechnicalProfileReferenceId="AAD-UserReadUsingAlternativeSecurityId-NoError" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- Show self-asserted page only if the directory does not have the user account already (i.e. we do not have an objectId). This can only happen when authentication happened using a social IDP. If local account was created or authentication done using ESTS in step 2, then an user account must exist in the directory by this time. -->
+ <OrchestrationStep Order="4" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- This step reads any user attributes that we may not have received when authenticating using ESTS so they can be sent in the token. -->
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Value>authenticationSource</Value>
+ <Value>socialIdpAuthentication</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- The previous step (SelfAsserted-Social) could have been skipped if there were no attributes to collect from the user. So, in that case, create the user in the directory if one does not already exist (verified using objectId which would be set from the last step if account was created in the directory. -->
+ <OrchestrationStep Order="6" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+ </OrchestrationSteps>
+ <ClientDefinition ReferenceId="DefaultWeb" />
+ </UserJourney>
+```
+
+## Configure the relying party policy
+
+The relying party policy, for example [SignUpOrSignin.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/main/LocalAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C executes.
+
+1. In the relying party policy, locate the **DefaultUserJourney** element.
+2. Update **ReferenceId** to match the ID of the user journey in which you added the identity provider.
+
+In the following example, for the `AsignioSUSI` user journey, the **ReferenceId** is set to `AsignioSUSI`:
+
+```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="AsignioSUSI" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+
+```
+
+## Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+2. In the portal toolbar, select **Directories + subscriptions**.
+3. On **Portal settings | Directories + subscriptions**, in the **Directory name** list, locate your Azure AD B2C directory.
+4. Select **Switch**.
+5. In the Azure portal, search for and select **Azure AD B2C**.
+6. Under **Policies**, select **Identity Experience Framework**.
+7. Select **Upload Custom Policy**.
+8. Upload the two policy files you changed in the following order:
+
+ * Extension policy, for example `TrustFrameworkExtensions.xml`
+ * Relying party policy, such as `SignUpOrSignin.xml`
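+
+If you'd rather script the upload than use the portal, Microsoft Graph exposes a beta endpoint for trust framework policies. The following is a rough sketch only, not the method this article prescribes: it assumes the `requests` package, an access token that carries the `Policy.ReadWrite.TrustFramework` permission, and a policy ID that matches the `PolicyId` in your XML file. Verify the endpoint and permission names against the Microsoft Graph documentation before relying on it.
+
+```python
+import requests  # assumes the 'requests' package is installed
+
+# Assumed inputs: a valid Microsoft Graph access token and the policy ID from your XML.
+access_token = "<access-token-with-Policy.ReadWrite.TrustFramework>"
+policy_id = "B2C_1A_TrustFrameworkExtensions"
+
+with open("LocalAccounts/TrustFrameworkExtensions.xml", "rb") as policy_file:
+    policy_xml = policy_file.read()
+
+response = requests.put(
+    f"https://graph.microsoft.com/beta/trustFramework/policies/{policy_id}/$value",
+    headers={
+        "Authorization": f"Bearer {access_token}",
+        "Content-Type": "application/xml",
+    },
+    data=policy_xml,
+)
+response.raise_for_status()
+print("Uploaded", policy_id)
+```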
+
+## Test your custom policy
+
+1. In your Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
+2. Under **Custom policies**, select **AsignioSUSI**.
+3. For **Application**, select the web application that you registered. The **Reply URL** is `https://jwt.ms`.
+4. Select **Run now**.
+5. The browser is redirected to the Asignio sign in page.
+6. A sign in screen appears.
+7. At the bottom, select **Asignio** authentication.
+
+If you have an Asignio Signature, you're prompted to authenticate with your Asignio Signature. If not, supply the device phone number to authenticate via SMS OTP. Use the link to register your Asignio Signature.
+
+8. The browser is redirected to `https://jwt.ms`. The token contents returned by Azure AD B2C appear.
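+
+To inspect the returned token outside the browser, you can decode its payload locally. The following sketch uses only the Python standard library and assumes you paste the raw token shown at `https://jwt.ms` into the `token` variable; the claim names listed are the typical ones produced by the output claims in the relying party policy above and may differ in your policy.
+
+```python
+import base64
+import json
+
+token = "<paste the raw JWT returned to https://jwt.ms>"
+
+# A JWT is three base64url-encoded segments: header.payload.signature.
+payload_segment = token.split(".")[1]
+padded = payload_segment + "=" * (-len(payload_segment) % 4)  # restore base64 padding
+claims = json.loads(base64.urlsafe_b64decode(padded))
+
+# Typical claim names for the output claims defined in the relying party policy.
+for claim in ("name", "given_name", "family_name", "email", "sub", "tid"):
+    print(claim, "=>", claims.get(claim))
+```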
+
+## Next steps
+
+* [Solutions and Training for Azure Active Directory B2C](solution-articles.md)
+* Ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-ad-b2c)
+* [Azure AD B2C GitHub samples](https://github.com/azure-ad-b2c/samples)
+* YouTube: [Identity Azure AD B2C Series](https://www.youtube.com/playlist?list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0)
+* [Azure AD B2C custom policy overview](custom-policy-overview.md)
+* [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Phone Factor Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-factor-technical-profile.md
Last updated 01/11/2024+
The **CryptographicKeys** element is not used.
| setting.authenticationMode | No | The method to validate the phone number. Possible values: `sms`, `phone`, or `mixed` (default).|
| setting.autodial| No| Specify whether the technical profile should auto dial or auto send an SMS. Possible values: `true`, or `false` (default). Auto dial requires the `setting.authenticationMode` metadata be set to `sms`, or `phone`. The input claims collection must have a single phone number. |
| setting.autosubmit | No | Specifies whether the technical profile should auto submit the one-time password entry form. Possible values are `true` (default), or `false`. When auto-submit is turned off, the user needs to select a button to progress the journey. |
+| setting.enableCaptchaChallenge | No | Specifies whether a CAPTCHA challenge should be displayed in an MFA flow. Possible values: `true` or `false` (default). For this setting to work, the [CAPTCHA display control]() must be referenced in the display claims of the phone factor technical profile. The [CAPTCHA feature](add-captcha.md) is in **public preview**.|
### UI elements
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 01/11/2024 Last updated : 01/17/2024+
-#Customer intent: As a developer using Azure Active Directory B2C, I want to define a self-asserted technical profile with display claims and output claims, so that I can collect and validate user input and return the claims to the next orchestration step.
+#Customer intent: As a developer using Azure Active Directory B2C, I want to define a self-asserted technical profile with display, so that I can collect and validate user input.
In the display claims collection, you can include a reference to a [DisplayContr
The following example `TechnicalProfile` illustrates the use of display claims with display controls. * The first display claim makes a reference to the `emailVerificationControl` display control, which collects and verifies the email address.
-* The fifth display claim makes a reference to the `phoneVerificationControl` display control, which collects and verifies a phone number.
+* The second display claim makes a reference to the `captchaChallengeControl` display control, which generates and verifies CAPTCHA code.
+* The sixth display claim makes a reference to the `phoneVerificationControl` display control, which collects and verifies a phone number.
* The other display claims are ClaimTypes to be collected from the user. ```xml <TechnicalProfile Id="Id"> <DisplayClaims> <DisplayClaim DisplayControlReferenceId="emailVerificationControl" />
+ <DisplayClaim DisplayControlReferenceId="captchaChallengeControl" />
<DisplayClaim ClaimTypeReferenceId="displayName" Required="true" /> <DisplayClaim ClaimTypeReferenceId="givenName" Required="true" /> <DisplayClaim ClaimTypeReferenceId="surName" Required="true" />
You can also call a REST API technical profile with your business logic, overwri
| AllowGenerationOfClaimsWithNullValues| No| Allows generating a claim with a null value. For example, when a user doesn't select a checkbox.|
| ContentDefinitionReferenceId | Yes | The identifier of the [content definition](contentdefinitions.md) associated with this technical profile. |
| EnforceEmailVerification | No | For sign-up or profile edit, enforces email verification. Possible values: `true` (default), or `false`. |
-| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign-up with an account that already exists and keeps trying until the limit reached. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#retry-limit) of this metadata.|
+| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign up with an account that already exists and keeps trying until the limit is reached. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#retry-limit) of this metadata.|
| SignUpTarget <sup>1</sup>| No | The sign-up target exchange identifier. When the user clicks the sign-up button, Azure AD B2C executes the specified exchange identifier. | | setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-cancel-button) of this metadata.| | setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-continue-button) of this metadata. |
You can also call a REST API technical profile with your business logic, overwri
| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#input-verification-delay-time-in-milliseconds) of this metadata. |
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
|setting.forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
+| setting.enableCaptchaChallenge | No | Specifies whether a CAPTCHA challenge should be displayed. Possible values: `true` or `false` (default). For this setting to work, the [CAPTCHA display control]() must be referenced in the [display claims](#display-claims) of the self-asserted technical profile. The CAPTCHA feature is in **public preview**.|
Notes:
ai-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-support-options.md
Previously updated : 06/28/2022 Last updated : 02/22/2024
ai-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md
Previously updated : 12/18/2020 Last updated : 02/22/2024 # https://github.com/Azure/cognitiveservices-aci #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
ai-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md
Previously updated : 10/28/2021 Last updated : 02/22/2024 #Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.
ai-services Docker Compose Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/docker-compose-recipe.md
Previously updated : 10/29/2020 Last updated : 02/22/2024 # SME: Brendan Walsh
ai-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-terraform.md
description: 'In this article, you create an Azure AI services resource using Te
keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence Previously updated : 4/14/2023 Last updated : 2/23/2024 - devx-track-terraform - ignite-2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure AI services resource using Terraform
ai-services Assistants Reference Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-messages.md
A [message](#message-object) object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(thread_message)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "role": "user",
A list of [message](#message-object) objects.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(thread_messages.data)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
A list of [message file](#message-file-object) objects
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(message_files)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/files?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [message](#message-object) object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(message)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [message file](#message-file-object) object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(message_files)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files/{file_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The modified [message](#message-object) object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(message)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "metadata": {
ai-services Assistants Reference Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-runs.md
Create a run.
| `assistant_id` | string | Required | The ID of the assistant to use to execute this run. | | `model` | string or null | Optional | The model deployment name to be used to execute this run. If a value is provided here, it will override the model deployment name associated with the assistant. If not, the model deployment name associated with the assistant will be used. | | `instructions` | string or null | Optional | Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis. |
-| `additional_instructions` | string or null | Optional | Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions. |
| `tools` | array or null | Optional | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. | | `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
A run object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "assistant_id": "asst_abc123"
A run object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
run = client.beta.threads.create_and_run(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/runs?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "assistant_id": "asst_abc123",
A list of [run](#run-object) objects.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(runs)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
A list of [run step](#run-step-object) objects.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run_steps)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [run](#run-object) object matching the specified run ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [run step](#run-step-object) object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run_step)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps/{step_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The modified [run](#run-object) object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' -d '{ "metadata": {
The modified [run](#run-object) object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/submit_tool_outputs?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "tool_outputs": [
The modified [run](#run-object) object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(run)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/cancel?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -X POST ```
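As a quick illustration of the run operations above with the corrected `AZURE_OPENAI_API_KEY` variable, the following minimal sketch creates a run and polls it until it settles. It assumes the `openai` Python package (v1.x) plus existing thread and assistant IDs.

```python
import os
import time

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

thread_id = "thread_abc123"    # placeholder: an existing thread ID
assistant_id = "asst_abc123"   # placeholder: an existing assistant ID

# Start a run on the thread and poll until it leaves the queued/in_progress states.
run = client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread_id)

print(run.id, run.status)
```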
ai-services Assistants Reference Threads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-threads.md
A [thread object](#thread-object).
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(empty_thread)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '' ```
The thread object matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(my_thread)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-
## Modify thread ```http
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads{thread_id}?api-version=2024-02-15-preview
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
``` Modifies a thread.
The modified [thread object](#thread-object) matching the specified ID.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(my_updated_thread)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "metadata": {
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-
## Delete thread ```http
-DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads{thread_id}?api-version=2024-02-15-preview
+DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
``` Delete a thread
Deletion status.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(response)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -X DELETE ```
ai-services Assistants Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference.md
An [assistant](#assistant-object) object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
assistant = client.beta.assistants.create(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "instructions": "You are an AI assistant that can write code to help answer math questions.",
An [assistant file](#assistant-file-object) object.
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(assistant_file)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "file_id": "assistant-abc123"
A list of [assistant](#assistant-object) objects
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(my_assistants.data)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
A list of [assistant file](#assistant-file-object) objects
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(assistant_files)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}/files?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [assistant](#assistant-object) object matching the specified ID.
```python client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(my_assistant)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The [assistant file](#assistant-file-object) object matching the specified ID
```python client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(assistant_file)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}/files/{file-id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' ```
The modified [assistant object](#assistant-object).
```python client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(my_updated_assistant)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
Deletion status.
```python client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(response)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -X DELETE ```
File deletion status
```python client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(deleted_assistant_file)
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file-id}?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -X DELETE ```
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
import openai
openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
response = openai.Completion.create( engine="gpt-35-turbo", # engine = "deployment_name".
import openai
openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
try: response = openai.Completion.create(
except openai.error.InvalidRequestError as e:
import os from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-10-01-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
main().catch((err) => {
```powershell-interactive # Env: for the endpoint and key assumes that you are using environment variables. $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-10-01-preview' # this may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' #This will correspond to the custom name you chose for your deployment when you deployed a model.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
In testing, OpenAI reports both the large and small third generation embeddings
| MIRACL average | 31.4 | 44.0 | 54.9 | | MTEB average | 61.0 | 62.3 | 64.6 |
-The third generation embeddings models support reducing the size of the embedding via a new `dimensions` parameter. Typically larger embeddings are more expensive from a compute, memory, and storage perspective. Being able to adjust the number of dimensions allows more control over overall cost and performance. Official support for the dimensions parameter was added to the OpenAI Python library in version `1.10.0`. If you are running an earlier version of the 1.x library you will need to upgrade `pip install openai --upgrade`.
+The third generation embeddings models support reducing the size of the embedding via a new `dimensions` parameter. Typically, larger embeddings are more expensive from a compute, memory, and storage perspective. Being able to adjust the number of dimensions allows more control over overall cost and performance. The `dimensions` parameter isn't supported in all versions of the OpenAI 1.x Python library; to take advantage of this parameter, we recommend upgrading to the latest version: `pip install openai --upgrade`.
OpenAI's MTEB benchmark testing found that even when the third generation model's dimensions are reduced to fewer than the 1,536 dimensions of `text-embedding-ada-002`, performance remains slightly better.
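For readers who want to try the `dimensions` parameter, here's a minimal sketch with the 1.x Python client; the deployment name (`text-embedding-3-large`), the API version, and the choice of 256 dimensions are assumptions, not values from the article:

```python
import os
from openai import AzureOpenAI  # the dimensions parameter requires openai >= 1.10.0

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",  # assumed; use a version your resource supports
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Request a reduced 256-dimension embedding instead of the model's full size.
response = client.embeddings.create(
    model="text-embedding-3-large",  # your embedding deployment name (assumption)
    input="Sample text to embed",
    dimensions=256,
)
print(len(response.data[0].embedding))  # prints 256
```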
GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview prev
| gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South | | gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | | | gpt-4 (0125-preview) | East US <br> North Central US <br> South Central US <br> |
-| gpt-4 (vision-preview) | Sweden Central <br> West US <br> Japan East| Switzerland North <br> Australia East |
+| gpt-4 (vision-preview) | Sweden Central <br> West US <br> Japan East <br> Switzerland North <br> Australia East| |
#### Azure Government regions
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
| Utilization | Provisioned-managed Utilization measure provided in Azure Monitor. | | Estimating size | Provided calculator in the studio & benchmarking script. |
+## How do I get access to Provisioned?
+
+You need to speak with your Microsoft sales/account team to acquire provisioned throughput. If you don't have a sales/account team, you can't purchase provisioned throughput at this time.
+ ## Key concepts ### Provisioned throughput units
ai-services Assistant Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant-functions.md
To use all features of function calling including parallel functions, you need t
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
assistant = client.beta.assistants.create(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H "Content-Type: application/json" \ -d '{ "instructions": "You are a weather bot. Use the provided functions to answer questions.",
You can then complete the **Run** by submitting the tool output from the functio
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
run = client.beta.threads.runs.submit_tool_outputs(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs/run_123/submit_tool_outputs?api-version=2024-02-15-preview \ -H "Content-Type: application/json" \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-d '{ "tool_outputs": [{ "tool_call_id": "call_abc123",
ai-services Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant.md
import json
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
When annotations are present in the Message content array, you'll see illegible
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
ai-services Code Interpreter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/code-interpreter.md
We recommend using assistants with the latest models to take advantage of the ne
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
assistant = client.beta.assistants.create(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "instructions": "You are an AI assistant that can write code to help answer math questions.",
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
assistant = client.beta.assistants.create(
# Upload a file with an "assistants" purpose curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-F purpose="assistants" \ -F file="@c:\\path_to_file\\file.csv" # Create an assistant using the file ID curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "instructions": "You are an AI assistant that can write code to help answer math questions.",
In addition to making files accessible at the Assistants level you can pass file
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
thread = client.beta.threads.create(
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/<YOUR-THREAD-ID>/messages?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
-H 'Content-Type: application/json' \ -d '{ "role": "user",
You can download these generated files by passing the files to the files API:
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-02-15-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
with open("./my-image.png", "wb") as file:
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files/<YOUR-FILE-ID>/content?api-version=2024-02-15-preview \
- -H "api-key: $AZURE_OPENAI_KEY" \
+ -H "api-key: $AZURE_OPENAI_API_KEY" \
--output image.png ```
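And a sketch of the download step itself, assuming a hypothetical generated-file ID:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Retrieve the raw bytes of a file the assistant generated (for example a chart)
# and write them to disk.
image_data = client.files.content("assistant-abc123")  # hypothetical file ID
with open("./my-image.png", "wb") as file:
    file.write(image_data.read())
```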
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
import os
from openai import AzureOpenAI client = AzureOpenAI(
- api_key = os.getenv("AZURE_OPENAI_KEY"),
+ api_key = os.getenv("AZURE_OPENAI_API_KEY"),
api_version = "2023-05-15", azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") )
foreach (float item in returnValue.Value.Data[0].Embedding.ToArray())
```powershell-interactive # Azure OpenAI metadata variables $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-05-15' # this may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' #This will correspond to the custom name you chose for your deployment when you deployed a model.
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 02/06/2024 Last updated : 02/22/2024 zone_pivot_groups: openai-fine-tuning
Azure OpenAI Service lets you tailor our models to your personal datasets by usi
- Higher quality results than what you can get just from [prompt engineering](../concepts/prompt-engineering.md) - The ability to train on more examples than can fit into a model's max request context limit.
+- Token savings due to shorter prompts
- Lower-latency requests, particularly when using smaller models.
-A fine-tuned model improves on the few-shot learning approach by training the model's weights on your own data. A customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, potentially saving cost and improving request latency.
+In contrast to few-shot learning, fine-tuning improves the model by training on many more examples than can fit in a prompt, letting you achieve better results on a wide range of tasks. Because fine-tuning adjusts the base model's weights to improve performance on the specific task, you won't have to include as many examples or instructions in your prompt. This means less text sent and fewer tokens processed on every API call, potentially saving cost and improving request latency.
+
+We use LoRA, or low-rank adaptation, to fine-tune models in a way that reduces their complexity without significantly affecting their performance. This method works by approximating the original high-rank matrix with a lower-rank one, thus only fine-tuning a smaller subset of "important" parameters during the supervised training phase, making the model more manageable and efficient. For users, this makes training faster and more affordable than other techniques.
+ ::: zone pivot="programming-language-studio"
A fine-tuned model improves on the few-shot learning approach by training the mo
### How do I enable fine-tuning? Create a custom model is greyed out in Azure OpenAI Studio? In order to successfully access fine-tuning, you need the **Cognitive Services OpenAI Contributor** role assigned. Even someone with high-level Service Administrator permissions would still need this account explicitly set in order to access fine-tuning. For more information, please review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
-
+
+### Why did my upload fail?
+
+If your file upload fails, you can view the error message under "data files" in Azure OpenAI Studio. Hover your mouse over where it says "error" (under the status column) and an explanation of the failure will be displayed.
++
+### My fine-tuned model does not seem to have improved
+
+- **Missing system message:** You need to provide a system message when you fine-tune; you'll want to provide that same system message when you use the fine-tuned model, as sketched after this list. If you provide a different system message, you may see different results than what you fine-tuned for.
+
+- **Not enough data:** While 10 is the minimum for the pipeline to run, you need hundreds to thousands of data points to teach the model a new skill. Training on too few data points risks overfitting and poor generalization. Your fine-tuned model may perform well on the training data, but poorly on other data because it has memorized the training examples instead of learning patterns. For best results, plan to prepare a data set with hundreds or thousands of data points.
+
+- **Bad data:** A poorly curated or unrepresentative dataset produces a low-quality model. Your model may learn inaccurate or biased patterns from your dataset. For example, if you're training a chatbot for customer service but only provide training data for one scenario (for example, item returns), it won't know how to respond to other scenarios. Or, if your training data is bad (contains incorrect responses), your model will learn to provide incorrect results.
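To make the system-message guidance concrete, here's a minimal inference sketch against a fine-tuned deployment; the deployment name `gpt-35-turbo-ft` follows the tutorial's convention, and the system and user messages are assumptions standing in for whatever you trained with:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Use the SAME system message you fine-tuned with; a different one can change
# the behavior you trained for.
response = client.chat.completions.create(
    model="gpt-35-turbo-ft",  # custom deployment name of your fine-tuned model
    messages=[
        {"role": "system", "content": "You are a helpful customer-service assistant."},  # assumed training system message
        {"role": "user", "content": "How do I return an item?"},
    ],
)
print(response.choices[0].message.content)
```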
++ ## Next steps - Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md).
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
import json
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview" )
When functions are provided, by default the `function_call` is set to `"auto"` a
import os import openai
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_version = "2023-07-01-preview" openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
import os
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-10-01-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
print(response.choices[0].message.model_dump_json(indent=2))
```powershell-interactive $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # should look like https:/YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-10-01-preview' # may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' # the custom name you chose for your deployment
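For context on the `function_call` behavior discussed above, a minimal sketch with a single hypothetical function definition; the deployment name is an assumption:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-12-01-preview",
)

# A single hypothetical function the model can choose to call.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, for example Seattle"}
            },
            "required": ["location"],
        },
    }
]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your chat deployment name (assumption)
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    functions=functions,
    function_call="auto",  # the default whenever functions are provided
)
# If the model decided to call the function, the arguments are returned here.
print(response.choices[0].message.function_call)
```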
ai-services Json Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/json-mode.md
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview" )
because they plan to use the output for further scripting.
```powershell-interactive $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-12-01-preview' # may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment
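A minimal JSON-mode sketch to go with the fragments above; the deployment name is an assumption, and note that the word "JSON" must appear in the messages for the request to be accepted:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="gpt-4-1106",  # a 1106-or-later deployment name (assumption)
    response_format={"type": "json_object"},  # enables JSON mode
    messages=[
        # JSON mode requires an instruction mentioning JSON somewhere in the messages.
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Who won the 2020 World Series?"},
    ],
)
print(response.choices[0].message.content)
```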
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
import os
import openai openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_version = "2023-05-15" response = openai.ChatCompletion.create(
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-05-15" )
Additional examples can be found in our [in-depth Chat Completion article](chatg
import os import openai
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ openai.api_type = 'azure' openai.api_version = '2023-05-15' # this might change in the future
import os
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
import os
from openai import AzureOpenAI client = AzureOpenAI(
- api_key = os.getenv("AZURE_OPENAI_KEY"),
+ api_key = os.getenv("AZURE_OPENAI_API_KEY"),
api_version = "2023-05-15", azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") )
from openai import AsyncAzureOpenAI
async def main(): client = AsyncAzureOpenAI(
- api_key = os.getenv("AZURE_OPENAI_KEY"),
+ api_key = os.getenv("AZURE_OPENAI_API_KEY"),
api_version = "2023-12-01-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
The inferencing code for provisioned deployments is the same as a standard deployme
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-05-15" )
from openai import AzureOpenAI
# Configure the default for all requests: client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-05-15", max_retries=5,# default is 2 )
ai-services Reproducible Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/reproducible-output.md
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview" )
for i in range(3):
```powershell-interactive $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-12-01-preview' # may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview" )
for i in range(3):
```powershell-interactive $openai = @{
- api_key = $Env:AZURE_OPENAI_KEY
+ api_key = $Env:AZURE_OPENAI_API_KEY
api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ api_version = '2023-12-01-preview' # may change in the future name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment
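For context, a minimal reproducible-output sketch using the `seed` parameter; the deployment name is an assumption, and determinism is best effort, so compare `system_fingerprint` across responses:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-12-01-preview",
)

for i in range(3):
    response = client.chat.completions.create(
        model="gpt-35-turbo-1106",  # your deployment name (assumption)
        seed=42,                    # same seed + same parameters => mostly consistent output
        temperature=0.7,
        max_tokens=50,
        messages=[{"role": "user", "content": "Tell me a short story about a lighthouse."}],
    )
    print(response.system_fingerprint, response.choices[0].message.content)
```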
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
import os
from openai import AzureOpenAI client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview", azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT") )
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
| Max number of `/chat completions` tools | 128 | | Maximum number of Provisioned throughput units per deployment | 100,000 | | Max files per Assistant/thread | 20 |
-| Max file size for Assistants | 512 MB |
+| Max file size for Assistants & fine-tuning | 512 MB |
| Assistants token limit | 2,000,000 token limit | ## Regional quota limits The default quota for models varies by model and region. Default quota limits are subject to change.
-<table>
- <tr>
- <th>Model</th>
- <th>Regions</th>
- <th>Tokens per minute</th>
- </tr>
- <tr>
- <td rowspan="2">gpt-35-turbo</td>
- <td>East US, South Central US, West Europe, France Central, UK South</td>
- <td>240 K</td>
- </tr>
- <tr>
- <td>North Central US, Australia East, East US 2, Canada East, Japan East, Sweden Central, Switzerland North</td>
- <td>300 K</td>
- </tr>
- <tr>
- <td rowspan="2">gpt-35-turbo-16k</td>
- <td>East US, South Central US, West Europe, France Central, UK South</td>
- <td>240 K</td>
- </tr>
- <tr>
- <td>North Central US, Australia East, East US 2, Canada East, Japan East, Sweden Central, Switzerland North</td>
- <td>300 K</td>
- </tr>
- <tr>
- <td>gpt-35-turbo-instruct</td>
- <td>East US, Sweden Central</td>
- <td>240 K</td>
- </tr>
- <tr>
- <td>gpt-35-turbo (1106)</td>
- <td> Australia East, Canada East, France Central, South India, Sweden Central, UK South, West US
-</td>
- <td>120 K</td>
- </tr>
- <tr>
- <td rowspan="2">gpt-4</td>
- <td>East US, South Central US, France Central</td>
- <td>20 K</td>
- </tr>
- <tr>
- <td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Sweden Central, Switzerland North</td>
- <td>40 K</td>
- </tr>
- <tr>
- <td rowspan="2">gpt-4-32k</td>
- <td>East US, South Central US, France Central</td>
- <td>60 K</td>
- </tr>
- <tr>
- <td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Sweden Central, Switzerland North</td>
- <td>80 K</td>
- </tr>
- <tr>
- <td rowspan="2">gpt-4 (1106-preview)<br>GPT-4 Turbo </td>
- <td>Australia East, Canada East, East US 2, France Central, UK South, West US</td>
- <td>80 K</td>
- </tr>
- <tr>
- <td>South India, Norway East, Sweden Central</td>
- <td>150 K</td>
- </tr>
-<tr>
- <td>gpt-4 (vision-preview)<br>GPT-4 Turbo with Vision</td>
- <td>Sweden Central, Switzerland North, Australia East, West US</td>
- <td>30 K</td>
- </tr>
- <tr>
- <td rowspan="2">text-embedding-ada-002</td>
- <td>East US, South Central US, West Europe, France Central</td>
- <td>240 K</td>
- </tr>
- <tr>
- <td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Switzerland North</td>
- <td>350 K</td>
- </tr>
-<tr>
- <td>Fine-tuning models (babbage-002, davinci-002, gpt-35-turbo-0613)</td>
- <td>North Central US, Sweden Central</td>
- <td>50 K</td>
- </tr>
- <tr>
- <td>all other models</td>
- <td>East US, South Central US, West Europe, France Central</td>
- <td>120 K</td>
- </tr>
-</table>
+
+| Region | Text-Embedding-Ada-002 | text-embedding-3-small | text-embedding-3-large | GPT-35-Turbo | GPT-35-Turbo-1106 | GPT-35-Turbo-16K | GPT-35-Turbo-Instruct | GPT-4 | GPT-4-32K | GPT-4-Turbo | GPT-4-Turbo-V | Babbage-002 | Babbage-002 - finetune | Davinci-002 | Davinci-002 - finetune | GPT-35-Turbo - finetune | GPT-35-Turbo-1106 - finetune |
+|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
+| australiaeast | 350 K | - | - | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
+| brazilsouth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| canadaeast | 350 K | 350 K | 350 K | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
+| eastus | 240 K | 350 K | 350 K | 240 K | - | 240 K | 240 K | - | - | 80 K | - | - | - | - | - | - | - |
+| eastus2 | 350 K | 350 K | 350 K | 300 K | - | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
+| francecentral | 240 K | - | - | 240 K | 120 K | 240 K | - | 20 K | 60 K | 80 K | - | - | - | - | - | - | - |
+| japaneast | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - |
+| northcentralus | 350 K | - | - | 300 K | - | 300 K | - | - | - | 80 K | - | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K |
+| norwayeast | 350 K | - | - | - | - | - | - | - | - | 150 K | - | - | - | - | - | - | - |
+| southafricanorth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| southcentralus | 240 K | - | - | 240 K | - | - | - | - | - | 80 K | - | - | - | - | - | - | - |
+| southindia | 350 K | - | - | - | 120 K | - | - | - | - | 150 K | - | - | - | - | - | - | - |
+| swedencentral | 350 K | - | - | 300 K | 120 K | 300 K | 240 K | 40 K | 80 K | 150 K | 30 K | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K |
+| switzerlandnorth | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - |
+| uksouth | 350 K | - | - | 240 K | 120 K | 240 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
+| westeurope | 240 K | - | - | 240 K | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| westus | 350 K | - | - | - | 120 K | - | - | - | - | 80 K | 30 K | - | - | - | - | - | - |
### General best practices to remain within rate limits
ai-services Text To Speech Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md
To successfully make a call against Azure OpenAI, you need an **endpoint** and a
|Variable name | Value | |--|-| | `AZURE_OPENAI_ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://aoai-docs.openai.azure.com/`.|
-| `AZURE_OPENAI_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
+| `AZURE_OPENAI_API_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
Create and assign persistent environment variables for your key and endpoint.
# [Command Line](#tab/command-line) ```CMD
-setx AZURE_OPENAI_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
``` ```CMD
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
# [PowerShell](#tab/powershell) ```powershell
-[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
``` ```powershell
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
# [Bash](#tab/bash) ```Bash
-echo export AZURE_OPENAI_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment
``` ```Bash
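As a quick check that the renamed `AZURE_OPENAI_API_KEY` variable is picked up, here's a minimal sketch that builds the client from the variables set above and requests speech; the deployment name `tts` and the voice are assumptions:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Generate speech from text and save it as an MP3 file.
result = client.audio.speech.create(
    model="tts",    # your text-to-speech deployment name (assumption)
    voice="alloy",
    input="Today is a wonderful day to build something people love!",
)
with open("speech.mp3", "wb") as f:
    f.write(result.read())
```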
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-12-01-preview" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002 )
job_id = response.id
# The fine-tuning job will take some time to start and complete. print("Job ID:", response.id)
-print("Status:", response.id)
+print("Status:", response.status)
print(response.model_dump_json(indent=2)) ```
import openai
openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_version = "2023-05-15"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
response = openai.ChatCompletion.create( engine="gpt-35-turbo-ft", # engine = "Custom deployment name you chose for your fine-tuning model"
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2023-05-15" )
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
To successfully make a call against Azure OpenAI, you'll need an **endpoint** an
|Variable name | Value | |--|-| | `AZURE_OPENAI_ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://aoai-docs.openai.azure.com/`.|
-| `AZURE_OPENAI_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
+| `AZURE_OPENAI_API_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
Create and assign persistent environment variables for your key and endpoint.
# [Command Line](#tab/command-line) ```CMD
-setx AZURE_OPENAI_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
``` ```CMD
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
# [PowerShell](#tab/powershell) ```powershell
-[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
``` ```powershell
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
# [Bash](#tab/bash) ```Bash
-echo export AZURE_OPENAI_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment
``` ```Bash
ai-studio Ai Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md
While projects show up as their own tracking resources in the Azure portal, they
Azure AI offers a set of connectors that allows you to connect to different types of data sources and other Azure tools. You can take advantage of connectors to connect with data such as indices in Azure AI Search to augment your flows.
-Connections can be set up as shared with all projects in the same Azure AI hub resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **Settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level to have a single pane of glass of connectivity across projects.
+Connections can be set up as shared with all projects in the same Azure AI hub resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **AI project settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level to have a single pane of glass of connectivity across projects.
## Azure AI dependencies
In the Azure portal, you can find resources that correspond to your Azure AI pro
> [!NOTE] > This section assumes that the Azure AI hub resource and Azure AI project are in the same resource group.
-1. In [Azure AI Studio](https://ai.azure.com), go to **Build** > **Settings** to view your Azure AI project resources such as connections and API keys. There's a link to your Azure AI hub resource in Azure AI Studio and links to view the corresponding project resources in the [Azure portal](https://portal.azure.com).
+1. In [Azure AI Studio](https://ai.azure.com), go to **Build** > **AI project settings** to view your Azure AI project resources such as connections and API keys. There's a link to your Azure AI hub resource in Azure AI Studio and links to view the corresponding project resources in the [Azure portal](https://portal.azure.com).
:::image type="content" source="../media/concepts/azureai-project-view-ai-studio.png" alt-text="Screenshot of the Azure AI project and related resources in the Azure AI Studio." lightbox="../media/concepts/azureai-project-view-ai-studio.png":::
ai-studio Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md
Title: Connections in Azure AI Studio
-description: This article introduces connections in Azure AI Studio
+description: This article introduces connections in Azure AI Studio.
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
-Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource.
+Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and non-Microsoft resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource.
## Connections to Azure AI services
As another example, you can create a connection to an Azure AI Search resource.
:::image type="content" source="../media/prompt-flow/vector-db-lookup-tool-connection.png" alt-text="Screenshot of a connection used by the Vector DB Lookup tool in prompt flow." lightbox="../media/prompt-flow/vector-db-lookup-tool-connection.png":::
-## Connections to third-party services
+## Connections to non-Microsoft services
-Azure AI Studio supports connections to third-party services, including the following:
-- The [API key connection](../how-to/connections-add.md?tabs=api-key#connection-details) handles authentication to your specified target on an individual basis. This is the most common third-party connection type.-- The [custom connection](../how-to/connections-add.md?tabs=custom#connection-details) allows you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets that or cases where you would not need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you will have to manage authenticate on your own.
+Azure AI Studio supports connections to non-Microsoft services, including the following:
+- The [API key connection](../how-to/connections-add.md?tabs=api-key#connection-details) handles authentication to your specified target on an individual basis. This is the most common non-Microsoft connection type.
+- The [custom connection](../how-to/connections-add.md?tabs=custom#connection-details) allows you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets, or in cases where you wouldn't need a credential to access the resource. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you'll have to manage authentication on your own.
## Connections to datastores
Azure Blob Container| ✓ | ✓|
Microsoft OneLake| ✓ | ✓| Azure Data Lake Gen2| ✓ | ✓|
-A Uniform Resource Identifier (URI) represents a storage location on your local computer, Azure storage, or a publicly available http(s) location. These examples show URIs for different storage options:
+A Uniform Resource Identifier (URI) represents a storage location on your local computer, Azure storage, or a publicly available http or https location. These examples show URIs for different storage options:
-|Storage location | URI examples |
-|||
-|Azure AI Studio connection | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
-|Local files | `./home/username/data/my_data` |
-|Public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
-|Blob storage | `wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/`|
-|Azure Data Lake (gen2) | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` |
-|Microsoft OneLake | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` `https://<accountname>.dfs.fabric.microsoft.com/<artifactname>` |
+| Storage location | URI examples |
+||--|
+| Azure AI Studio connection | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
+| Local files | `./home/username/data/my_data` |
+| Public http or https server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+| Blob storage | `wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/` |
+| Azure Data Lake (gen2) | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` |
+| Microsoft OneLake | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` `https://<accountname>.dfs.fabric.microsoft.com/<artifactname>` |
## Key vaults and secrets Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level (link to connection rbac).
-Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the Azure AI hub resource's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak.
+Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they're stored in the Azure AI hub resource's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak.
## Next steps
ai-studio Content Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/content-filtering.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
The content filtering models have been trained and tested on the following langu
You can create a content filter or use the default content filter for Azure OpenAI model deployment, and can also use a default content filter for other text models curated by Azure AI in the [model catalog](../how-to/model-catalog.md). The custom content filters for those models aren't yet available. Models available through Models as a Service have content filtering enabled by default and can't be configured. ## How to create a content filter?
-For any model deployment in Azure AI Studio, you could directly use the default content filter, but when you want to have more customized setting on content filter, for example set a stricter or looser filter, or enable more advanced capabilities, like jailbreak risk detection and protected material detection. To create a content filter, you could go to **Build**, choose one of your projects, then select **Content filters** in the left navigation bar, and create a content filter.
+For any model deployment in [Azure AI Studio](https://ai.azure.com), you can use the default content filter directly. Create a custom content filter when you want more tailored settings, for example a stricter or looser filter, or when you want to enable more advanced capabilities such as jailbreak risk detection and protected material detection. To create a content filter, go to **Build**, choose one of your projects, then select **Content filters** in the left navigation bar, and create a content filter.
:::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of create content filter." lightbox="../media/content-safety/content-filter/create-content-filter.png":::
The default content filtering configuration is set to filter at the medium sever
<sup>1</sup> For Azure OpenAI models, only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning off content filters. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
-### More filters for Gen-AI scenarios
-You could also enable filters for Gen-AI scenarios: jailbreak risk detection and protected material detection.
+### More filters for generative AI scenarios
+You could also enable filters for generative AI scenarios: jailbreak risk detection and protected material detection.
:::image type="content" source="../media/content-safety/content-filter/additional-models.png" alt-text="Screenshot of additional models." lightbox="../media/content-safety/content-filter/additional-models.png":::
ai-studio Evaluation Approach Gen Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-approach-gen-ai.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Evaluation Improvement Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/retrieval-augmented-generation.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/vulnerability-management.md
+
+ Title: Vulnerability management
+
+description: Learn how Azure AI Studio manages vulnerabilities in images that the service provides, and how you can get the latest security updates for the components that you manage.
++++ Last updated : 02/22/2024++++
+# Vulnerability management for Azure AI Studio
++
+Vulnerability management involves detecting, assessing, mitigating, and reporting on any security vulnerabilities that exist in an organization's systems and software. Vulnerability management is a shared responsibility between you and Microsoft.
+
+This article discusses these responsibilities and outlines the vulnerability management controls that Azure AI Studio provides. You learn how to keep your service instance and applications up to date with the latest security updates, and how to minimize the window of opportunity for attackers.
+
+## Microsoft-managed VM images
+
+Azure AI Studio manages host OS virtual machine (VM) images for compute instances and serverless compute clusters. The update frequency is monthly and includes the following details:
+
+* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates helps ensure that you get all applicable OS-related patches. For Azure AI Studio, the publisher is Canonical for all the Ubuntu images.
+
+* VM images are updated monthly.
+
+* In addition to patches that the original publisher applies, Azure AI Studio updates system packages when updates are available.
+
+* Azure AI Studio checks and validates any machine learning packages that might require an upgrade. In most circumstances, new VM images contain the latest package versions.
+
+* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Azure AI Studio flags any unaddressed vulnerabilities and fixes them within the next release.
+
+* The frequency is a monthly interval for most images. For compute instances, the image release is aligned with the release cadence of the Azure AI Studio SDK that's preinstalled in the environment.
+
+In addition to the regular release cadence, Azure AI Studio applies hotfixes if vulnerabilities surface. Microsoft rolls out hotfixes within 72 hours for serverless compute clusters and within a week for compute instances.
+
+> [!NOTE]
+> The host OS is not the OS version that you might specify for an environment when you're training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
+
+## Microsoft-managed container images
+
+[Base docker images](https://github.com/Azure/AzureML-Containers) that Azure AI Studio maintains get security patches frequently to address newly discovered vulnerabilities.
+
+Azure AI Studio releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
+
+Patched images are released under a new immutable tag and an updated `:latest` tag. Using the `:latest` tag or pinning to a particular image version might be a tradeoff between security and environment reproducibility for your machine learning job.
+
+## Managing environments and container images
+
+In Azure AI Studio, Docker images are used to provide a runtime environment for [prompt flow deployments](../how-to/flow-deploy.md). The images are built from a base image that Azure AI Studio provides.
+
+Although Azure AI Studio patches base images with each release, whether you use the latest image might be a tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
+
+By default, dependencies are layered on top of base images when you're building an image. After you install more dependencies on top of the Microsoft-provided images, vulnerability management becomes your responsibility.
+
+Associated with your AI hub resource is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it when deployment is triggered for the corresponding environment.
+
+The AI hub doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](/azure/defender-for-cloud/defender-for-container-registries-usage) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](/azure/defender-for-cloud/workflow-automation).
++
+## Vulnerability management on compute hosts
+
+Managed compute nodes in Azure AI Studio use Microsoft-managed OS VM images. When you provision a node, it pulls the latest updated VM image. This behavior applies to compute instance, serverless compute cluster, and managed inference compute options.
+
+Although OS VM images are regularly patched, Azure AI Studio doesn't actively scan compute nodes for vulnerabilities while they're in use. For an extra layer of protection, consider network isolation of your computes.
+
+Ensuring that your environment is up to date and that compute nodes use the latest OS version is a shared responsibility between you and Microsoft. Nodes that aren't idle can't be updated to the latest VM image. Considerations are slightly different for each compute type, as listed in the following sections.
+
+### Compute instance
+
+Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. After you deploy a compute instance, it isn't actively updated. To keep current with the latest software updates and security patches, you can use one of these methods:
+
+* Re-create a compute instance to get the latest OS image (recommended).
+
+ If you use this method, you'll lose data and customizations (such as installed packages) that are stored on the instance's OS and temporary disks.
+
+ For more information about image releases, see the [Azure Machine Learning compute instance image release notes](/azure/machine-learning/azure-machine-learning-ci-image-release-notes).
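+
+ A hedged sketch of re-creating an instance from the command line, under the assumption that the Azure ML CLI (`ml` extension) can target the workspace behind your project; all names and the VM size are placeholders:
+
+ ```azurecli
+ # Delete the existing compute instance (you're prompted to confirm).
+ az ml compute delete --name my-ci --resource-group my-rg --workspace-name my-workspace
+
+ # Re-create it with the same name; the new instance provisions from the latest VM image.
+ az ml compute create --name my-ci --type ComputeInstance --size Standard_DS11_v2 --resource-group my-rg --workspace-name my-workspace
+ ```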
+
+* Regularly update OS and Python packages.
+
+ * Use Linux package management tools to update the package list with the latest versions:
+
+ ```bash
+ sudo apt-get update
+ ```
+
+ * Use Linux package management tools to upgrade packages to the latest versions. Package conflicts might occur when you use this approach.
+
+ ```bash
+ sudo apt-get upgrade
+ ```
+
+ * Use Python package management tools to check for outdated packages; you can then upgrade them as shown below:
+
+ ```bash
+ pip list --outdated
+ ```
+
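+A follow-up sketch for upgrading a specific package once it appears in the outdated list; `<package-name>` is a placeholder:
+
+```bash
+# Upgrade one package at a time to limit dependency conflicts.
+pip install --upgrade <package-name>
+```
+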
+You can install and run additional scanning software on the compute instance to scan for security issues; sample invocations follow this list:
+
+* Use [Trivy](https://github.com/aquasecurity/trivy) to discover OS and Python package-level vulnerabilities.
+* Use [ClamAV](https://www.clamav.net/) to discover malware. It comes preinstalled on compute instances.
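+
+A possible set of invocations, assuming both tools are on the instance's `PATH`; the scanned path and severity filter are illustrative:
+
+```bash
+# Scan the filesystem for known OS and Python package vulnerabilities with Trivy.
+trivy fs --severity HIGH,CRITICAL /home/azureuser
+
+# Run an on-demand ClamAV scan; -r recurses into directories, -i lists only infected files.
+clamscan -r -i /home/azureuser
+```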
+
+Microsoft Defender for Servers agent installation is currently not supported.
+
+### Endpoints
+
+Endpoints automatically receive OS host image updates that include vulnerability fixes. Images are updated at least once a month.
+
+Compute nodes are automatically upgraded to the latest VM image version when that version is released. You don't need to take any action.
+
+## Next steps
+
+* [Azure AI hub resources](ai-resources.md)
+* [Create and manage compute instances](../how-to/create-manage-compute.md)
ai-studio Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/autoscale.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Cli Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
Here's a table of the available connection types in Azure AI Studio with descrip
## Create a new connection 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**. If you don't have a project already, first create a project.
-1. Select **Settings** from the collapsible left menu.
+1. Select **AI project settings** from the collapsible left menu.
1. Select **View all** from the **Connections** section. 1. Select **+ Connection** under **Resource connections**. 1. Select the service you want to connect to from the list of available external resources.
ai-studio Costs Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md
For the examples in this section, assume that all Azure AI Studio resources are
Here's an example of how to monitor costs for an Azure AI Studio project. The costs are used as an example only. Your costs will vary depending on the services that you use and the amount of usage. 1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Select your project and then select **Settings** from the left navigation menu.
+1. Select your project and then select **AI project settings** from the left navigation menu.
:::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Studio portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
In this article, you learn how to create a compute instance in Azure AI Studio.
You need a compute instance to: - Use prompt flow in Azure AI Studio. - Create an index-- Open Visual Studio Code (Web) in the Azure AI Studio.
+- Open Visual Studio Code (Web or Desktop) in Azure AI Studio.
You can use the same compute instance for multiple scenarios and workflows. Note that a compute instance can't be shared. It can only be used by a single assigned user. By default, it will be assigned to the creator and you can change this to a different user in the security step.
You can start or stop a compute instance from the Azure AI Studio.
## Next steps - [Create and manage prompt flow runtimes](./create-manage-runtime.md)
+- [Vulnerability management](../concepts/vulnerability-management.md)
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
Automatic is the default option for a runtime. You can start an automatic runtim
1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project from the **Build** page. If you don't have a project, create one.
-1. On the collapsible left menu, select **Settings**.
+1. On the collapsible left menu, select **AI project settings**.
1. In the **Compute instances** section, select **View all**.
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
Projects are hosted by an Azure AI hub resource that provides enterprise-grade s
## Project details
-In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI hub resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API.
+In the project details page (select **Build** > **AI project settings**), you can find information about the project, such as the project name, description, and the Azure AI hub resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API.
- Name: The name of the project corresponds to the selected project in the left panel. - AI hub: The Azure AI hub resource that hosts the project.
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
The supported source paths are shown in Azure AI Studio. You can create a data f
# [Python SDK](#tab/python)
-If you're using SDK or CLI to create data, you must specify a `path` that points to the data location. Supported paths include:
+If you're using the SDK or CLI to create data, you must specify a `path` that points to the data location. Supported paths include:
|Location | Examples | |||
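For the CLI case, here's a hedged sketch of one possible invocation, assuming the Azure ML CLI v2 (`ml` extension); the data asset name, path, and workspace details are placeholders:

```azurecli
# Create a data asset from a local folder; the path can also be a datastore or storage URI.
az ml data create --name my-data --version 1 --type uri_folder --path ./my-local-data-folder --resource-group my-rg --workspace-name my-workspace
```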
ai-studio Evaluate Flow Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-flow-results.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
-The Azure AI Studio's evaluation page is a versatile hub that not only allows you to visualize and assess your results but also serves as a control center for optimizing, troubleshooting, and selecting the ideal AI model for your deployment needs. It's a one-stop solution for data-driven decision-making and performance enhancement in your AI projects. You can seamlessly access and interpret the results from various sources, including your flow, the playground quick test session, evaluation submission UI, generative SDK and CLI. This flexibility ensures that you can interact with your results in a way that best suits your workflow and preferences.
+The Azure AI Studio evaluation page is a versatile hub that not only allows you to visualize and assess your results but also serves as a control center for optimizing, troubleshooting, and selecting the ideal AI model for your deployment needs. It's a one-stop solution for data-driven decision-making and performance enhancement in your AI projects. You can seamlessly access and interpret the results from various sources, including your flow, the playground quick test session, evaluation submission UI, generative SDK and CLI. This flexibility ensures that you can interact with your results in a way that best suits your workflow and preferences.
Once you've visualized your evaluation results, you can dive into a thorough examination. This includes the ability to not only view individual results but also to compare these results across multiple evaluation runs. By doing so, you can identify trends, patterns, and discrepancies, gaining invaluable insights into the performance of your AI system under various conditions. In this article you learn to: -- View the evaluation result and metrics -- Compare the evaluation results -- Understand the built-in evaluation metrics -- Improve the performance -- View the evaluation results and metrics
+- View the evaluation result and metrics.
+- Compare the evaluation results.
+- Understand the built-in evaluation metrics.
+- Improve the performance.
+- View the evaluation results and metrics.
## Find your evaluation results
-Upon submitting your evaluation, you can locate the submitted evaluation run within the run list by navigating to the 'Evaluation' tab.
+Upon submitting your evaluation, you can locate the submitted evaluation run within the run list by navigating to the **Evaluation** page.
-You can oversee your evaluation runs within the run list. With the flexibility to modify the columns using the column editor and implement filters, you can customize and create your own version of the run list. Additionally, you have the ability to swiftly review the aggregated evaluation metrics across the runs, enabling you to perform quick comparisons.
+You can monitor and manage your evaluation runs within the run list. With the flexibility to modify the columns using the column editor and implement filters, you can customize and create your own version of the run list. Additionally, you can quickly review the aggregated evaluation metrics across runs for easy comparison.
:::image type="content" source="../media/evaluations/view-results/evaluation-run-list.png" alt-text="Screenshot of the evaluation run list." lightbox="../media/evaluations/view-results/evaluation-run-list.png":::
ai-studio Evaluate Generative Ai App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-generative-ai-app.md
Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Evaluate Prompts Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-prompts-playground.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Fine Tune Model Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md
Verify the subscription is registered to the `Microsoft.Network` resource provid
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Subscriptions** from the left menu. 1. Select the subscription you want to use.
-1. Select **Settings** > **Resource providers** from the left menu.
+1. Select **AI project settings** > **Resource providers** from the left menu.
1. Confirm that **Microsoft.Network** is in the list of resource providers. Otherwise add it. :::image type="content" source="../media/how-to/fine-tune/llama/subscription-resource-providers.png" alt-text="Screenshot of subscription resource providers in Azure portal." lightbox="../media/how-to/fine-tune/llama/subscription-resource-providers.png":::
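If you prefer scripting over the portal steps above, the same registration can be done with the Azure CLI:

```azurecli
# Register the Microsoft.Network resource provider on the selected subscription.
az provider register --namespace Microsoft.Network

# Check the registration state until it reports Registered.
az provider show --namespace Microsoft.Network --query registrationState --output tsv
```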
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
This can happen if you are trying to create an index using an **Owner**, **Contr
If the Azure AI hub resource the project uses was created through Azure AI Studio: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**.
-1. Select **Settings** from the collapsible left menu.
+1. Select **AI project settings** from the collapsible left menu.
1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. 1. In the Azure portal under **Overview** > **Resources** select the Azure AI service type. It's named similar to "YourAzureAIResourceName-aiservices."
If the Azure AI hub resource the project uses was created through Azure AI Studi
If the Azure AI hub resource the project uses was created through Azure portal: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**.
-1. Select **Settings** from the collapsible left menu.
+1. Select **AI project settings** from the collapsible left menu.
1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. 1. Select **Access control (IAM)** > **+ Add** to add a role assignment. 1. Add the **Cognitive Services OpenAI User** role to the user who wants to make an index. `Cognitive Services OpenAI Contributor` and `Cognitive Services Contributor` also work, but they assign more permissions than needed for creating an index in Azure AI Studio.
ai-studio Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
ai-studio Monitor Quality Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/monitor-quality-safety.md
Follow these steps to set up monitoring for your prompt flow deployment:
:::image type="content" source="../media/deploy-monitor/monitor/monitor-metrics.png" alt-text="Screenshot of the monitoring result metrics." lightbox = "../media/deploy-monitor/monitor/monitor-metrics.png":::
-By default, operational metrics such as requests per minute and request latency show up. The default safety and quality monitoring signal are configured with a 10% sample rate and run on your default workspace Azure Open AI connection.
+By default, operational metrics such as requests per minute and request latency show up. The default safety and quality monitoring signals are configured with a 10% sample rate and run on your default workspace Azure OpenAI connection.
Your monitor is created with default settings: - 10% sample rate
ai-studio Content Safety Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
Azure AI Content Safety is a content moderation service that helps detect harmfu
Create an Azure Content Safety connection: 1. Sign in to [Azure AI Studio](https://studio.azureml.net/).
-1. Go to **Settings** > **Connections**.
+1. Go to **AI project settings** > **Connections**.
1. Select **+ New connection**. 1. Complete all steps in the **Create a new connection** dialog box. You can use an Azure AI hub resource or Azure AI Content Safety resource. An Azure AI hub resource that supports multiple Azure AI services is recommended. ## Build with the Content Safety tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Content Safety (Text)** to add the Content Safety tool to your flow. :::image type="content" source="../../media/prompt-flow/content-safety-tool.png" alt-text="Screenshot of the Content Safety tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/content-safety-tool.png":::
You can use the following parameters as inputs for this tool:
| action_by_category | string | A binary value for each category: *Accept* or *Reject*. This value shows if the text meets the sensitivity level that you set in the request parameters for that category. | | suggested_action | string | An overall recommendation based on the four categories. If any category has a *Reject* value, the `suggested_action` is *Reject* as well. | -- ## Next steps - [Learn more about how to create a flow](../flow-develop.md)
ai-studio Embedding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/embedding-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
The prompt flow *Embedding* tool enables you to convert text into dense vector r
## Build with the Embedding tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Embedding** to add the Embedding tool to your flow. :::image type="content" source="../../media/prompt-flow/embedding-tool.png" alt-text="Screenshot of the Embedding tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/embedding-tool.png":::
ai-studio Faiss Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/faiss-index-lookup-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
The prompt flow *Faiss Index Lookup* tool is tailored for querying within a user
## Build with the Faiss Index Lookup tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Faiss Index Lookup** to add the Faiss Index Lookup tool to your flow. :::image type="content" source="../../media/prompt-flow/faiss-index-lookup-tool.png" alt-text="Screenshot of the Faiss Index Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/faiss-index-lookup-tool.png":::
ai-studio Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/index-lookup-tool.md
The prompt flow *Index Lookup* tool enables the usage of common vector indices (
## Build with the Index Lookup tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Index Lookup** to add the Index Lookup tool to your flow. :::image type="content" source="../../media/prompt-flow/configure-index-lookup-tool.png" alt-text="Screenshot of the Index Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/configure-index-lookup-tool.png":::
ai-studio Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
Prepare a prompt as described in the [prompt tool](prompt-tool.md#prerequisites)
## Build with the LLM tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ LLM** to add the LLM tool to your flow. :::image type="content" source="../../media/prompt-flow/llm-tool.png" alt-text="Screenshot of the LLM tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/llm-tool.png":::
ai-studio Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
For more information and best practices, see [prompt engineering techniques](../
## Build with the Prompt tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ Prompt** to add the Prompt tool to your flow. :::image type="content" source="../../media/prompt-flow/prompt-tool.png" alt-text="Screenshot of the Prompt tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/prompt-tool.png":::
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
The prompt flow *Python* tool offers customized code snippets as self-contained
## Build with the Python tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ Python** to add the Python tool to your flow. :::image type="content" source="../../media/prompt-flow/python-tool.png" alt-text="Screenshot of the Python tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/python-tool.png":::
ai-studio Serp Api Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
Sign up at [SERP API homepage](https://serpapi.com/)
Create a Serp connection: 1. Sign in to [Azure AI Studio](https://studio.azureml.net/).
-1. Go to **Settings** > **Connections**.
+1. Go to **AI project settings** > **Connections**.
1. Select **+ New connection**. 1. Add the following custom keys to the connection: - `azureml.flow.connection_type`: `Custom`
The connection is the model used to establish connections with Serp API. Get you
## Build with the Serp API tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Serp API** to add the Serp API tool to your flow. :::image type="content" source="../../media/prompt-flow/serp-api-tool.png" alt-text="Screenshot of the Serp API tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/serp-api-tool.png":::
ai-studio Vector Db Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md
The tool searches data from a third-party vector database. To use it, you should
## Build with the Vector DB Lookup tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Vector DB Lookup** to add the Vector DB Lookup tool to your flow. :::image type="content" source="../../media/prompt-flow/vector-db-lookup-tool.png" alt-text="Screenshot of the Vector DB Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/embedding-tool.png":::
ai-studio Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-index-lookup-tool.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
The prompt flow *Vector index lookup* tool is tailored for querying within vecto
## Build with the Vector index lookup tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Vector Index Lookup** to add the Vector index lookup tool to your flow. :::image type="content" source="../../media/prompt-flow/vector-index-lookup-tool.png" alt-text="Screenshot of the Vector Index Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/vector-index-lookup-tool.png":::
ai-studio Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
Prompt flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications.
-Prompt flow is available independently as an open-source project on [GitHub](https://github.com/microsoft/promptflow), with its own SDK and [VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Prompt flow is also available and recommended to use as a feature within both [Azure AI Studio](https://aka.ms/AzureAIStudio) and [Azure Machine Learning studio](https://aka.ms/AzureAIStudio). This set of documentation focuses on prompt flow in Azure AI Studio.
+Prompt flow is available independently as an open-source project on [GitHub](https://github.com/microsoft/promptflow), with its own SDK and [VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Prompt flow is also available and recommended to use as a feature within both [Azure AI Studio](https://ai.azure.com) and [Azure Machine Learning studio](https://ml.azure.com). This set of documentation focuses on prompt flow in Azure AI Studio.
Definitions: - *Prompt flow* is a feature that can be used to generate, customize, or run a flow.
ai-studio Simulator Interaction Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/simulator-interaction-data.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
aoai_config = AzureOpenAIModelConfiguration.from_connection(
"max_token": 300 ) ```
-`max_tokens` and `temperature` are optional, the default value for `max_tokens` is 300, the default value for `temperature` is 0.9
+
+The `max_tokens` and `temperature` parameters are optional. The default value for `max_tokens` is 300 and the default value for `temperature` is 0.9.
## Initialize simulator class
ai-studio Troubleshoot Deploy And Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 2/22/2024
For the general deployment error code reference, you can go to the [Azure Machin
**Question:** I got an "out of quota" error message. What should I do? **Answer:** For more information about managing quota, see:-- [Quota for deploying and inferencing a model](../how-to/deploy-models-openai.md#quota-for-deploying-and-inferencing-a-model)-- [Manage Azure OpenAI Service quota documentation](/azure/ai-services/openai/how-to/quota?tabs=rest)
+- [Quota for deploying and inferencing a model](../how-to/deploy-models-openai.md#quota-for-deploying-and-inferencing-a-model)
+- [Manage Azure OpenAI Service quota documentation](/azure/ai-services/openai/how-to/quota?tabs=rest)
- [Manage and increase quotas for resources with Azure AI Studio](quota.md) **Question:** After I deployed a prompt flow, I got an error message "Tool load failed in 'search_question_from_indexed_docs': (ToolLoadError) Failed to load package tool 'Vector Index Lookup': (HttpResponseError) (AuthorizationFailed)". How can I resolve this? **Answer:** You can follow this instruction to manually assign ML Data scientist role to your endpoint to resolve this issue. It might take several minutes for the new role to take effect.
-1. Go to your project and select **Settings** from the left menu.
+1. Go to your project and select **AI project settings** from the left menu.
2. Select the link to your resource group. 3. Once you're redirected to the resource group in Azure portal, Select **Access control (IAM)** on the left navigation menu. 4. Select **Add role assignment**.
You might have come across an ImageBuildFailure error: This happens when the env
Option 1: Find the build log for the Azure default blob storage. 1. Go to your project in [Azure AI Studio](https://ai.azure.com) and select the settings icon on the lower left corner.
-2. Select your Azure AI hub resource name under **Resource configurations** on the **Settings** page.
+2. Select your Azure AI hub resource name under **Resource configurations** on the **AI project settings** page.
3. On the Azure AI hub overview page, select your storage account name. This should be the name of storage account listed in the error message you received. You'll be taken to the storage account page in the [Azure portal](https://portal.azure.com). 4. On the storage account page, select **Containers** under **Data Storage** on the left menu. 5. Select the container name listed in the error message you received.
ai-studio Playground Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md
To use the Azure OpenAI for text completions in the playground, follow these ste
1. From the Azure AI Studio Home page, select **Build** > **Playground**. 1. Select your deployment from the **Deployments** dropdown. 1. Select **Completions** from the **Mode** dropdown menu.
-1. Select **Generate product name ideas** from the **Examples** dropdown menu. The system prompt is prepopulated with something resembling the following text:
+1. In the **Prompt** text box, enter the following text:
``` Generate product name ideas for a yet to be launched wearable health device that will allow users to monitor their health and wellness in real-time using AI and share their health metrics with their friends and family. The generated product name ideas should reflect the product's key features, have an international appeal, and evoke positive emotions.
To use the Azure OpenAI for text completions in the playground, follow these ste
:::image type="content" source="../media/quickstarts/playground-completions-generate-before.png" alt-text="Screenshot of the Azure AI Studio playground with the Generate product name ideas dropdown selection visible." lightbox="../media/quickstarts/playground-completions-generate-before.png":::
-1. Select `Generate`. Azure OpenAI generates product name ideas based on. You should get a result that resembles the following list:
+1. Select **Generate**. Azure OpenAI generates product name ideas based on the prompt. You should get a result that resembles the following list:
``` Product names:
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
The **FormatReply** node formats the output of the **DetermineReply** node.
In prompt flow, you should also see: - **Save**: You can save your prompt flow at any time by selecting **Save** from the top menu. Be sure to save your prompt flow periodically as you make changes in this tutorial. -- **Runtime**: The runtime that you created [earlier in this tutorial](#create-compute-and-runtime-that-are-needed-for-prompt-flow). You can start and stop runtimes and compute instances via **Settings** in the left menu. To work in prompt flow, make sure that your runtime is in the **Running** status.
+- **Runtime**: The runtime that you created [earlier in this tutorial](#create-compute-and-runtime-that-are-needed-for-prompt-flow). You can start and stop runtimes and compute instances via **AI project settings** in the left menu. To work in prompt flow, make sure that your runtime is in the **Running** status.
:::image type="content" source="../media/tutorials/copilot-deploy-flow/prompt-flow-overview.png" alt-text="Screenshot of the prompt flow editor and surrounding menus." lightbox="../media/tutorials/copilot-deploy-flow/prompt-flow-overview.png":::
ai-studio Deploy Copilot Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-sdk.md
In this tutorial, you use a prebuilt custom container via [Visual Studio Code (W
In the left pane of Visual Studio Code, you see the `code` folder for personal work such as cloning git repos. There's also a `shared` folder that has files that everyone that is connected to this project can see. For more information about the directory structure, see [Get started with Azure AI projects in VS Code](../how-to/develop-in-vscode.md#the-custom-container-folder-structure).
-You can still use the Azure AI Studio (that's still open in another browser tab) while working in VS Code Web. You can see the compute is running via **Build** > **Settings** > **Compute instances**. You can pause or stop the compute from here.
+You can still use the Azure AI Studio (that's still open in another browser tab) while working in VS Code Web. You can see the compute is running via **Build** > **AI project settings** > **Compute instances**. You can pause or stop the compute from here.
:::image type="content" source="../media/tutorials/copilot-sdk/compute-running.png" alt-text="Screenshot of the compute instance running in Azure AI Studio." lightbox="../media/tutorials/copilot-sdk/compute-running.png":::
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
- ignite-2023 Previously updated : 2/6/2024 Last updated : 2/22/2024
This article is for people who use screen readers such as Microsoft's Narrator,
Most Azure AI Studio pages are composed of the following structure: -- Banner (contains Azure AI Studio app title, settings and profile information)
+- Banner (contains Azure AI Studio app title, settings, and profile information)
- Primary navigation (contains Home, Explore, Build, and Manage) - Secondary navigation - Main page content
For efficient navigation, it might be helpful to navigate by landmarks to move b
In **Explore** you can explore the different capabilities of Azure AI before creating a project. You can find this page in the primary navigation landmark.
-Within **Explore**, you can [explore many capabilities](../how-to/models-foundation-azure-ai.md) found within the secondary navigation. These include [model catalog](../how-to/model-catalog.md), model leaderboard, and pages for Azure AI services such as Speech, Vision, and Content Safety.
-- [Model catalog](../how-to/model-catalog.md) contains three main areas: Announcements, Models and Filters. You can use Search and Filters to narrow down model selection
+Within **Explore**, you can [explore many capabilities](../how-to/models-foundation-azure-ai.md) found within the secondary navigation. These include [model catalog](../how-to/model-catalog.md), model benchmarks, and pages for Azure AI services such as Speech, Vision, and Content Safety.
+- [Model catalog](../how-to/model-catalog.md) contains three main areas: Announcements, Models, and Filters. You can use Search and Filters to narrow down model selection
- Azure AI service pages such as Speech consist of many cards containing links. These cards lead you to demo experiences where you can sample our AI capabilities and might link out to another webpage. ## Projects To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md): 1. In [Azure AI Studio](https://ai.azure.com), navigate to the **Build** tab in the primary navigation.
-1. Press the **Tab** key until you hear *New project* and select this button.
+1. Press the **Tab** key until you hear *new project* and select this button.
1. Enter the information requested in the **Create a new project** dialog. You then get taken to the project details page.
Once you edit the system message or examples, your changes don't save automatica
### Chat session pane
-The chat session pane is where you can chat to the model and test out your assistant
+The chat session pane is where you can chat to the model and test out your assistant.
- After you send a message, the model might take some time to respond, especially if the response is long. You hear a screen reader announcement "Message received from the chatbot" when the model finishes composing a response. -- Content in the chatbot follows this format: -
- ```
- [message from user] [user image]
- [chatbot image] [message from chatbot]
- ```
- ## Using prompt flow
-Prompt flow is a tool to create executable flows, linking LLMs, prompts and Python tools through a visualized graph. You can use this to prototype, experiment and iterate on your AI applications before deploying.
-
-With the Build tab selected, navigate to the secondary navigation landmark and press the down arrow until you hear *flows*.
+Prompt flow is a tool to create executable flows, linking LLMs, prompts, and Python tools through a visualized graph. You can use this to prototype, experiment, and iterate on your AI applications before deploying.
-The prompt flow UI in Azure AI Studio is composed of the following main sections: Command toolbar, Flow (includes list of the flow nodes), Files and the Graph view. The Flow, Files and Graph sections each have their own H2 headings that can be used for navigation.
+With the Build tab selected, navigate to the secondary navigation landmark and press the down arrow until you hear *prompt flow*.
+The prompt flow UI in Azure AI Studio is composed of the following main sections: Command toolbar, Flow (includes list of the flow nodes), Files and the Graph view. The Flow, Files, and Graph sections each have their own H2 headings that can be used for navigation.
### Flow - This is the main working area where you can edit your flow, for example adding a new node, editing the prompt, selecting input data -- You can also open your flow in VS Code Web by selecting the **Work in VS Code Web** button.
+- You can also open your flow in VS Code Web by selecting the **Open project in VS Code (Web)** button.
- Each node has its own H3 heading, which can be used for navigation. ### Files
The prompt flow UI in Azure AI Studio is composed of the following main sections
## Evaluations
-Evaluation is a tool to help you evaluate the performance of your generative AI application. You can use this to prototype, experiment and iterate on your applications before deploying.
+Evaluation is a tool to help you evaluate the performance of your generative AI application. You can use this to prototype, experiment, and iterate on your applications before deploying.
### Creating an evaluation To review evaluation metrics, you must first create an evaluation. 1. Navigate to the Build tab in the primary navigation.
-1. Navigate to the secondary navigation landmark and press the down arrow until you hear *evaluations*.
+1. Navigate to the secondary navigation landmark and press the down arrow until you hear *evaluation*.
1. Press the Tab key until you hear *new evaluation* and select this button. 1. Enter the information requested in the **Create a new evaluation** dialog. Once complete, your focus is returned to the evaluations list.
Once you create an evaluation, you can access it from the list of evaluations.
Evaluation runs are listed as links within the Evaluations grid. Selecting a link takes you to a dashboard view with information about your specific evaluation run.
-You might prefer to export the data from your evaluation run so that you can view it in an application of your choosing. To do this, select your evaluation run link, then navigate to the **Export results** button and select it.
+You might prefer to export the data from your evaluation run so that you can view it in an application of your choosing. To do this, select your evaluation run link, then navigate to the **Export result** button and select it.
-There's also a dashboard view provided to allow you to compare evaluation runs. From the main Evaluations list page, navigate to the **Switch to dashboard view** button. You can also export all this data using the **Export table** button.
+There's also a dashboard view provided to allow you to compare evaluation runs. From the main Evaluations list page, navigate to the **Switch to dashboard view** button.
## Technical support for customers with disabilities
analysis-services Analysis Services Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Analysis Services server using Terraform
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure API Management instance using Terraform
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
description: Learn how to migrate your App Service Environment to App Service En
Previously updated : 2/12/2024 Last updated : 2/22/2024 zone_pivot_groups: app-service-cli-portal
Under **Get new IP addresses**, confirm that you understand the implications and
When the previous step finishes, the IP addresses for your new App Service Environment v3 resource appear. Use the new IPs to update any resources and networking components so that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
-This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes in moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't move to the next step until you confirm that you made these updates.
+This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes in moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't move to the next step until you've confirmed that you made these updates.
:::image type="content" source="./media/migration/ip-sample.png" alt-text="Screenshot that shows sample IPs generated during premigration.":::
After you complete all of the preceding steps, you can start the migration. Make
This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations, depending on the environment size. Scaling and modifications to your existing App Service Environment are blocked during this step. > [!NOTE]
-> In rare cases, you might see a notification in the portal that says "Migration to App Service Environment v3 failed" after you start the migration. There's a known bug that might trigger this notification even if the migration is progressing. Check the activity log for the App Service Environment to determine the validity of this error message.
+> In rare cases, you might see a notification in the portal that says "Migration to App Service Environment v3 failed" after you start the migration. There's a known bug that might trigger this notification even if the migration is progressing. Check the activity log for the App Service Environment to determine the validity of this error message. In most cases, refreshing the page resolves the issue, and the error message disappears. If the error message persists, contact support for assistance.
>
-> :::image type="content" source="./media/migration/migration-error.png" alt-text="Screenshot that shows the potential error notification after migration starts.":::
+> :::image type="content" source="./media/migration/migration-error-2.png" alt-text="Screenshot that shows the potential error notification after migration starts.":::
At this time, detailed migration statuses are available only when you're using the Azure CLI. For more information, see the [Azure CLI section for migrating to App Service Environment v3](#8-migrate-to-app-service-environment-v3-and-check-status).
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
You have two App Service Environments at this stage in the migration process. Yo
You can get the new IP addresses for your new App Service Environment v3 by running the following command. It's your responsibility to make any necessary updates.
+> [!IMPORTANT]
+> During the preview, the new inbound IP is returned incorrectly due to a known bug. Open a support ticket to receive the correct IP addresses for your App Service Environment v3.
+>
+ ```azurecli az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" ``` ## 10. Redirect customer traffic and complete migration
-This step is your opportunity to test and validate your new App Service Environment v3. Once you confirm your apps are working as expected, you can redirect customer traffic to your new environment by running the following command. This command also deletes your old environment.
+This step is your opportunity to test and validate your new App Service Environment v3. Your App Service Environment v2 frontends are still running, but the backing compute is an App Service Environment v3. If you're able to access your apps without issues, that means you're ready to complete the migration.
+
+Once you confirm your apps are working as expected, you can redirect customer traffic to your new App Service Environment v3 frontends by running the following command. This command also deletes your old environment.
```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=DnsChange&api-version=2022-03-01"
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the in-place migration fea
description: Overview of the in-place migration feature for migration to App Service Environment v3. Previously updated : 02/15/2024 Last updated : 02/22/2024
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. | |App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. | |Migrate is not available for this subscription.|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.|
-|Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. The InternalLoadBalancingMode must be manually changed by the Microsoft team. |Open a support case to engage support to resolve your issue. Request an update to the InternalLoadBalancingMode to allow migration. |
|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. | ## Overview of the migration process using the in-place migration feature
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
Title: Migrate to App Service Environment v3 by using the side by side migration
description: Overview of the side by side migration feature for migration to App Service Environment v3. Previously updated : 2/21/2024 Last updated : 2/22/2024
At this time, the side by side migration feature supports migrations to App Serv
### Azure Public - East Asia
+- North Europe
- West Central US
+- West US 2
The following App Service Environment configurations can be migrated using the side by side migration feature. The table gives the App Service Environment v3 configuration when using the side by side migration feature based on your existing App Service Environment.
If your App Service Environment doesn't pass the validation checks or you try to
|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. | |Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. | |App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. |
-|Your InternalLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. The InternalLoadBalancingMode must be manually changed by the Microsoft team. |Open a support case to engage support to resolve your issue. Request an update to the InternalLoadBalancingMode to allow migration. |
|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-side-by-side-migrate.md). |
The new default outbound to the internet public addresses are given so you can a
### Redirect customer traffic and complete migration
-The final step is to redirect traffic to your new App Service Environment v3 and complete the migration. The platform does this change for you, but only when you initiate it. Before you do this step, you should review your new App Service Environment v3 and perform any needed testing to validate that it's functioning as intended. You can do this review using the IPs associated with your App Service Environment v3 from the IP generation steps. Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3. Changes are effective immediately. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment.
+The final step is to redirect traffic to your new App Service Environment v3 and complete the migration. The platform does this change for you, but only when you initiate it. Before you do this step, you should review your new App Service Environment v3 and perform any needed testing to validate that it's functioning as intended. Your App Service Environment v2 frontends are still running, but the backing compute is an App Service Environment v3. If you're able to access your apps without issues, that means you're ready to complete the migration.
+
+Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3 and the frontends that were created during the migration. Changes are effective immediately. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment.
> [!IMPORTANT] > During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side by side migration feature is generally available. During preview, however, you can still use the side by side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads.
application-gateway Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Direct web traffic with Azure Application Gateway - Terraform
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Client applications can be designed to take advantage of TPM attestation by dele
### AMD SEV-SNP attestation
-Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-solutions.md). CVM offers VM OS disk encryption option with platform-managed keys or customer-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements is sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
+Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-options.md). CVM offers a VM OS disk encryption option with platform-managed keys or customer-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, an SNP report containing the guest VM firmware measurements is sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk, and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
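Although this flow is fully automatic, it can help to see what an attestation token actually carries. The attestation token is a standard JWT, so its claims can be decoded locally for inspection. The sketch below only decodes the token; it does not verify the signature, and the token value is a placeholder.

```python
# Illustrative only: decode an Azure Attestation JWT to inspect its claims.
# This skips signature verification; a real consumer must validate the token
# against the attestation provider's signing keys before trusting it.
import base64
import json

token = "<attestation-jwt>"  # placeholder

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url-encoded without padding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

header_b64, payload_b64, _signature = token.split(".")
print(json.dumps(json.loads(b64url_decode(header_b64)), indent=2))   # signing key metadata
print(json.dumps(json.loads(b64url_decode(payload_b64)), indent=2))  # attested claims
```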
### Trusted Launch attestation
attestation Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-terraform.md
Last updated 09/25/2023 content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Attestation provider by using Terraform
azure-arc Enable Guest Management At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md
Previously updated : 12/01/2023 Last updated : 02/23/2024 keywords: "VMM, Arc, Azure" #Customer intent: As an IT infrastructure admin, I want to install arc agents to use Azure management services for SCVMM VMs.
In this article, you learn how to install Arc agents at scale for SCVMM VMs and use Azure management capabilities.
+>[!IMPORTANT]
+>We recommend maintaining the SCVMM management server and the SCVMM console in the same Long-Term Servicing Channel (LTSC) and Update Rollup (UR) version.
+ >[!NOTE] >This article is applicable only if you are running: >- SCVMM 2022 UR1 or later
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 12/19/2023 Last updated : 02/23/2024 ms.
To Arc-enable a System Center VMM management server, deploy [Azure Arc resource
The following image shows the architecture for the Arc-enabled SCVMM: ## How is Arc-enabled SCVMM different from Arc-enabled Servers
Azure Arc-enabled SCVMM doesn't store/process customer data outside the region t
## Next steps
-[Create an Azure Arc VM](create-virtual-machine.md)
+[Create an Azure Arc VM](create-virtual-machine.md).
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 2/07/2024 Last updated : 2/23/2024 # Customer intent: As a VI admin, I want to connect my VMM management server to Azure Arc.
Before you can start using the Azure Arc-enabled SCVMM features, you need to connect your VMM management server to Azure Arc.
-This Quickstart shows you how to connect your SCVMM management server to Azure Arc using a helper script. The script deploys a lightweight Azure Arc appliance (called Azure Arc resource bridge) as a virtual machine running in your VMM environment and installs an SCVMM cluster extension on it, to provide a continuous connection between your VMM management server and Azure Arc.
+This Quickstart shows you how to connect your SCVMM management server to Azure Arc using a helper script. The script deploys a lightweight Azure Arc appliance (called Azure Arc resource bridge) as a virtual machine running in your VMM environment and installs an SCVMM cluster extension on it to provide a continuous connection between your VMM management server and Azure Arc.
## Prerequisites
Follow these instructions to run the script on a Windows machine.
Follow these instructions to run the script on a Linux machine:
-1. Open the terminal and navigate to the folder, where you've downloaded the Bash script.
+1. Open the terminal and navigate to the folder where you've downloaded the Bash script.
2. Execute the script using the following command: ```sh
The script execution will take up to half an hour and you'll be prompted for var
| **SCVMM management server FQDN/Address** | FQDN for the VMM server (or an IP address). </br> Provide role name if itΓÇÖs a Highly Available VMM deployment. </br> For example: nyc-scvmm.contoso.com or 10.160.0.1 | | **SCVMM Username**</br> (domain\username) | Username for the SCVMM administrator account. The required permissions for the account are listed in the prerequisites above.</br> Example: contoso\contosouser | | **SCVMM password** | Password for the SCVMM admin account |
-| **Private cloud selection** | Select the name of the private cloud where the Arc resource bridge VM should be deployed. |
+| **Deployment location selection** | Select whether you want to deploy the Arc resource bridge VM in an SCVMM Cloud or an SCVMM Host Group. |
+| **Private cloud/Host group selection** | Select the name of the private cloud or the host group where the Arc resource bridge VM should be deployed. |
| **Virtual Network selection** | Select the name of the virtual network to which *Arc resource bridge VM* needs to be connected. This network should allow the appliance to talk to the VMM management server and the Azure endpoints (or internet). | | **Static IP pool** | Select the VMM static IP pool that will be used to allot the IP address. | | **Control Plane IP** | Provide a reserved IP address in the same subnet as the static IP pool used for Resource Bridge deployment. This IP address should be outside of the range of static IP pool used for Resource Bridge deployment and shouldn't be assigned to any other machine on the network. |
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect VMware vCenter Server to Azure Arc by using the helper script
description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 11/06/2023 Last updated : 02/22/2024
First, the script deploys a virtual appliance called [Azure Arc resource bridge]
- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443). -- At least three free static IP addresses on the above network. If you have a DHCP server on the network, the IP addresses must be outside the DHCP range.
+- At least three free static IP addresses on the above network.
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
You need a Windows or Linux machine that can access both your vCenter Server ins
7. Select a subscription and resource group where the resource bridge will be created.
-8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, supported regions are **East US**, **West Europe**, **Australia East** and **Canada Central**.
+8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, the supported regions are **East US**, **West Europe**, **Australia East**, and **Canada Central**.
9. Provide a name for **Custom location**. You'll see this name when you deploy VMs. Name it for the datacenter or the physical location of your datacenter. For example: **contoso-nyc-dc**.
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **vCenter password** | Enter the password for the vSphere account. | | **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. | | **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you're using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>|
-| **Control Plane IP address** | Azure Arc resource bridge runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of DHCP range.|
+| **Static IP** | Arc Resource Bridge requires static IP address assignment and DHCP isn't supported. </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>|
+| **Control Plane IP address** | Azure Arc resource bridge runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). |
| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. | | **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. | | **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. |
-| **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. |
| **Appliance proxy settings** | Enter **y** if there's a proxy in your appliance network. Otherwise, enter **n**. </br> You need to populate the following boxes when you have a proxy set up: </br> 1. **Http**: Address of the HTTP proxy server. </br> 2. **Https**: Address of the HTTPS proxy server. </br> 3. **NoProxy**: Addresses to be excluded from the proxy. </br> 4. **CertificateFilePath**: For SSL-based proxies, the path to the certificate to be used. After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere.
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
For Internet of Things services availability in Azure Government, see [Products
### [Azure IoT Hub](../iot-hub/index.yml) -- IoT Hub supports encryption of data at rest with customer-managed keys, also known as *bring your own key* (BYOK). Azure IoT Hub provides encryption of data at rest and in transit. By default, Azure IoT Hub uses Microsoft-managed keys to encrypt the data. Customer-managed key support enables you to encrypt data at rest by using an [encryption key that you manage via Azure Key Vault](../iot-hub/iot-hub-customer-managed-keys.md).-
+- Azure IoT Hub provides encryption of data at rest and in transit. Azure IoT Hub uses Microsoft-managed keys to encrypt the data.
## Management and governance
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
After the response returns, copy the feature `id` for one of the `unit` features
> [!div class="nextstepaction"] > [How to create a feature stateset]
+[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
[datasets]: /rest/api/maps-creator/dataset [WFS API]: /rest/api/maps-creator/wfs [Web Feature Service (WFS)]: /rest/api/maps-creator/wfs
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Learn more about migrating from Bing Maps to Azure Maps.
[Load a map]: #load-a-map [Localization support in Azure Maps]: supported-languages.md [Localizing the map]: #localizing-the-map
+[Microsoft Entra ID]: /entra/fundamentals/whatis
[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin [OpenLayers]: https://openlayers.org/
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Learn the details of how to migrate your Bing Maps application with these articl
> [!div class="nextstepaction"] > [Migrate a web app]
-[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog [Azure Maps code samples]: https://samples.azuremaps.com/
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Learn the details of how to migrate your Google Maps application with these arti
> [!div class="nextstepaction"] > [Migrate a web app](migrate-from-google-maps-web-app.md)
-[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Blog]: https://aka.ms/AzureMapsBlog [Azure Maps developer forums]: https://aka.ms/AzureMapsForums
Learn the details of how to migrate your Google Maps application with these arti
[Azure support options]: https://azure.microsoft.com/support/options [free account]: https://azure.microsoft.com/free/ [Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Microsoft Entra authentication]: azure-maps-authentication.md#microsoft-entra-authentication
[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Find more open-source Azure Maps projects.
[Azure Maps Jupyter Notebook samples]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook [Azure Maps Leaflet plugin]: https://github.com/azure-samples/azure-maps-leaflet [Azure Maps OpenLayers plugin]: https://github.com/azure-samples/azure-maps-openlayers
+[Azure Maps Open Source Projects]: https://github.com/Microsoft/Maps/blob/master/AzureMaps.md
[Azure Maps Overview Map module]: https://github.com/Azure-Samples/azure-maps-overview-map [Azure Maps Scale Bar Control module]: https://github.com/Azure-Samples/azure-maps-scale-bar-control [Azure Maps Selection Control module]: https://github.com/Azure-Samples/azure-maps-selection-control
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
Learn more about Creator for indoor maps by reading:
[`StyleObject`]: #styleobject [Creator for indoor maps]: creator-indoor-maps.md [Feature State service]: /rest/api/maps-creator/feature-state
-[Implement dynamic styling for Creator  indoor maps]: indoor-map-dynamic-styling.md
+[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md
[RangeObject]: #rangeobject [What is Azure Maps Creator?]: about-creator.md
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
The following table displays the combined historical and forecast data for one o
grouped_weather_data.get_group(station_ids[0]).reset_index() ```
-<center>![Grouped data](./media/weather-service-tutorial/grouped-data.png)</center>
+![Grouped data](./media/weather-service-tutorial/grouped-data.png)
## Plot forecast data
windsPlot.set_ylabel("Wind direction")
The following graphs visualize the forecast data. For the change in wind speed, see the left graph. For the change in wind direction, see the right graph. This data is a prediction for the next 15 days from the day the data is requested.
-<center>
![Wind speed plot](./media/weather-service-tutorial/speed-date-plot.png) ![Wind direction plot](./media/weather-service-tutorial/direction-date-plot.png)
-</center>
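As a rough illustration of how two plots like these could be produced from the grouped dataframe shown earlier, here's a hedged pandas/matplotlib sketch. The column names (`date`, `windSpeed`, `windDirection`) are assumptions and may differ from the tutorial's actual dataframe; it also reuses `grouped_weather_data` and `station_ids` from the earlier steps.

```python
# Illustrative sketch only: plot forecast wind speed and direction side by side.
# Column names are assumptions; adjust them to match the tutorial's dataframe.
import matplotlib.pyplot as plt

df = grouped_weather_data.get_group(station_ids[0]).reset_index()

fig, (speed_ax, direction_ax) = plt.subplots(1, 2, figsize=(12, 4))

speed_ax.plot(df["date"], df["windSpeed"])
speed_ax.set_xlabel("Date")
speed_ax.set_ylabel("Wind speed")

direction_ax.plot(df["date"], df["windDirection"])
direction_ax.set_xlabel("Date")
direction_ax.set_ylabel("Wind direction")

plt.tight_layout()
plt.show()
```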
In this tutorial, you learned how to call Azure Maps REST APIs to get weather forecast data. You also learned how to visualize the data on graphs.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Global requests from clients can be processed by action group services in any re
1. Configure basic action group settings. In the **Project details** section: - Select values for **Subscription** and **Resource group**. - Select the region.
+
+ > [!NOTE]
+ > Service Health Alerts are only supported in public clouds within the global region. For action groups to function properly in response to a Service Health Alert, the region of the action group must be set to "Global".
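For reference, here's a hedged sketch of creating such an action group programmatically with the `azure-mgmt-monitor` Python SDK, with `location` set to `Global` as the note above requires. The subscription ID, resource group, names, and email address are placeholders.

```python
# Minimal sketch, assuming the azure-identity and azure-mgmt-monitor packages are installed.
# Names, email address, and subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.action_groups.create_or_update(
    "my-resource-group",
    "service-health-ag",
    {
        "location": "Global",          # required for Service Health alerts, per the note above
        "group_short_name": "svchealth",
        "enabled": True,
        "email_receivers": [
            {
                "name": "ops-team",
                "email_address": "ops@contoso.com",
                "use_common_alert_schema": True,
            }
        ],
    },
)
```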
| Option | Behavior | | | -- |
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
If you don't wish to have your classic resource automatically migrated to a work
### Is there any implication on the cost from migration?
-There's usually no difference, with a couple of exceptions:
+There's usually no difference, with one exception: Application Insights resources that were receiving 1 GB per month free via the legacy Application Insights pricing model will no longer receive the free data.
+The migration to workspace-based Application Insights offers a number of options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs).
### How will telemetry capping work?
To avoid this issue, make sure to use the latest version of the Terraform [azure
For backwards compatibility, calls to the old API for creating Application Insights resources will continue to work. Each of these calls will eventually create both a workspace-based Application Insights resource and a Log Analytics workspace to store the data.
-We strongly encourage updating to the [new API](https://learn.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource) for better control over resource creation.
+We strongly encourage updating to the [new API](resource-manager-app-resource.md) for better control over resource creation.
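As one hedged example of the workspace-based creation path, the `azure-mgmt-applicationinsights` Python SDK lets you supply the Log Analytics workspace at creation time. The resource names, location, and workspace ID below are placeholders, and a recent version of the SDK package is assumed.

```python
# Minimal sketch, assuming azure-identity and a recent azure-mgmt-applicationinsights package.
# All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.applicationinsights import ApplicationInsightsManagementClient

client = ApplicationInsightsManagementClient(DefaultAzureCredential(), "<subscription-id>")

component = client.components.create_or_update(
    "my-resource-group",
    "my-app-insights",
    {
        "location": "eastus",
        "kind": "web",
        "application_type": "web",
        # Linking a Log Analytics workspace is what makes the resource workspace-based.
        "workspace_resource_id": (
            "/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
        ),
    },
)
print(component.connection_string)
```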
### Should I migrate diagnostic settings on classic Application Insights before moving to a workspace-based AI? Yes, we recommend migrating diagnostic settings on classic Application Insights resources before transitioning to a workspace-based Application Insights. It ensures continuity and compatibility of your diagnostic settings.
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Last updated 02/21/2023
# Resize a capacity pool or a volume+ You can change the size of a capacity pool or a volume as necessary, for example, when a volume or capacity pool fills up. For information about monitoring a volumeΓÇÖs capacity, see [Monitor the capacity of a volume](monitor-volume-capacity.md).
For information about monitoring a volumeΓÇÖs capacity, see [Monitor the capacit
* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md) * Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation.
+>[!IMPORTANT]
+>If you are using a capacity pool with a size of 2 TiB or smaller and have `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs enabled and want to change the capacity pool's QoS type from auto to manual, you must [perform the operation with the REST API](#resizing-the-capacity-pool-or-a-volume-using-rest-api) using the `2023-07-01` API version or later.
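For illustration, here's a minimal sketch of making that REST call with an explicit API version from Python. The `qosType` property name and the PATCH payload shape are assumptions, so confirm them against the capacity pool REST API reference linked above; all IDs are placeholders.

```python
# Hedged sketch: PATCH the capacity pool with an explicit api-version of 2023-07-01.
# The property name and payload shape are assumptions; IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
pool_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.NetApp"
    "/netAppAccounts/<account>/capacityPools/<pool>"
)

resp = requests.patch(
    f"https://management.azure.com{pool_id}",
    params={"api-version": "2023-07-01"},   # API version called out in the note above
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"qosType": "Manual"}},
    timeout=60,
)
resp.raise_for_status()
print(resp.status_code)
```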
+ ## Resize the capacity pool using the Azure portal You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than the sum of the capacity of the volumes hosted in the pool.
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
See [regions supported for this feature](azure-netapp-files-network-topologies.m
> [!IMPORTANT] > Updating the network features option might cause a network disruption on the volumes for up to 5 minutes.
+>[!NOTE]
+>If you have enabled both the `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs and are using 1-TiB or 2-TiB capacity pools, see [Resize a capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) for information about sizing your capacity pools.
+ 1. Navigate to the volume for which you want to change the network features option. 1. Select **Change network features**. 1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option.
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Last updated 01/27/2024
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Manage Azure resource groups by using Python
azure-resource-manager Deploy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-python.md
Title: Deploy resources with Python and template description: Use Azure Resource Manager and Python to deploy resources to Azure. The resources are defined in an Azure Resource Manager template. Previously updated : 04/24/2023 Last updated : 02/23/2024 content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Deploy resources with ARM templates and Python
This article explains how to use Python with Azure Resource Manager templates (A
* A template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo.
-* Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
+* Python 3.8 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}` * azure-identity
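As a minimal sketch of how these prerequisites come together, the snippet below deploys the example storage-account template with `azure-identity` and `azure-mgmt-resource`. The subscription ID, resource group, deployment name, and parameter value are placeholders, and the resource group is assumed to already exist.

```python
# Minimal sketch using the packages listed above; names and the template path are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Load the example template downloaded from the quickstart templates repo.
with open("azuredeploy.json") as template_file:
    template = json.load(template_file)

deployment = client.deployments.begin_create_or_update(
    "exampleGroup",
    "exampleDeployment",
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {"storageAccountType": {"value": "Standard_LRS"}},
        }
    },
).result()

print(deployment.properties.provisioning_state)
```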
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
description: This article describes how to move Azure VMware Solution resources
Previously updated : 12/18/2023 Last updated : 2/23/2024 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution resources from Azure Region A to Azure Region B.
You can move Azure VMware Solution resources to a different region for several r
This article helps you plan and migrate Azure VMware Solution from one Azure region to another, such as Azure region A to Azure region B. - The diagram shows the recommended ExpressRoute connectivity between the two Azure VMware Solution environments. An HCX site pairing and service mesh are created between the two environments. The HCX migration traffic and Layer-2 extension moves (depicted by the red line) between the two environments. For VMware recommended HCX planning, see [Planning an HCX Migration](https://vmc.techzone.vmware.com/vmc-solutions/docs/deploy/planning-an-hcx-migration#section1). :::image type="content" source="media/move-across-regions/move-ea-csp-across-regions-2.png" alt-text="Diagram showing ExpressRoute Global Reach communication between the source and target Azure VMware Solution environments." border="false":::
The diagram shows the recommended ExpressRoute connectivity between the two Azur
>[!NOTE] >You don't need to migrate any workflow back to on-premises because the traffic will flow between the private clouds (source and target): >
->**Azure VMware Solution private cloud (source) > ExpressRoute gateway (source) > ExpressRoute gateway (target) > Azure VMware Solution private cloud (target)**
+>**Azure VMware Solution private cloud (source) > ExpressRoute gateway (source) > Global Reach > ExpressRoute gateway (target) > Azure VMware Solution private cloud (target)**
The diagram shows the connectivity between both Azure VMware Solution environments. :::image type="content" source="media/move-across-regions/move-ea-csp-across-regions-connectivity-diagram.png" alt-text="Diagram showing communication between the source and target Azure VMware Solution environments." border="false"::: - In this article, walk through the steps to: > [!div class="checklist"]
The following steps show how to prepare your Azure VMware Solution private cloud
Before you can move the source configuration, you need to [deploy the target environment](plan-private-cloud-deployment.md). - ### Back up the source configuration Back up the Azure VMware Solution (source) configuration that includes vCenter Server, NSX-T Data Center, and firewall policies and rules. -- **Compute:** Export existing inventory configuration. For Inventory backup, you can use RVtools (an open-source app).--- **Network and firewall policies and rules:** On the Azure VMware Solution target, create the same network segments as the source environment.
+- **Compute:** Export existing inventory configuration. For Inventory backup, you can use [RVTools (an open-source app)](https://www.robware.net/home).
+
+- **Network and firewall policies and rules:** This is included as part of the VMware HCX Network Extension.
Azure VMware Solution supports all backup solutions. You need CloudAdmin privileges to install, backup data, and restore backups. For more information, see [Backup solutions for Azure VMware Solution VMs](ecosystem-back-up-vms.md).
Azure VMware Solution supports all backup solutions. You need CloudAdmin privile
3. Copy the sourceΓÇÖs **ExpressRoute ID**. You need it to peer between the private clouds. - ### Create the targetΓÇÖs authorization key 1. From the target, sign in to the [Azure portal](https://portal.azure.com/).
Azure VMware Solution supports all backup solutions. You need CloudAdmin privile
> [!NOTE] > If you need access to the Azure US Gov portal, go to https://portal.azure.us/
-
- 1. Select **Manage** > **Connectivity** > **ExpressRoute**, then select **+ Request an authorization key**. :::image type="content" source="media/expressroute-global-reach/start-request-authorization-key.png" alt-text="Screenshot showing how to request an ExpressRoute authorization key." border="true" lightbox="media/expressroute-global-reach/start-request-authorization-key.png":::
After you establish connectivity, you'll create a VMware HCX site pairing betwee
1. In **Advanced Configuration - Network Extension Appliance Scale Out**, review and select **Continue**.
- You can have up to eight VLANs per appliance, but you can deploy another appliance to add another eight VLANs. You must also have IP space to account for the more appliances, and it's one IP per appliance. For more information, see [VMware HCX Configuration Limits](https://configmax.vmware.com/guest?vmwareproduct=VMware%20HCX&release=VMware%20HCX&categories=41-0,42-0,43-0,44-0,45-0).
+   You can have up to eight Network Segments per appliance, but you can deploy another appliance to add another eight Network Segments. You must also have IP space to account for the additional appliances; each appliance requires one IP address. For more information, see [VMware HCX Configuration Limits](https://configmax.vmware.com/guest?vmwareproduct=VMware%20HCX&release=VMware%20HCX&categories=41-0,42-0,43-0,44-0,45-0).
:::image type="content" source="media/tutorial-vmware-hcx/extend-networks-increase-vlan.png" alt-text="Screenshot that shows where to increase the VLAN count." lightbox="media/tutorial-vmware-hcx/extend-networks-increase-vlan.png":::
In this step, copy the source vSphere configuration and move it to the target en
2. From the source's vCenter Server, use the same VM folder name and [create the same VM folder](https://docs.vmware.com/en/VMware-Validated-Design/6.1/sddc-deployment-of-cloud-operations-and-automation-in-the-first-region/GUID-9D935BBC-1228-4F9D-A61D-B86C504E469C.html) on the target's vCenter Server under **Folders**.
-3. Use VMware HCX to migrate all VM templates from the source's vCenter Server to the target's vCenter.
+3. Use VMware HCX to migrate all VM templates from the source's vCenter Server to the target's vCenter Server.
1. From the source, convert the existing templates to VMs and then migrate them to the target.
In this step, copy the source vSphere configuration and move it to the target en
4. Select **Sync Now**. - ### Configure the target NSX-T Data Center environment
-In this step, use the source NSX-T Data Center configuration to configure the target NSX-T environment.
+In this step, use the source NSX-T Data Center configuration to configure the target NSX-T Data Center environment.
>[!NOTE]
->You'll have multiple features configured on the source NSX-T Data Center, so you must copy or read from the source NSX-T Data Center and recreate it in the target private cloud. Use L2 Extension to keep same IP address and Mac Address of the VM while migrating Source to target AVS Private Cloud to avoid downtime due to IP change and related configuration.
+>You'll have multiple features configured on the source NSX-T Data Center, so you must copy or read the configuration from the source NSX-T Data Center and recreate it in the target private cloud. Use L2 Extension to keep the same IP address and MAC address of the VM while migrating from the source to the target Azure VMware Solution private cloud, avoiding downtime due to IP changes and related reconfiguration.
1. [Configure NSX-T Data Center network components](tutorial-nsx-t-network-segment.md) required in the target environment under default Tier-1 gateway.
Before the gateway cutover, verify all migrated workload services and performanc
For VMware recommendations, see [Cutover of extended networks](https://vmc.techzone.vmware.com/vmc-solutions/docs/deploy/planning-an-hcx-migration#section9). - ### Public IP DNAT for migrated DMZ VMs To this point, you migrated the workloads to the target environment. These application workloads must be reachable from the public internet. The target environment provides two ways of hosting any application. Applications can be:
batch Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Batch account using Terraform
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
What you learned:
Learn how to optimize CDN performance in the following articles: > [!div class="nextstepaction"]
-> [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
+> [Tutorial: Optimize Azure CDN for the type of content delivery.](cdn-optimization-overview.md)
cdn Create Profile Endpoint Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure CDN profile and endpoint using Terraform
certification Concepts Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/concepts-legacy.md
- Title: Legacy devices on the Azure Certified Device catalog
-description: An explanation of legacy devices on the Azure Certified Device catalog
---- Previously updated : 04/07/2021---
-# Legacy devices on the Azure Certified Device catalog
-
-You may have noticed on the Azure Certified Device catalog that some devices don't have the blue outline or the "Azure Certified Device" label. These devices (dubbed "legacy devices") were previously certified under the legacy program.
-
-## Certified for Azure IoT program
-
-Before the launch of the Azure Certified Device program, hardware partners could previously certify their products under the Certified for Azure IoT program. The Azure Certified Device certification program refocuses its mission to deliver on customer promises rather than technical device capabilities.
-
-Devices that have been certified as an ΓÇÿIoT Hub certified deviceΓÇÖ appear on the Azure Certified Device catalog as a ΓÇÿlegacy device.ΓÇÖ This label indicates devices have previously qualified through the now-retired program, but haven't been certified through the updated Azure Certified Device program. These devices are clearly noted in the catalog by their lack of blue outline, and can be found through the "IoT Hub Certified devices (legacy)" filter.
-
-## Next steps
-
-Interested in recertifying a legacy device under the Azure Certified Device program? You can submit your device through our portal and leave a note to our review team to coordinate. Follow the link below to get started!
--- [Tutorial: Select your certification program](./tutorial-00-selecting-your-certification.md)
certification Concepts Marketing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/concepts-marketing.md
- Title: Marketing properties
-description: A description of the different marketing fields collected in the portal and how they will appear on the Azure Certified Device catalog
---- Previously updated : 06/22/2021--
-# Marketing properties
-
-In the process of [adding your device details](tutorial-02-adding-device-details.md), you will be required to supply marketing information that will be displayed on the [Azure Certified Device catalog](https://devicecatalog.azure.com). This information is collected within the Azure Certified Device portal during the certification submission process and will be used as filter parameters on the catalog. This article provides a mapping between the fields collected in the portal to how they appear on the catalog. After reading this article, partners should better understand what information to provide during the certification process to best represent their product on the catalog.
-
-![PDP overview](./media/concepts-marketing/pdp-overview.png)
-
-## Azure Certified Device catalog product tile
-
-Visitors to the catalog will first interact with your device as a catalog product tile on the search page. This will provide a basic overview of the device and certifications it has been awarded.
-
-![Product tile template](./media/concepts-marketing/product-tile.png)
-
-| Field | Description | Where to add in the portal |
-||-|-|
-| Device Name | Public name of your certified device | Basics tab of Device details|
-| Company name| Public name of your company | Not editable in the portal. Extracted from MPN account name |
-| Product photo | Image of your device with minimum resolution 200p x 200p | Marketing details |
-| Certification classification | Mandatory Azure Certified Device certification label and optional certification badges | Basics tab of Device details. Must pass appropriate testing in Connect & test section. |
-
-## Product description page information
-
-Once a customer has clicked on your device tile from the catalog search page, they will be navigated to the product description page of your device. This is where the bulk of the information provided during the certification process will be found.
-
-The top of the product description page highlights key characteristics, some of which were already used for the product tile.
-
-![PDP top bar](./media/concepts-marketing/pdp-top.png)
-
-| Field | Description | Where to add in the portal |
-||-|-|
-| Device class | Classification of the form factor and primary purpose of your device ([Learn more](./resources-glossary.md)) | Basics tab of Device details|
-| Device type | Classification of device based on implementation readiness ([Learn more](./resources-glossary.md)) | Basics tab of Device details |
-| Geo availability | Regions that your device is available for purchase | Marketing details |
-| Operating systems | Operating system(s) that your device supports | Product details tab of Device details |
-| Target industries | Top 3 industries that your device is optimized for | Marketing details |
-| Product description | Free text field for you to write your marketing description of your product. This can capture details not listed in the portal, or add additional context for the benefits of using your device. | Marketing details|
-
-The remainder of the page is focused on displaying the technical specifications of your device in table format that will help your customer better understand your product. For convenience, the information displayed at the top of the page is also listed here, along with some additional device information. The rest of the table is sectioned by the components specified in the portal.
-
-![PDP bottom page](./media/concepts-marketing/pdp-bottom.png)
-
-| Field | Description | Where to add in the portal |
-||-|-|
-| Environmental certifications | Official certifications received for performance in different environments | Hardware of Device details |
-| Operating conditions | Ingress Protection value or temperature ranges the device is qualified for | Software of device details |
-| Azure software set-up | Classification of the set-up process to connect the device to Azure ([Learn more](./how-to-software-levels.md)) | Software of Device details |
-| Component type | Classification of the form factor and primary purpose of your device ([Learn more](./resources-glossary.md)) | Hardware of Device details|
-| Component name| Name of the component you are describing | Product details of Device details |
-| Additional component information | Additional hardware specifications such as included sensors, connectivity, accelerators, etc. | Additional component information of Device details ([Learn more](./how-to-using-the-components-feature.md)) |
-| Device dependency text | Partner-provided text describing the different dependencies the product requires to connect to Azure ([Learn more](./how-to-indirectly-connected-devices.md)) | Customer-facing comments section of Dependencies tab of Device details |
-| Device dependency link | Link to a certified device that your current product requires | Dependencies tab of Device details |
-
-## Shop links
-Available both on the product tile and product description page is a Shop button. When clicked by the customer, a window opens that allows them to select a distributor (you are allowed to list up to 5 distributors). Once selected, the customer is redirected to the partner-provided URL.
-
-![Image of Shop pop-up experience](./media/concepts-marketing/shop.png)
-
-| Field | Description | Where to add in the portal |
-||-|-|
-| Distributor name | Name of the distributor who is selling your product | Marketing details|
-| Get Device| Link to external website for customer to purchase the device (or request a quote from the distributor). This may be the same as the Manufacturer's page if the distributor is the same as the device manufacturer. If a purchase page is not available, this will redirect to the distributor's page for customer to contact them directly. | Distributor product page URL in marketing details. If no purchase page is available, link will default to Distributor URL in Marketing detail. |
-
-## External links
-
-Also included within the Product Description page are links that navigate to partner-provided sites or files that help the customer better understand the product. They appear towards the top of the page, beneath the product description text. The links displayed will differ for different device types and certification programs.
-
-| Link | Description | Where to add in the portal |
-||-|-|
-| Get Started guide* | PDF file with user instructions to connect and use your device with Azure services | Add 'Get Started' guide section of the portal|
-| Manufacturer's page*|Link to manufacturer's page. This page may be the specific product page for your device, or to the company home page if a marketing page is not available. | Manufacturer's marketing page in Marketing details |
-| Device model | Public DTDL models for IoT Plug and Play solutions | Not editable in the portal. Device model must be uploaded to the ([public model repository](https://aka.ms/modelrepo) |
-| Device source code | URL to device source code for Dev Kit device types| Basics tab of Device details |
-
- **Required for all published devices*
-
-## Next steps
-Now that you have an understanding of how we use the information you provide during certification, you are now ready to certify your device! Begin your certification project, or jump back into the device details stage to add your own marketing information.
--- [Start your certification journey](./tutorial-00-selecting-your-certification.md)-- [Adding device details](./tutorial-02-adding-device-details.md)
certification Edge Secured Core Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/edge-secured-core-devices.md
+
+ Title: Edge Secured-core certified devices
+description: List of devices that have passed the Edge Secured-core certifications
+++ Last updated : 01/26/2024++++
+# Edge Secured-core certified devices
+This page contains a list of devices that have successfully passed the Edge Secured-core certification.
+
+|Manufacturer|Device Name|OS|Last Updated|
+|---|---|---|---|
+|Asus|[PE200U](https://www.asus.com/networking-iot-servers/aiot-industrial-solutions/embedded-computers-edge-ai-systems/pe200u/)|Windows 10 IoT Enterprise|2022-04-20|
+|Asus|[PN64-E1 vPro](https://www.asus.com/ca-en/displays-desktops/mini-pcs/pn-series/asus-expertcenter-pn64-e1/)|Windows 10 IoT Enterprise|2023-08-08|
+|AAEON|[SRG-TG01](https://newdata.aaeon.com.tw/DOWNLOAD/2014%20datasheet/Systems/SRG-TG01.pdf)|Windows 10 IoT Enterprise|2022-06-14|
+|Intel|[NUC13L3Hv7](https://www.asus.com/us/displays-desktops/nucs/nuc-kits/nuc-13-pro-kit/techspec/)|Windows 10 IoT Enterprise|2023-04-28|
+|Intel|[NUC13L3Hv5](https://www.asus.com/us/displays-desktops/nucs/nuc-kits/nuc-13-pro-kit/techspec/)|Windows 10 IoT Enterprise|2023-04-12|
+|Intel|[NUC13ANKv7](https://www.asus.com/us/displays-desktops/nucs/nuc-kits/nuc-13-pro-kit/techspec/)|Windows 10 IoT Enterprise|2023-01-27|
+|Intel|[NUC12WSKv5](https://www.asus.com/displays-desktops/nucs/nuc-mini-pcs/nuc-12-pro-mini-pc/techspec/)|Windows 10 IoT Enterprise|2023-03-16|
+|Intel|ELM12HBv5+CMB1AB|Windows 10 IoT Enterprise|2023-03-17|
+|Intel|[NUC12WSKV7](https://www.asus.com/displays-desktops/nucs/nuc-mini-pcs/nuc-12-pro-mini-pc/techspec/)|Windows 10 IoT Enterprise|2022-10-31|
+|Intel|BELM12HBv716W+CMB1AB|Windows 10 IoT Enterprise|2022-10-25|
+|Intel|NUC11TNHv5000|Windows 10 IoT Enterprise|2022-06-14|
+|Lenovo|[ThinkEdge SE30](https://www.lenovo.com/us/en/p/desktops/thinkedge/thinkedge-se30/len102c0004)|Windows 10 IoT Enterprise|2022-04-06|
certification Edge Secured Core Get Certified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/edge-secured-core-get-certified.md
+
+ Title: Get your device certified
+description: Instructions to achieve Edge Secured-core certifications
+++ Last updated : 01/26/2024++++
+# Get your device certified
+This page contains a series of steps to get a new device Edge Secured-core certified.
+
+## Prerequisites
+Create a [Microsoft Partner Center account.](https://partner.microsoft.com/dashboard/account/exp/enrollment/welcome?cloudInstance=Global&accountProgram=Reseller)
+
+## Certification steps
+1. Review [Edge Secured-core certification requirements](program-requirements-edge-secured-core.md).
+2. Submit a [form](https://forms.office.com/r/HSAtk0Ghru) to express interest in getting your device certified.
+3. Microsoft reaches out to you on next steps and provides instructions to validate that your device meets the program's requirements.
+4. Once your device validation is completed based on the instructions provided, share the results with Microsoft.
+5. Microsoft reviews and communicates the status of your submission.
+6. If the device is approved for Edge Secured-core certification, notification is sent and the device appears on the [Edge Secured-core device listing](edge-secured-core-devices.md) page.
+7. If the device didn't meet requirements for Edge Secured-core certification, notification is sent and you can submit new/additional validation data to Microsoft.
+
+[![Diagram showing flowchart for certification process.](./media/images/certification-flowchart.png)](./media/images/certification-flowchart-expanded.png#lightbox)
certification How To Edit Published Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-edit-published-device.md
- Title: How to edit your published Azure Certified Device
-description: A guide to edit you device information after you have certified and published your device through the Azure Certified Device program.
---- Previously updated : 07/13/2021---
-# Edit your published device
-
-After your device has been certified and published to the Azure Certified Device catalog, you may need to update your device details. This may be due to an update to your distributor list, changes to purchase page URLs, or updates to the hardware specifications (such as operating system version or a new component addition). You may also have to update your IoT Plug and Play device model from what you originally uploaded to the model repository.
--
-## Prerequisites
--- You should be signed in and have an **approved** project for your device on the [Azure Certified Device portal](https://certify.azure.com). If you don't have a certified device, you can view this [tutorial](tutorial-01-creating-your-project.md) to get started.--
-## Editing your published project information
-
-On the project summary, you should notice that your project is in read-only mode since it has already been reviewed and accepted. To make changes, you will have to request an edit to your project and have the update re-approved by the Azure Certification team.
-
-1. Click the `Request Metadata Edit` button on the top of the page
-
- ![Request metadata update](./media/images/request-metadata-edit.png)
-
-1. Acknowledge the notification on the page that you will be required to submit your product for review after editing.
- > [!NOTE]
- > By confirming this edit, you are **not** removing your device from the Azure Certified Device catalog if it has already been published. Your previous version of the product will remain on the catalog until you have republished your device.
- > You will also not have to repeat the Connect & test section of the portal.
-
-1. Once acknowledging this warning, you can edit your device details. Make sure to leave a note in the `Comments for Reviewer` section of `Device Details` of what has been changed.
-
- ![Note of metadata edit](./media/images/edit-notes.png)
-
-1. On the project summary page, click `Submit for review` to have your changes reapproved by the Azure Certification team.
-1. After your changes have been reviewed and approved, you can then republish your changes to the catalog through the portal (See our [tutorial](./tutorial-04-publishing-your-device.md)).
-
-## Editing your IoT Plug and Play device model
-
-Once you have submitted your device model to the public model repository, it cannot be removed. If you update your device model and would like to re-link your certified device to the new model, you **must re-certify** your device as a new project. If you do this, please leave a note in the 'Comments for Reviewer' section so the certification team can remove your old device entry.
-
-## Next steps
-
-You've now successfully edited your device on the Azure Certified Device catalog. You can check out your changes on the catalog, or certify another device!
-- [Azure Certified Device catalog](https://devicecatalog.azure.com/)-- [Get started with certifying a device](./tutorial-01-creating-your-project.md)
certification How To Indirectly Connected Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-indirectly-connected-devices.md
-
-# Mandatory fields.
Title: Certify bundled or indirectly connected devices-
-description: Learn how to submit a bundled or indirectly connected device for Azure Certified Device certification. See how to configure dependencies and components.
-- Previously updated : 06/07/2022----
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
--
-# Device bundles and indirectly connected devices
-
-Many devices interact with Azure indirectly. Some communicate through another device, such as a gateway. Others connect through software as a service (SaaS) or platform as a service (PaaS) offerings.
-
-The [submission portal](https://certify.azure.com/) and [device catalog](https://devicecatalog.azure.com) offer support for indirectly connected devices:
--- By listing dependencies in the portal, you can specify that your device needs another device or service to connect to Azure.-- By adding components, you can indicate that your device is part of a bundle.-
-This functionality gives indirectly connected devices access to the Azure Certified Device program.
-
-Depending on your product line and the services that you offer or use, your situation might require a combination of dependencies and bundling. The Azure Edge Certification Portal provides a way for you to list dependencies and additional components.
--
-## Sensors and indirect devices
-
-Many sensors require a device to connect to Azure. In addition, you might have multiple compatible devices that work with the sensor. **To accommodate these scenarios, certify the devices before you certify the sensor that passes information through them.**
-
-The following matrix provides some examples of submission combinations:
--
-To certify a sensor that requires a separate device:
-
-1. Go to the [Azure Certified Device portal](https://certify.azure.com) to certify the device and publish it to the Azure Certified Device catalog. If you have multiple, compatible pass-through devices, as in the earlier example, submit them separately for certification and catalog publication.
-
-1. With the sensor connected through the device, submit the sensor for certification. In the **Dependencies** tab of the **Device details** section, set the following values:
-
- - **Dependency type**: Select **Hardware gateway**.
- - **Dependency URL**: Enter the URL of the device in the device catalog.
- - **Used during testing**: Select **Yes**.
- - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the device catalog. For example, you might enter **Series 100 devices are required for sensors to connect to Azure**.
-
-1. If you'd like to add more devices as optional for this device:
-
- 1. Select **Add additional dependency**.
- 1. Enter **Dependency type** and **Dependency URL** values.
- 1. For **Used during testing**, select **No**.
- 1. For **Customer-facing comments**, enter a comment that informs your customers that other devices are available as alternatives to the device that was used during testing.
--
-## PaaS and SaaS offerings
-
-As part of your product portfolio, you might certify a device that requires services from your company or third-party companies. To add this type of dependency:
-
-1. Go to the [Azure Certified Device portal](https://certify.azure.com) and start the submission process for your device.
-
-1. In the **Dependencies** tab, enter the following values:
-
- - **Dependency type**: Select **Software service**.
- - **Service name**: Enter the name of your product.
- - **Dependency URL**: Enter the URL of a product page that describes the service.
- - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the Azure Certified Device catalog.
-
-1. If you have other software, services, or hardware dependencies that you'd like to add as optional for this device, select **Add additional dependency** and enter the required information.
--
-## Bundled products
-
-With bundled product listings, a device is successfully certified in the Azure Certified Device program with other components. The device and the components are then sold together under one product listing.
-
-The following matrix provides some examples of bundled products. You can submit a device that includes extra components such as a temperature sensor and a camera sensor, as in submission example 1. You can also submit a touch sensor that includes a pass-through device, as in submission example 2.
--
-Use the component feature to add multiple components to your listing. Format the product listing image to indicate that your product comes with other components. If your bundle requires additional services for certification, identify those services through service dependencies.
-
-For a more detailed description of how to use the component functionality in the Azure Certified Device portal, see [Add components on the portal](./how-to-using-the-components-feature.md).
-
-If a device is a pass-through device with a separate sensor in the same product, create one component to reflect the pass-through device, and another component to reflect the sensor. As the following screenshot shows, you can add components to your project in the **Product details** tab of the **Device details** section:
--
-Configure the pass-through device first. For **Component type**, select **Customer Ready Product**. Enter the other values, as relevant for your product. The following screenshot provides an example:
--
-For the sensor, add a second component. For **Component type**, select **Peripheral**. For **Attachment method**, select **Discrete**. The following screenshot provides an example:
--
-After you've created the sensor component, enter its information. Then go to the **Sensors** tab and enter detailed sensor information, as the following screenshot shows.
--
-Complete the rest of your project's details, and then submit your device for certification as usual.
certification How To Software Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-software-levels.md
- Title: Software levels of Azure Certified Devices
-description: A breakdown of the different software levels that an Azure Certified Device may be classified as.
---- Previously updated : 06/22/2021---
-# Software levels of Azure Certified Devices
-
-Software levels are a feature defined by the Azure Certified Device program to help device builders indicate the technical level of difficulty a customer can expect when connecting the device to Azure services. Appearing on the catalog as "Azure software set-up," these values are intended to help viewers better understand the product and its connection to Azure. The definition of each level is provided below.
-
-## Level 1
-
-User can immediately connect the device to Azure by simply adding provisioning details. The certified IoT device already contains pre-installed software that was used for certification upon purchase. This level is most similar to having an "out-of-the-box" set-up experience for IoT beginners who are not as comfortable with compiling source code.
-
-## Level 2
-
-User must flash/apply a manufacturer-provided software image to the device to connect to Azure. Extra tools or software experience may be required. The link to the software image is also provided in our catalog.
-
-## Level 3
-
-User must follow a manufacturer-provided guide to prepare and install Azure-specific software. No Azure-specific software image is provided, so some customization and compilation of provided source code is required.
-
-## Level 4
-
-User must develop, customize, and recompile their own device code to connect to Azure. No manufacturer-supported source code is available. This level is most well suited for developers looking to create custom deployments for their device.
-
-## Next steps
-
-These levels are intended to help you get started building IoT solutions with Azure. Ready to get started? Visit the [Azure Certified Device catalog](https://devicecatalog.azure.com) to start searching for devices!
-
-Are you a device builder who is looking to add this software level to your certified device? Check out the links below.
-- [Edit a previously published device](how-to-edit-published-device.md)
-- [Tutorial: Adding device details](tutorial-02-adding-device-details.md)
certification How To Test Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-device-update.md
- Title: How to test Device Update for IoT Hub
-description: A guide describing how to test Device Update for IoT Hub on a Linux host in preparation for Edge Secured-core certification.
---- Previously updated : 06/20/2022---
-# How to test Device Update for IoT Hub
-The [Device Update for IoT Hub](..\iot-hub-device-update\understand-device-update.md) test exercises your device's ability to receive an update from IoT Hub. The following steps will guide you through the process to test Device Update for IoT Hub when attempting device certification.
-
-## Prerequisites
-* Device must be capable of running Linux [IoT Edge supported container](..\iot-edge\support.md).
-* Your device must be capable of receiving an [.SWU update](https://swupdate.org/) and be able to return to a running and connected state after the update is applied.
-* The update package and manifest must be applicable to the device under test. (Example: If the device is running "Version 1.0", the update should be "Version 2.0".)
-* Upload your .SWU file to a blob storage location of your choice.
-* Create a SAS URL for accessing the uploaded .SWU file.
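
If you'd rather script the last two prerequisites, the following minimal sketch uses the Azure Storage Blobs SDK for Python to upload the .SWU file and produce a read-only SAS URL. The storage account, key, container, and blob names are placeholders for your own values, and the seven-day expiry is just an example.

```python
# Sketch: upload an .SWU file and create a read-only SAS URL for it.
# Assumes `pip install azure-storage-blob`; names below are placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

ACCOUNT_NAME = "<storage-account>"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "updates"
BLOB_NAME = "my-device-update-2.0.swu"

service = BlobServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    credential=ACCOUNT_KEY,
)
blob = service.get_blob_client(container=CONTAINER, blob=BLOB_NAME)

# Upload the update package.
with open(BLOB_NAME, "rb") as data:
    blob.upload_blob(data, overwrite=True)

# Create a SAS token that grants read access for 7 days.
sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    blob_name=BLOB_NAME,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
print(f"{blob.url}?{sas_token}")  # Paste this SAS URL into the portal.
```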
-
-## Test the device
-1. On the Connect + test page, select **"Yes"** for the **"Are you able to test Device Update for IoT Hub?"** question.
- > [!Note]
- > If you are not able to test Device Update and select No, you will still be able to run all other Secured-core tests, but your product will not be eligible for certification.
-
- :::image type="content" source="./media/how-to-adu/connect-test.png" alt-text="Dialog to confirm that you are able to test device for IoT Hub.":::
-
-2. Proceed with connecting your device to the test infrastructure.
-
-3. On the Select Requirement Validation step, select **"Upload"**.
- :::image type="content" source="./media/how-to-adu/connect-and-test.png" alt-text="Dialog that shows the selected tests that will be validated.":::
-
-4. Upload your .importmanifest.json file by selecting the **Choose File** button. Select your file and then select the **Upload** button.
- > [!Note]
- > The file extension must be .importmanifest.json.
-
- :::image type="content" source="./media/how-to-adu/upload-manifest.png" alt-text="Dialog to instruct the user to upload the .importmanifest.json file by selecting the choose File button.":::
-
-5. Copy and Paste the SAS URL to the location of your .SWU file in the provided input box, then select the **Validate** button.
- :::image type="content" source="./media/how-to-adu/input-sas-url.png" alt-text="Dialog that shows how the SAS url is applied.":::
-
-6. Once we've validated that our service can reach the provided URL, select **Import**.
- :::image type="content" source="./media/how-to-adu/finalize-import.png" alt-text="Dialog to inform the user that the SAS URL was reachable and that the user needs to click import.":::
-
- > [!Note]
- > If you receive an "Invalid SAS URL" message, generate a new SAS URL from your storage blob and try again.
-
-7. Select **Continue** to proceed.
-
-8. Congratulations! You're now ready to proceed with Edge Secured-core testing.
-
-9. Select the **Run tests** button to begin the testing process. Your device will be updated as the final step in our Edge Secured-core testing.
certification How To Using The Components Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-using-the-components-feature.md
- Title: How to use the components feature in the Azure Certified Device portal
-description: A guide on how to best use the components feature of the Device details section to accurately describe your device
---- Previously updated : 05/04/2021---
-# Add components on the portal
-
-While completing the [tutorial to add device details](tutorial-02-adding-device-details.md) to your certification project, you will be expected to describe the hardware specifications of your device. To do so, you can highlight multiple, separate hardware products (referred to as **components**) that make up your device. This enables you to better promote devices that come with additional hardware, and allows customers to find the right product by searching the catalog based on these features.
-
-## Prerequisites
-- You should be signed in and have a project for your device created on the [Azure Certified Device portal](https://certify.azure.com). For more information, view the [tutorial](tutorial-01-creating-your-project.md).
-
-## How to add components
-
-Every project submitted for certification will include one **Customer Ready Product** component (which in many cases will represent the holistic product itself). To better understand the distinction of a Customer Ready Product component type, view our [certification glossary](./resources-glossary.md). You can include any additional components at your discretion to accurately capture your device.
-
-1. Select `Add a component` on the Hardware tab.
-
- ![Add a component link](./media/images/add-component-new.png)
-
-1. Complete relevant form fields for the component.
-
- ![Component details section](./media/images/component-details-section.png)
-
-1. Save your information using the `Save Product Details` button at the bottom of the page:
-
- ![Save Product Details button](./media/images/save-product-details-button.png)
-
-1. Once you have saved your component, you can further tailor the hardware capabilities it supports. Select the `Edit` link by the component name.
-
- ![Edit Component button](./media/images/component-edit.png)
-
-1. Provide relevant hardware capability information where appropriate.
-
- ![Image of editable component sections](./media/images/component-selection-area.png)
-
- The editable component fields (shown above) include:
-
- - **General**: Hardware details such as processors and secure hardware
- - **Connectivity**: Connectivity options, protocols, and interfaces such as radio(s) and GPIO
- - **Accelerators**: Specify hardware acceleration such as GPU and VPU
- - **Sensors**: Specify available sensors such as GPS and vibration
- - **Additional Specs**: Additional information about the device such as physical dimensions and storage/battery information
-
-1. Select `Save Product Details` at the bottom of the Product details page.
-
-## Component use requirements and recommendations
-
-You may have questions regarding how many components to include, or what component type to use. Below are a few sample scenarios of devices that you may be certifying and how you can use the components feature.
-
-| Product Type | No. Components | Component 1 / Attachment Type | Components 2+ / Attachment Type |
-|-||-|--|
-| Finished Product | 1 | Customer Ready Product, Discrete | N/A |
-| Finished Product with **detachable peripheral(s)** | 2 or more | Customer Ready Product, Discrete | Peripheral / Discrete or Integrated |
-| Finished Product with **integrated component(s)** | 2 or more | Customer Ready Product, Discrete | Select appropriate type / Discrete or integrated |
-| Solution-Ready Dev Kit | 1 or more | Customer Ready Product or Development Board, Discrete or Integrated| Select appropriate type / Discrete or integrated |
-
-## Example component usage
-
-Below are examples of how an OEM called Contoso would use the components feature to certify their product, called Falcon.
-
-1. Falcon is a complete stand-alone device that does not integrate into a larger product.
- 1. No. of components: 1
- 1. Component device type: Customer Ready Product
- 1. Attachment type: Discrete
-
- ![Image of customer ready product](./media/images/customer-ready-product.png)
-
-1. Falcon is a device that includes an integrated peripheral camera module manufactured by INC Electronics that connects via USB to Falcon.
- 1. No. of components: 2
- 1. Component device type: Customer Ready Product, Peripheral
- 1. Attachment type: Discrete, Integrated
-
- > [!Note]
- > The peripheral component is considered integrated because it is not removable.
-
- ![Image of peripheral example component](./media/images/peripheral.png)
-
-1. Falcon is a device that includes an integrated System on Module from INC Electronics that uses a built-in processor Apollo52 from company Espressif and has an ARM64 architecture.
- 1. No. of components: 2
- 1. Component device type: Customer Ready Product, System on Module
- 1. Attachment type: Discrete, Integrated
-
- > [!Note]
- > The peripheral component is considered integrated because it is not removable. The SoM component would also include processor information.
-
- ![Image of system on module example component ](./media/images/system-on-module.png)
-
-## Additional tips
-
-We've provided below more clarifications regarding our component usage policy. If you have any questions about appropriate component usage, contact our team at [iotcert@microsoft.com](mailto:iotcert@microsoft.com), and we'll be more than happy to help!
-
-1. A project must contain **only** one Customer Ready Product component. If you are certifying a project with two independent devices, those devices should be certified separately.
-1. It is primarily up to you to use (or not use) components to promote your device's capabilities to potential customers.
-1. During our review of your device, the Azure Certification team will only require at least one Customer Ready Product component to be listed. However, we may request edits to the component information if the details are not clear or appear to be lacking (for example, component manufacturer is not supplied for a Customer Ready Product type).
-
-## Next steps
-
-Now that you know how to use the components feature, you're ready to complete your device details or edit your project for further clarity.
-- [Tutorial: Adding device details](tutorial-02-adding-device-details.md)
-- [Editing your published device](how-to-edit-published-device.md)
certification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/overview.md
Title: Overview of the Azure Certified Device program
-description: An overview of the Azure Certified Device program for our partners and customers. Use these resources to start the device certification process. Find out how to certify your device, from IoT device requirements to publishing your device.
--
+ Title: Overview of the Edge Secured-core program
+description: An overview of the Edge Secured-core program for our partners and customers. Use these resources to start the certification process. Find out how to certify your device, from IoT device requirements to the device being published.
++ Previously updated : 04/09/2021 Last updated : 02/07/2024
+# Edge Secured-core Program
+> _Note: As of February 2024, the Azure Certified Device program has been retired. This page has been updated as a new home for the Edge Secured-core program._
+## What is the Edge Secured-core program? ##
+Edge Secured-Core is Microsoft's recommended standard for highly secured embedded devices. Such devices must include hardware security features, must be shipped in a secured state, and must be able to connect to services that enable security monitoring and maintenance for the lifetime of the device.
-# What is the Azure Certified Device program?
-> [!Note]
-> The Azure Certified Device program has met its goals and will conclude on February 23, 2024. This means that the Azure Certified Device catalog, along with certifications for Azure Certified Device, Edge Managed, and IoT Plug and Play will no longer be available after this date. However, the Edge Secured-core program will remain active and will be relocated to a new home at [aka.ms/EdgeSecuredCoreHome](https://aka.ms/EdgeSecuredCoreHome).
+## Program purpose ##
+Edge Secured-core is a security certification for devices running a full operating system. Edge Secured-core currently supports Windows IoT and Azure Sphere OS. Linux support is coming in the future. Devices meeting these criteria enable the following promises:
-Thank you for your interest in the Azure Certified Device program! Azure Certified Device is a free program that enables you to differentiate, certify, and promote your IoT devices built to run on Azure. From intelligent cameras to connected sensors to edge infrastructure, this enhanced IoT device certification program helps device builders increase their product visibility and saves customers time in building solutions.
-
-## Our certification promise
-
-The Azure Certified Device program ensures customer solutions work great on Azure. It is a program that utilizes tools, services, and a catalog to share industry knowledge with our community of builders within the IoT ecosystem to help builders and customers alike.
-
-Across the device certification process, the three tenets of this program are:
-- **Giving customers confidence:** Customers can confidently purchase Azure certified devices that carry the Microsoft promise.
-- **Matchmaking customers with the right devices for them:** Device builders can set themselves apart with certification that highlights their unique capabilities, and customers can easily find IoT qualified devices that fit their needs.
-- **Promoting certified devices:** Device builders get increased visibility, contact with customers, and usage of Microsoft's Azure Certified Device brand.
+1. Hardware-based device identity
+2. Capable of enforcing system integrity
+3. Stays up to date and is remotely manageable
+4. Provides data at-rest protection
+5. Provides data in-transit protection
+6. Built-in security agent and hardening
## User roles
-The Azure Certified Device program serves two different audiences.
-
-1. **Device builders**: Do you build IoT devices? Easily differentiate your IoT device capabilities and gain access to a worldwide audience looking to reliably purchase devices built to run on Azure. Use the Azure Certified Device Catalog to increase product visibility and connect with customers by certifying your device and showing that it meets specific IoT device requirements.
-1. **Solution builders**: Wondering what are IoT qualified devices? Confidently find and purchase IoT devices built to run on Azure, knowing they meet specific IoT requirements. Easily search and select the right certified device for your IoT solution on the [Azure Certified Device catalog](https://devicecatalog.azure.com/).
-
-## Our certification programs and IoT device requirements
+The Edge Secured-core program serves two different audiences.
-There are four different certifications available now! Each certification is focused on delivering a different customer value. Depending on the type of device and your target audience, you can choose which certification(s) is most applicable for you to apply for. Select the titles of each program to learn more about the program and IoT requirements.
-
-| Certification program | Overview |
-|-|
-| [Azure Certified Device](program-requirements-azure-certified-device.md) | Azure Certified Device certification validates that a device can connect with Azure IoT Hub and securely provision through the Device Provisioning Service (DPS). This certification reflects a device's functionality and interoperability, which are a **required baseline** for all other certifications. |
-| [IoT Plug and Play](program-requirements-pnp.md) | IoT Plug and Play certification, an incremental certification beyond the baseline Azure Certified Device certification, validates Digital Twin Definition Language version 2 (DTDL) and interaction based on your device model. It enables a seamless device-to-cloud integration experience and enables hardware partners to build devices that can seamlessly integrate without the need to write custom code. |
-| [Edge Managed](program-requirements-edge-managed.md) | Edge Managed certification, an incremental certification beyond the baseline Azure Certified Device certification, focuses on device management standards for Azure connected devices. |
-| [Edge Secured Core](program-requirements-edge-secured-core.md) | Edge Secured-core certification, an incremental certification beyond the baseline Azure Certified Device certification, is for IoT devices running a full operating system such as Linux or Windows 10 IoT. It validates devices meet additional security requirements around device identity, secure boot, operating system hardening, device updates, data protection, and vulnerability disclosures. |
-
-## How to certify your device
-
-Certifying a device involves several major steps on the [Azure Certified Device portal](https://certify.azure.com):
-
-1. Select the right certification for your device based on the IoT device requirements.
-1. Create your project in the [Azure Certified Device portal](https://certify.azure.com).
-1. Add device details including hardware capability information to begin the device certification process.
-1. Validate device functionality
-1. Submit and complete the review process
-
-> [!Note]
-> The review process can take up to a week to complete, though it may sometimes take longer.
-
-Once you have certified your device, you can then optionally complete the following activities:
-
-1. Publishing to the Azure Certified Device Catalog (optional)
-1. Updating your project after it has been approved/published (optional)
+1. **Device builders**: Do you build Edge devices? Easily differentiate your Edge device capabilities by certifying your device, showing that it meets specific security requirements.
+1. **Solution builders**: Wondering which Edge devices meet specific security requirements? Confidently purchase Edge devices from device builders, knowing the devices meet those requirements. Check out the list of current device builders with certified [Edge Secured-core devices](edge-secured-core-devices.md).
## Next steps
-Ready to get started with your certification journey? View our resources below to start the device certification process!
+Ready to get started with your certification journey? View our resources to start the device certification process!
+
+- [Edge Secured-core program requirements](program-requirements-edge-secured-core.md)
+- [Start the certification process](edge-secured-core-get-certified.md)
-- [Starting the certification process](tutorial-00-selecting-your-certification.md)
-- If you have other questions or feedback, contact [the Azure Certified Device team](mailto:iotcert@microsoft.com).
certification Program Requirements Azure Certified Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-azure-certified-device.md
- Title: Azure Certified Device Certification Requirements
-description: Azure Certified Device Certification Requirements
--- Previously updated : 03/15/2021------
-# Azure Certified Device Certification Requirements
-> [!Note]
-> The Azure Certified Device program has met its goals and will conclude on February 23, 2024. This means that the Azure Certified Device catalog, along with certifications for Azure Certified Device, Edge Managed, and IoT Plug and Play will no longer be available after this date. However, the Edge Secured-core program will remain active and will be relocated to a new home at [aka.ms/EdgeSecuredCoreHome](https://aka.ms/EdgeSecuredCoreHome).
-
-This document outlines the device-specific capabilities that will be represented in the Azure Certified Device catalog. A capability is a singular device attribute that may be a software implementation or a combination of software and hardware implementations.
-
-## Program Purpose
-
-Microsoft is simplifying IoT, and Azure Certified Device certification is the baseline certification program that ensures any device type can be provisioned to Azure IoT Hub securely.
-
-The promises of Azure Certified Device certification are:
-
-1. The device supports sending telemetry that works with IoT Hub
-2. The device supports IoT Hub Device Provisioning Service (DPS) to be securely provisioned to Azure IoT Hub
-3. The device supports easy input of the target DPS ID scope without requiring the user to recompile embedded code
-4. Optionally, other elements such as cloud-to-device messages, direct methods, and device twin properties are validated
-
-## Requirements
-
-**[Required] Device to cloud: The purpose of this test is to make sure devices that send telemetry work with IoT Hub**
-
-| **Name** | AzureCertified.D2C |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com/) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance |
-| **Resources** | [Certification steps](./overview.md) (has all the additional resources) |
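
For reference, a device-side telemetry loop exercised by this kind of test can be as small as the following sketch. It assumes the Azure IoT device SDK for Python (`azure-iot-device`); the connection string is a placeholder, and the message count and interval mirror whatever you declare in the portal workflow.

```python
# Sketch: send a fixed number of telemetry messages to IoT Hub.
# Assumes `pip install azure-iot-device`; the connection string is a placeholder.
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<device-connection-string>"
MESSAGE_COUNT = 10          # Number of messages declared in the portal workflow.
SEND_INTERVAL_SECONDS = 5   # Frequency declared in the portal workflow.

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

for i in range(MESSAGE_COUNT):
    payload = json.dumps({"temperature": 20 + i, "messageId": i})
    client.send_message(Message(payload))
    time.sleep(SEND_INTERVAL_SECONDS)

client.shutdown()
```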
-
-**[Required] DPS: The purpose of this test is to check that the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
-
-| **Name** | AzureCertified.DPS |
-| -- | |
-| **Target Availability** | New |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device supports easy input of target DPS ID scope ownership. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
-| **Resources** | [Device provisioning service overview](../iot-dps/about-iot-dps.md) |
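
To illustrate the DPS flow this test exercises, here's a minimal sketch that provisions a device with symmetric key (SAS key) attestation using the Azure IoT device SDK for Python. The ID scope, registration ID, and key are placeholders; TPM and X.509 attestation follow the same pattern with different security settings.

```python
# Sketch: provision a device through DPS using symmetric key attestation.
# Assumes `pip install azure-iot-device`; all identifiers are placeholders.
from azure.iot.device import ProvisioningDeviceClient

PROVISIONING_HOST = "global.azure-devices-provisioning.net"
ID_SCOPE = "<dps-id-scope>"
REGISTRATION_ID = "<registration-id>"
SYMMETRIC_KEY = "<device-symmetric-key>"

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=PROVISIONING_HOST,
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=SYMMETRIC_KEY,
)

result = provisioning_client.register()
if result.status == "assigned":
    # The device is now assigned to an IoT hub and can connect with that identity.
    print(f"Assigned to hub: {result.registration_state.assigned_hub}")
    print(f"Device ID: {result.registration_state.device_id}")
```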
-
-**[If implemented] Cloud to device: The purpose of this test is to make sure messages can be sent from the cloud to the device**
-
-| **Name** | AzureCertified.C2D |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-|**Validation** | Device must be able to receive Cloud to Device messages from IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute these tests. Cloud to device (if implemented): **1.** Validates that the device can receive a message from IoT Hub **2.** AICS sends a random message and validates via message ACK from the device |
-| **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Send cloud to device messages from an IoT Hub](../iot-hub/iot-hub-devguide-messages-c2d.md) |
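
A minimal device-side handler for cloud-to-device messages might look like the following sketch (Azure IoT device SDK for Python, placeholder connection string).

```python
# Sketch: receive cloud-to-device messages from IoT Hub.
# Assumes `pip install azure-iot-device`; the connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient

CONNECTION_STRING = "<device-connection-string>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

def message_handler(message):
    # Print the body of each cloud-to-device message as it arrives.
    print(f"C2D message received: {message.data}")

client.on_message_received = message_handler
client.connect()

input("Listening for cloud-to-device messages. Press Enter to exit.\n")
client.shutdown()
```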
-
-**[If implemented] Direct methods: The purpose of this test is to make sure the device works with IoT Hub and supports direct methods**
-
-| **Name** | AzureCertified.DirectMethods |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-|**Validation** | Device must be able to receive and reply to command requests from IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Direct methods (if implemented) **1.** User has to specify the method payload of the direct method. **2.** AICS validates that the specified payload request is sent from the Hub and an ACK message is received by the device |
-| **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Understand direct methods from IoT Hub](../iot-hub/iot-hub-devguide-direct-methods.md) |
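
A device-side direct method handler can be sketched as follows (Azure IoT device SDK for Python, placeholder connection string); the 200 status in the response is what produces the ACK the test looks for.

```python
# Sketch: respond to a direct method invocation from IoT Hub.
# Assumes `pip install azure-iot-device`; the connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient, MethodResponse

CONNECTION_STRING = "<device-connection-string>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

def method_handler(request):
    # Echo a result back with a 200 status so the hub receives an ACK.
    print(f"Direct method '{request.name}' called with payload: {request.payload}")
    response = MethodResponse.create_from_method_request(
        request, status=200, payload={"result": "ok"}
    )
    client.send_method_response(response)

client.on_method_request_received = method_handler
client.connect()

input("Listening for direct method calls. Press Enter to exit.\n")
client.shutdown()
```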
-
-**[If implemented] Device twin property: The purpose of this test is to make sure the device works with IoT Hub and supports device twin properties**
-
-| **Name** | AzureCertified.DeviceTwin |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-|**Validation** | Device must support readable/writeable device twin properties with IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device twin property (if implemented) **1.** AICS validates the read/write-able property in the device twin JSON **2.** User has to specify the JSON payload to be changed **3.** AICS validates that the specified desired properties are sent from IoT Hub and the ACK message is received by the device |
-| **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
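
A device-side twin handler for this kind of test can be sketched as follows (Azure IoT device SDK for Python, placeholder connection string); it reports a property and acknowledges desired-property patches by echoing them back as reported properties.

```python
# Sketch: report a twin property and react to desired property changes.
# Assumes `pip install azure-iot-device`; the connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient

CONNECTION_STRING = "<device-connection-string>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Report a writable property so it shows up in the device twin JSON.
client.patch_twin_reported_properties({"telemetryInterval": 5})

def desired_handler(patch):
    # Apply the desired property change, then acknowledge it as reported.
    print(f"Desired properties patch received: {patch}")
    client.patch_twin_reported_properties(
        {k: v for k, v in patch.items() if not k.startswith("$")}
    )

client.on_twin_desired_properties_patch_received = desired_handler

input("Listening for desired property updates. Press Enter to exit.\n")
client.shutdown()
```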
-
-**[Required] Limit Recompile: The purpose of this policy is to ensure that devices, by default, don't require users to recompile code to deploy the device.**
-
-| **Name** | AzureCertified.Policy.LimitRecompile |
-| -- | |
-| **Target Availability** | Policy |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Policy |
-| **Validation** | To simplify device configuration for users, we require all devices can be configured to connect to Azure without the need to recompile and deploy device source code. This includes DPS information, such as Scope ID, which should be set as configuration settings and not compiled. However, if your device contains certain secure hardware or if there are extenuating circumstances in which the user will expect to compile and deploy code, contact the certification team to request an exception review. |
-| **Resources** | **a)** [Device provisioning service overview](../iot-dps/about-iot-dps.md) **b)** Sample config file for DPS ID Scope transfer |
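
The sample config file referenced above isn't reproduced here, but the intent of the policy can be illustrated with a short sketch: the DPS ID scope and credentials are read from a configuration file (or environment variables) at startup, so transferring a device to a new ID scope never requires recompiling firmware or application code. The file name and keys below are hypothetical.

```python
# Sketch: load DPS settings from configuration instead of compiled-in constants.
# The file name and keys are hypothetical; environment variables work equally well.
import json
import os

CONFIG_PATH = "/etc/contoso-device/dps-config.json"

def load_dps_settings():
    # Prefer the config file; fall back to environment variables.
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            cfg = json.load(f)
    else:
        cfg = {
            "idScope": os.environ["DPS_ID_SCOPE"],
            "registrationId": os.environ["DPS_REGISTRATION_ID"],
            "symmetricKey": os.environ["DPS_SYMMETRIC_KEY"],
        }
    return cfg["idScope"], cfg["registrationId"], cfg["symmetricKey"]

id_scope, registration_id, symmetric_key = load_dps_settings()
# Pass these values to a provisioning client such as the DPS sketch shown earlier.
```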
certification Program Requirements Edge Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-managed.md
- Title: Edge Managed Certification Requirements
-description: Edge Managed Certification Requirements
--- Previously updated : 03/15/2021------
-# Edge Managed Certification Requirements
-> [!Note]
-> The Azure Certified Device program has met its goals and will conclude on February 23, 2024. This means that the Azure Certified Device catalog, along with certifications for Azure Certified Device, Edge Managed, and IoT Plug and Play will no longer be available after this date. However, the Edge Secured-core program will remain active and will be relocated to a new home at [aka.ms/EdgeSecuredCoreHome](https://aka.ms/EdgeSecuredCoreHome).
-
-This document outlines the device-specific capabilities that will be represented in the Azure Certified Device catalog. A capability is a singular device attribute that describes the device.
-
-## Program Purpose
-
-Edge Managed certification is an incremental certification beyond the baseline Azure Certified Device certification. Edge Managed focuses on device management standards for Azure connected devices and validates the IoT Edge runtime compatibility for module deployment and management. (Previously, this program was identified as the IoT Edge certification program.)
-
-Edge Managed certification validates IoT Edge runtime compatibility for module deployment and management. This program provides confidence in the management of Azure connected IoT devices.
-
-## Requirements
-
-The Edge Managed certification requires that all requirements from the [Azure Certified Device baseline program](.\program-requirements-azure-certified-device.md) be met.
-
-**DPS: The purpose of test is to check the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
-
-| **Name** | AzureReady.DPS |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | AICS validates that the device code supports DPS. **1.** User has to select one of the attestation methods (X.509, TPM and SAS key). **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device. **3.** Then, user selects the 'Connect' button to connect to the AICS managed IoT Hub via DPS |
-| **Resources** | |
-| **Azure Recommended:** | N/A |
-
-## IoT Edge
-
-**Edge runtime exists: The purpose of this test is to make sure the IoT Edge runtime modules ($edgehub and $edgeagent) contained on the device are functioning correctly.**
-
-| **Name** | EdgeManaged.EdgeRT |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | IoT Edge device |
-| **OS** | [Tier1 and Tier2 OS](../iot-edge/support.md) |
-| **Validation Type** | Automated |
-| **Validation** | AICS validates the deploy-ability of the installed IoT Edge RT. **1.** User needs to specify specific OS (OS not on the list of Tier1/2 are not accepted) **2.** AICS generates its config.yaml and deploys canonical [simulated temp sensor edge module](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azure-iot.simulated-temperature-sensor?tab=Overview) **3.** AICS validates that docker compatible container subsystem (Moby) is installed on the device **4.** Test result is determined based on successful deployment of the simulated temp sensor edge module and functionality of docker compatible container subsystem |
-| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md) (has all the additional resources), **c)** [Requirements](./program-requirements-azure-certified-device.md) |
-| **Azure Recommended:** | N/A |
-
-### Capability Template:
-
-**IoT Edge easy setup: The purpose of this test is to make sure the IoT Edge device is easy to set up and to validate that the IoT Edge runtime is preinstalled during physical device validation**
-
-| **Name** | EdgeManaged.PhysicalDevice |
-| -- | |
-| **Target Availability** | Available now (currently on hold due to COVID-19) |
-| **Applies To** | IoT Edge device |
-| **OS** | [Tier1 and Tier2 OS](../iot-edge/support.md) |
-| **Validation Type** | Manual / Lab Verified |
-| **Validation** | OEM must ship the physical device to IoT administration (HCL). HCL performs manual validation on the physical device to check that: **1.** The IoT Edge runtime uses the Moby subsystem (allowed redistribution version), not Docker. **2.** The latest edge module can be deployed to validate the ability to deploy IoT Edge modules. |
-| **Resources** | **a)** [AICS blog](https://azure.microsoft.com/blog/expanding-azure-iot-certification-service-to-support-azure-iot-edge-device-certification/), **b)** [Certification steps](./overview.md) , **c)** [Requirements](./program-requirements-azure-certified-device.md) |
-| **Azure Recommended:** | N/A |
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
Title: Edge Secured-core Certification Requirements description: Edge Secured-core Certification program requirements--++ Previously updated : 06/21/2021 Last updated : 02/20/2024 zone_pivot_groups: app-service-platform-windows-linux-sphere-rtos
-# Azure Certified Device - Edge Secured-core #
-
-## Edge Secured-Core certification requirements ##
-
-### Program purpose ###
-Edge Secured-core is a security certification for devices running a full operating system. Edge Secured-core currently supports Windows IoT and Azure Sphere OS. Linux support is coming in the future. This program enables device partners to differentiate their devices by meeting an additional set of security criteria. Devices meeting this criteria enable these promises:
-
-1. Hardware-based device identity
-2. Capable of enforcing system integrity
-3. Stays up to date and is remotely manageable
-4. Provides data at-rest protection
-5. Provides data in-transit protection
-6. Built in security agent and hardening
-
+# Edge Secured-Core certification requirements
::: zone pivot="platform-windows" ## Windows IoT OS Support
-Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 1903 or greater
+Edge Secured-core requires a version of Windows IoT that has at least five years of support from Microsoft remaining in its support lifecycle at the time of certification, such as:
* [Windows 10 IoT Enterprise Lifecycle](/lifecycle/products/windows-10-iot-enterprise)
-> [!Note]
-> The Windows secured-core tests require you to download and run the following package (https://aka.ms/Scforwiniot) from an Administrator Command Prompt on the IoT device being validated.
+* [Windows 10 IoT Enterprise LTSC 2021 Lifecycle](/lifecycle/products/windows-10-iot-enterprise-ltsc-2021)
+* [Windows 11 IoT Enterprise Lifecycle](/lifecycle/products/windows-11-iot-enterprise)
## Windows IoT Hardware/Firmware Requirements > [!Note]
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
> * Trusted Platform Module (TPM) 2.0 > * <b>For Intel systems:</b> Intel Virtualization Technology for Directed I/O (VT-d), Intel Trusted Execution Technology (TXT), and SINIT ACM driver package must be included in the Windows system image (for DRTM) > * <b>For AMD systems:</b> AMD IOMMU and AMD-V virtualization, and SKINIT package must be integrated in the Windows system image (for DRTM)
-> * Kernel DMA Protection (also known as Memory Access Protection)
+> * Kernel Direct Memory Access Protection (also known as Memory Access Protection)
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Hardware.Identity| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the device identity is rooted in hardware and can be the primary authentication method with Azure IoT Hub Device Provisioning Service (DPS).|
-|Requirements dependency|TPM v2.0 device|
-|Validation Type|Manual/Tools|
-|Validation|Devices are enrolled to DPS using the TPM authentication mechanism during testing.|
-|Resources|Azure IoT Hub Device Provisioning Service: <ul><li>[Quickstart - Provision a simulated TPM device to Microsoft Azure IoT Hub](../iot-dps/quick-create-simulated-device-tpm.md) </li><li>[TPM Attestation Concepts](../iot-dps/concepts-tpm-attestation.md)</li></ul>|
+|Description|The device identity must be rooted in hardware.|
+|Purpose|Protects against cloning and masquerading of the device root identity, which is key in underpinning trust in upper software layers extended through a chain-of-trust. Provides an attestable, immutable, and cryptographically secure identity.|
+|Dependencies|Trusted Platform Module (TPM) v2.0 device|
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Hardware.MemoryProtection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that DMA isn't enabled on externally accessible ports.|
-|Requirements dependency|Only if DMA capable ports exist|
-|Validation Type|Manual/Tools|
-|Validation|If DMA capable external ports exist on the device, toolset to validate that the IOMMU, or SMMU is enabled and configured for those ports.|
-
+|Description|All Direct Memory Access (DMA) enabled externally accessible ports must sit behind an enabled and appropriately configured Input-output Memory Management Unit (IOMMU) or System Memory Management Unit (SMMU).|
+|Purpose|Protects against drive-by and other attacks that seek to use other DMA controllers to bypass CPU memory integrity protections.|
+|Dependencies|Enabled and appropriately configured input/output Memory Management Unit (IOMMU) or System Memory Management Unit (SMMU)|
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Firmware.Protection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure that device has adequate mitigations from Firmware security threats.|
-|Requirements dependency|DRTM + UEFI|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to confirm it's protected from firmware security threats through one of the following approaches: <ul><li>DRTM + UEFI Management Mode mitigations</li><li>DRTM + UEFI Management Mode hardening</li></ul> |
-|Resources| <ul><li>https://trustedcomputinggroup.org/</li><li>[Intel's DRTM based computing whitepaper](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/drtm-based-computing-whitepaper.pdf)</li><li>[AMD Security whitepaper](https://www.amd.com/system/files/documents/amd-security-white-paper.pdf)</li></ul> |
+|Description|The device boot sequence must support Dynamic Root of Trust for Measurement (DRTM) alongside UEFI Management Mode mitigations.|
+|Purpose|Protects against firmware weaknesses, untrusted code, and rootkits that seek to exploit early and privileged boot stages to bypass OS protections.|
+|Dependencies|DRTM + UEFI|
+|Resources| <ul><li>[Trusted Computing Group](https://trustedcomputinggroup.org/)</li><li>[Intel's DRTM based computing whitepaper](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/drtm-based-computing-whitepaper.pdf)</li><li>[AMD Security whitepaper](https://www.amd.com/system/files/documents/amd-security-white-paper.pdf)</li></ul>|
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Firmware.SecureBoot| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the boot integrity of the device.|
-|Requirements dependency|UEFI|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li></ul>|
-
+|Description|UEFI Secure Boot must be enabled.|
+|Purpose|Ensures that the firmware and OS kernel, executed as part of the boot sequence, have first been signed by a trusted authority and retain integrity.|
+|Dependencies|UEFI|
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Firmware.Attestation| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
-|Requirements dependency|Azure Attestation Service|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that platform boot logs and measurements of boot activity can be collected and remotely attested to the Microsoft Azure Attestation service.|
-|Resources| [Microsoft Azure Attestation](../attestation/index.yml) |
+|Description|The device identity, along with its platform boot logs and measurements, must be remotely attestable to the Microsoft Azure Attestation (MAA) service.|
+|Purpose|Enables services to establish the trustworthiness of the device. Allows for reliable security posture monitoring and other trust scenarios such as the release of access credentials.|
+|Dependencies|Microsoft Azure Attestation service|
+|Resources| [Microsoft Azure Attestation](../attestation/index.yml)|
-## Windows IoT configuration requirements
+## Windows IoT Configuration requirements
</br> |Name|SecuredCore.Encryption.Storage| |:|:| |Status|Required|
-|Description|The purpose of the requirement to validate that sensitive data can be encrypted on nonvolatile storage.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure Secure-boot and BitLocker is enabled and bound to PCR7.|
-
+|Description|Sensitive and private data must be encrypted at rest using BitLocker or similar, with encryption keys backed by hardware protection.|
+|Purpose|Protects against exfiltration of sensitive or private data by unauthorized actors or tampered software.|
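
One way to spot-check this configuration locally is to query BitLocker status for the OS volume, as in the following sketch. It simply shells out to the built-in `manage-bde` tool from an elevated prompt and is not part of the certification toolset.

```python
# Sketch: check whether BitLocker protection is on for the OS volume (Windows only).
# Runs the built-in manage-bde tool; requires an elevated command prompt.
import subprocess

result = subprocess.run(
    ["manage-bde", "-status", "C:"],
    capture_output=True,
    text=True,
    check=True,
)

print(result.stdout)
if "Protection On" in result.stdout:
    print("BitLocker protection is enabled for C:.")
else:
    print("BitLocker protection is NOT enabled for C:.")
```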
</br>
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Name|SecuredCore.Encryption.TLS| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.|
-|Requirements dependency|Windows 10 IoT Enterprise Version 1903 or greater. Note: other requirements might require greater versions for other services. |
-|Validation Type|Manual/Tools|
-Validation|Device to be validated through toolset to ensure the device supports a minimum TLS version of 1.2 and supports the following required TLS cipher suites.<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>|
-|Resources| [TLS support in IoT Hub](../iot-hub/iot-hub-tls-support.md) <br /> [TLS Cipher suites in Windows 10](/windows/win32/secauthn/tls-cipher-suites-in-windows-10-v1903) |
+|Description|The OS must support a minimum Transport Layer Security (TLS) version of 1.2 and have the following TLS cipher suites available and enabled:<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>|
+|Purpose|Ensures that applications are able to use end-to-end encryption protocols and ciphers without known weaknesses, that are supported by Azure Services.|
+|Dependencies|Windows 10 IoT Enterprise Version 1903 or greater. Note: other requirements might require greater versions for other services.|
+|Resources| [TLS cipher suites in Windows](/windows/win32/secauthn/cipher-suites-in-schannel)|
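
A quick way to confirm that a device's TLS stack meets the version floor, and to see which of the required cipher suites its OpenSSL bindings expose to Python, is a sketch like the one below. It checks only the local TLS configuration visible to Python (the IANA names from the table are mapped to their OpenSSL equivalents) and is not the certification toolset.

```python
# Sketch: check which of the required TLS 1.2 cipher suites the local
# TLS stack exposes to Python (OpenSSL names mapped from the IANA names above).
import ssl

REQUIRED = {
    "TLS_RSA_WITH_AES_128_GCM_SHA256": "AES128-GCM-SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA256": "AES128-SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256": "ECDHE-ECDSA-AES128-GCM-SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": "ECDHE-RSA-AES128-GCM-SHA256",
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256": "DHE-RSA-AES128-GCM-SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256": "ECDHE-ECDSA-AES128-SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256": "ECDHE-RSA-AES128-SHA256",
}

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # Enforce the TLS 1.2 floor.
available = {cipher["name"] for cipher in context.get_ciphers()}

for iana_name, openssl_name in REQUIRED.items():
    status = "available" if openssl_name in available else "MISSING"
    print(f"{iana_name}: {status}")
```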
</br>
Validation|Device to be validated through toolset to ensure the device supports
|Name|SecuredCore.Protection.CodeIntegrity| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate that code integrity is available on this device.|
-|Requirements dependency|HVCI is enabled on the device.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that HVCI is enabled on the device.|
-|Resources| [Hypervisor-protected Code Integrity enablement](/windows-hardware/design/device-experiences/oem-hvci-enablement) |
+|Description|The OS must have virtualization-based code integrity features enabled (VBS + HVCI).|
+|Purpose|Protects against modified/malicious code from within the kernel by ensuring that only code with verifiable integrity is able to run.|
+|Dependencies|VBS + HVCI is enabled on the device.|
+|Resources| [Hypervisor-protected Code Integrity enablement](/windows-hardware/design/device-experiences/oem-hvci-enablement)|
</br>
Validation|Device to be validated through toolset to ensure the device supports
|Name|SecuredCore.Protection.NetworkServices| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that services listening for input from the network aren't running with elevated privileges.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that third party services accepting network connections aren't running with elevated LocalSystem and LocalService privileges. <ol><li>Exceptions might apply</li></ol>|
-
+|Description|Services listening for input from the network must not run with elevated privileges. Exceptions may apply for security-related services.|
+|Purpose|Limits the exploitability of compromised networked services.|
Validation|Device to be validated through toolset to ensure the device supports
|Name|SecuredCore.Built-in.Security| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to make sure devices can report security information and events by sending data to Azure Defender for IoT. <br>Note: Download and deploy security agent from GitHub|
-|Target Availability|2022|
-|Validation Type|Manual/Tools|
-|Validation |Device must generate security logs and alerts. Device logs and alerts messages to Azure Security Center.<ol><li>Device must have the Azure Defender microagent running</li><li>Configuration_Certification_Check must report TRUE in the module twin</li><li>Validate alert messages from Azure Defender for IoT.</li></ol>|
-|Resources|[Azure Docs IoT Defender for IoT](../defender-for-iot/how-to-configure-agent-based-solution.md)|
+|Description|Devices must be able to send security logs and alerts to a cloud-native security monitoring solution, such as Microsoft Defender for Endpoint.|
+|Purpose|Enables fleet posture monitoring, diagnosis of security threats, and protects against latent and in-progress attacks.|
+|Resources| [Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-endpoints-script)|
</br>
Validation|Device to be validated through toolset to ensure the device supports
|Name|SecuredCore.Protection.Baselines| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that the system conforms to a baseline security configuration.|
-|Target Availability|2022|
-|Requirements dependency|Azure Defender for IoT|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that Defender IOT system configurations benchmarks have been run.|
-|Resources| https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines <br> https://www.cisecurity.org/cis-benchmarks/ |
+|Description|The system is able to successfully apply a baseline security configuration.|
+|Purpose|Ensures a secure-by-default configuration posture, reducing the risk of compromise through incorrectly configured security-sensitive settings.|
+|Resources|[Microsoft Security Baselines](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines)<br>[CIS Benchmarks List](https://www.cisecurity.org/cis-benchmarks)|
-## Windows IoT Policy Requirements
-
-Some requirements of this program are based on a business agreement between your company and Microsoft. The following requirements aren't validated through our test harness, but are required by your company in certifying the device.
+|Name|SecuredCore.Protection.Update Resiliency|
+|:|:|
+|Status|Required|
+|Description|The device must be restorable to the last known good state if an update causes issues.|
+|Purpose|Ensures that devices can be restored to a functional, secure, and updatable state.|
-
-</br>
++
+## Windows IoT Policy Requirements
|Name|SecuredCore.Policy.Protection.Debug| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that debug functionality on the device is disabled.|
-|Requirements dependency||
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that debug functionality requires authorization to enable.|
-
+|Description|Debug functionality on the device must be disabled or require authorization to enable.|
+|Purpose|Ensures that software and hardware protections cannot be bypassed through debugger intervention and back-channels.|
</br>
Some requirements of this program are based on a business agreement between your
|Name|SecuredCore.Policy.Manageability.Reset| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate the device against two use cases: a) Ability to perform a reset (remove user data, remove user configs), b) Restore device to last known good in the case of an update causing issues.|
-|Requirements dependency||
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through a combination of toolset and submitted documentation that the device supports this functionality. The device manufacturer can determine whether to implement these capabilities to support remote reset or only local reset.|
-
+|Description|It must be possible to reset the device (remove user data, remove user configs).|
+|Purpose|Protects against exfiltration of sensitive or private data during device ownership or lifecycle transitions.|
</br>
Some requirements of this program are based on a business agreement between your
|Name|SecuredCore.Policy.Updates.Duration| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that the device remains secure.|
-|Validation Type|Manual|
-|Validation|Commitment from submission that devices certified can be kept up to date for 60 months from date of submission. Specifications available to the purchaser and devices itself in some manner should indicate the duration for which their software will be updated.|
-
+|Description|Software updates must be provided for at least 60 months from date of submission.|
+|Purpose|Ensures a minimum period of continuous security.|
</br>
Some requirements of this program are based on a business agreement between your
|Name|SecuredCore.Policy.Vuln.Disclosure| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that there's a mechanism for collecting and distributing reports of vulnerabilities in the product.|
-|Validation Type|Manual|
-|Validation|Documentation on the process for submitting and receiving vulnerability reports for the certified devices will be reviewed.|
-
+|Description|A mechanism for collecting and distributing reports of vulnerabilities in the product must be available.|
+|Purpose|Provides a clear path for discovered vulnerabilities to be reported, assessed, and disclosed, enabling effective risk management and timely fixes.|
+|Resources|[MSRC Portal](https://msrc.microsoft.com/report/vulnerability/new)|
</br>
Some requirements of this program are based on a business agreement between your
|Name|SecuredCore.Policy.Vuln.Fixes| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that vulnerabilities that are high/critical (using CVSS 3.0) are addressed within 180 days of the fix being available.|
-|Validation Type|Manual|
-|Validation|Documentation on the process for submitting and receiving vulnerability reports for the certified devices will be reviewed.|
-
+|Description|Vulnerabilities that are high/critical (using Common Vulnerability Scoring System 3.0) must be addressed within 180 days of the fix being available.|
+|Purpose|Ensures that high-impact vulnerabilities are addressed in a timely manner, reducing likelihood and impact of a successful exploit.|
</br>
Some requirements of this program are based on a business agreement between your
## Linux OS Support >[!Note]
-> Linux is not yet supported. The below represent expected requirements. Please contact iotcert@microsoft.com if you are interested in certifying a Linux device, including device HW and OS specs, and whether or not it meets each of the draft requirements below.
+> Linux is not yet supported. The below represent expected requirements. Please fill out this [form](https://forms.office.com/r/HSAtk0Ghru) if you are interested in certifying a Linux device.
## Linux Hardware/Firmware Requirements
Some requirements of this program are based on a business agreement between your
|Name|SecuredCore.Hardware.Identity| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the device identify is rooted in hardware.|
-|Requirements dependency|TPM v2.0 </br><sup>or *other supported method</sup>|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that the device has a HWRoT present and that it can be provisioned through DPS using TPM or SE.|
-|Resources|[Setup auto provisioning with DPS](../iot-dps/quick-setup-auto-provision.md)|
+|Description|The device identity must be rooted in hardware.|
+|Purpose|Protects against cloning and masquerading of the device root identity, which is key to underpinning trust in upper software layers extended through a chain of trust. Provides an attestable, immutable, and cryptographically secure identity.|
+|Dependencies|Trusted Platform Module (TPM) v2.0 </br><sup>or *other supported method</sup>|
</br>
|Name|SecuredCore.Hardware.MemoryProtection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate ensure that memory integrity helps protect the device from vulnerable peripherals.|
-|Validation Type|Manual/Tools|
-|Validation|memory regions for peripherals must be gated with hardware/firmware such as memory region domain controllers or SMMU (System memory management Unit).|
-
+|Description|All DMA-enabled externally accessible ports must sit behind an enabled and appropriately configured Input-output Memory Management Unit (IOMMU) or System Memory Management Unit (SMMU).|
+|Purpose|Protects against drive-by and other attacks that seek to use other DMA controllers to bypass CPU memory integrity protections.|
+|Dependencies|Enabled and appropriately configured Input-output Memory Management Unit (IOMMU) or System Memory Management Unit (SMMU)|
</br>
|Name|SecuredCore.Firmware.Protection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure that device has adequate mitigations from Firmware security threats.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to confirm it's protected from firmware security threats through one of the following approaches: <ul><li>Approved FW that does SRTM + runtime firmware hardening</li><li>Firmware scanning and evaluation by approved Microsoft third party</li></ul> |
-|Resources| https://trustedcomputinggroup.org/ |
+|Description|The device boot sequence must support either: <ul><li>Approved firmware with SRTM support + runtime firmware hardening</li><li>Firmware scanning and evaluation by approved Microsoft third party</li></ul>|
+|Purpose|Protects against firmware weaknesses, untrusted code, and rootkits that seek to exploit early and privileged boot stages to bypass OS protections.|
+|Resources| [Trusted Computing Group](https://trustedcomputinggroup.org/) |
</br>
|Name|SecuredCore.Firmware.SecureBoot| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the boot integrity of the device.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li><li>Uboot: Verified boot is enabled</li></ul>|
-
+|Description|Either:<ul><li>UEFI: Secure boot must be enabled</li><li>Uboot: Verified boot must be enabled</li></ul>|
+|Purpose|Ensures that the firmware and OS kernel, executed as part of the boot sequence, have first been signed by a trusted authority and retain integrity.|
</br>
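For UEFI-based devices, one informal way to spot-check the SecuredCore.Firmware.SecureBoot requirement on a running image is to read the standard `SecureBoot` EFI variable. The following Python sketch is illustrative only, not part of the certification toolset; it assumes the kernel exposes efivars under `/sys/firmware/efi/efivars`, and U-Boot devices need an equivalent verified-boot check instead.

```python
from pathlib import Path

# EFI_GLOBAL_VARIABLE GUID; the variable payload is 4 bytes of attributes
# followed by a single byte that is 1 when Secure Boot is enabled.
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def uefi_secure_boot_enabled() -> bool:
    if not SECUREBOOT_VAR.exists():
        # Non-UEFI systems (for example, U-Boot based boards) don't expose efivars.
        return False
    data = SECUREBOOT_VAR.read_bytes()
    return len(data) >= 5 and data[-1] == 1

if __name__ == "__main__":
    print("UEFI Secure Boot enabled:", uefi_secure_boot_enabled())
```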
|Name|SecuredCore.Firmware.Attestation| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
-|Dependency|TPM 2.0 </br><sup>or *supported OP-TEE based application chained to a HWRoT (Secure Element or Secure Enclave)</sup>|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that platform boot logs and applicable runtime measurements can be collected and remotely attested to the Microsoft Azure Attestation service.|
-|Resources| [Microsoft Azure Attestation](../attestation/index.yml) </br> Certification portal test includes an attestation client that when combined with the TPM 2.0 can validate the Microsoft Azure Attestation service.|
+|Description|The device identity, along with its platform boot logs and measurements, must be remotely attestable to the Microsoft Azure Attestation (MAA) service.|
+|Purpose|Enables services to establish the trustworthiness of the device. Allows for reliable security posture monitoring and other trust scenarios such as the release of access credentials.|
+|Dependencies|Trusted Platform Module (TPM) 2.0 </br><sup>or *supported OP-TEE based application chained to a HWRoT (Secure Element or Secure Enclave)</sup>|
+|Resources| [Microsoft Azure Attestation](../attestation/index.yml)|
</br>
|Name|SecuredCore.Hardware.SecureEnclave| |:|:|
-|Status|Required|
-|Description|The purpose of the requirement to validate the existence of a secure enclave and that the enclave can be used for security functions.|
-|Validation Type|Manual/Tools|
-|Validation||
+|Status|Optional|
+|Description|The device must feature a secure enclave capable of performing security functions.|
+|Purpose|Ensures that sensitive cryptographic operations (those key to device identity and chain-of-trust) are isolated and protected from the primary OS and some forms of side-channel attack.|
## Linux Configuration Requirements
|Name|SecuredCore.Encryption.Storage| |:|:| |Status|Required|
-|Description|The purpose of the requirement to validate that sensitive data can be encrypted on nonvolatile storage.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure storage encryption is enabled and default algorithm is XTS-AES, with key length 128 bits or higher.|
-
+|Description|Sensitive and private data must be encrypted at rest using dm-crypt or similar, supporting XTS-AES as the default algorithm with a key length of 128 bits or higher, with encryption keys backed by hardware protection.|
+|Purpose|Protects against exfiltration of sensitive or private data by unauthorized actors or tampered software.|
</br>
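As an informal check of the SecuredCore.Encryption.Storage requirement on a dm-crypt/LUKS device, you can inspect the LUKS header and confirm an XTS-AES cipher with sufficient key material. The Python sketch below is an assumption-laden illustration, not the certification toolset: it assumes a hypothetical data-partition path, that `cryptsetup` is installed, and that the script runs as root. Note that `cryptsetup` reports the combined XTS key size, so 256 bits corresponds to XTS-AES-128.

```python
import re
import subprocess

DATA_PARTITION = "/dev/sda2"  # hypothetical data partition; adjust for your storage layout

def luks_encryption_ok(device: str) -> bool:
    """Heuristic: LUKS header reports an XTS-AES cipher with at least 256 bits of key material."""
    dump = subprocess.run(
        ["cryptsetup", "luksDump", device],   # requires cryptsetup and root privileges
        check=True, capture_output=True, text=True,
    ).stdout
    # LUKS2 prints "cipher: aes-xts-plain64"; LUKS1 prints "Cipher name: aes" / "Cipher mode: xts-...".
    uses_aes = re.search(r"(?im)^\s*cipher(?: name)?:\s*aes", dump) is not None
    uses_xts = "xts" in dump.lower()
    # Key size appears as "Key: 512 bits" (LUKS2 keyslots) or "MK bits: 512" (LUKS1).
    key_bits = [int(b) for b in re.findall(r"(?im)^\s*(?:MK bits|Key):\s*(\d+)", dump)]
    return bool(uses_aes and uses_xts and key_bits and max(key_bits) >= 256)

if __name__ == "__main__":
    print("storage encryption looks compliant:", luks_encryption_ok(DATA_PARTITION))
```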
|Name|SecuredCore.Encryption.TLS| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.|
-|Validation Type|Manual/Tools|
-Validation|Device to be validated through toolset to ensure the device supports a minimum TLS version of 1.2 and supports the following required TLS cipher suites.<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>|
-|Resources| [TLS support in IoT Hub](../iot-hub/iot-hub-tls-support.md) <br /> |
+|Description|The OS must support a minimum Transport Layer Security (TLS) version of 1.2 and have the following TLS cipher suites available and enabled:<ul><li>TLS_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_RSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256</li><li>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256</li><li>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</li></ul>|
+|Purpose|Ensures that applications can use end-to-end encryption protocols and ciphers that are supported by Azure services and free of known weaknesses.|
</br>
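To sanity-check the SecuredCore.Encryption.TLS requirement against the OS crypto stack, you can ask Python's `ssl` module (which wraps the system OpenSSL on most Linux distributions) which cipher suites are available and enforce a TLS 1.2 floor. This is a rough sketch rather than the official validation; the IANA-to-OpenSSL name mapping below is an assumption to confirm against your OpenSSL build.

```python
import ssl

# Required suites (IANA names) mapped to their assumed OpenSSL aliases.
REQUIRED_SUITES = {
    "TLS_RSA_WITH_AES_128_GCM_SHA256": "AES128-GCM-SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA256": "AES128-SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256": "ECDHE-ECDSA-AES128-GCM-SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": "ECDHE-RSA-AES128-GCM-SHA256",
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256": "DHE-RSA-AES128-GCM-SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256": "ECDHE-ECDSA-AES128-SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256": "ECDHE-RSA-AES128-SHA256",
}

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # enforce the TLS 1.2 floor required by the program
ctx.set_ciphers("ALL")                         # expose everything the OS crypto build offers
available = {cipher["name"] for cipher in ctx.get_ciphers()}

for iana_name, openssl_name in REQUIRED_SUITES.items():
    print(f"{iana_name}: {'present' if openssl_name in available else 'MISSING'}")
```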
|Name|SecuredCore.Protection.CodeIntegrity| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate that authorized code runs with least privilege.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that code integrity is enabled by validating dm-verity and IMA|
-
+|Description|The OS must have dm-verity and IMA code integrity features enabled, with code operating under least privilege.|
+|Purpose|Protects against modified/malicious code, ensuring that only code with verifiable integrity is able to run.|
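A quick runtime probe for the SecuredCore.Protection.CodeIntegrity building blocks is to confirm that the kernel registers a dm-verity device-mapper target and exposes IMA state through securityfs. The Python sketch below is an indicative check only, not the certification toolset; it assumes `dmsetup` is installed, securityfs is mounted at `/sys/kernel/security`, and the script runs as root, and it does not prove that an enforcing policy is actually applied.

```python
import subprocess
from pathlib import Path

def ima_present() -> bool:
    # securityfs exposes an ima/ directory when the kernel was built with IMA support.
    return Path("/sys/kernel/security/ima").is_dir()

def dm_verity_target_registered() -> bool:
    # `dmsetup targets` lists device-mapper targets known to the running kernel.
    result = subprocess.run(["dmsetup", "targets"], capture_output=True, text=True)
    return result.returncode == 0 and "verity" in result.stdout

if __name__ == "__main__":
    print("IMA support exposed:", ima_present())
    print("dm-verity target registered:", dm_verity_target_registered())
```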
</br>
|Name|SecuredCore.Protection.NetworkServices| |:|:|
-|Status|<sup>*</sup>Required|
-|Description|The purpose of the requirement is to validate that applications accepting input from the network aren't running with elevated privileges.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that services accepting network connections aren't running with SYSTEM or root privileges.|
--
+|Status|Required|
+|Description|Services listening for input from the network must not run with elevated privileges, such as SYSTEM or root. Exceptions may apply for security-related services.|
+|Purpose|Limits the exploitability of compromised networked services.|
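One way to audit the SecuredCore.Protection.NetworkServices requirement on a running image is to look for listening TCP sockets owned by UID 0. The sketch below reads `/proc/net/tcp` and `/proc/net/tcp6` directly and is a heuristic only; it flags root-owned listeners without knowing which ones are legitimately security related.

```python
from pathlib import Path

LISTEN_STATE = "0A"  # TCP state code for LISTEN in /proc/net/tcp and /proc/net/tcp6

def root_owned_listeners() -> list:
    """Return (table, hex local address) pairs for listening sockets owned by root (UID 0)."""
    findings = []
    for table in ("/proc/net/tcp", "/proc/net/tcp6"):
        path = Path(table)
        if not path.exists():
            continue
        for line in path.read_text().splitlines()[1:]:   # skip the header row
            fields = line.split()
            local_address, state, uid = fields[1], fields[3], fields[7]
            if state == LISTEN_STATE and uid == "0":
                findings.append((table, local_address))
    return findings

if __name__ == "__main__":
    for table, local_address in root_owned_listeners():
        print(f"root-owned listener in {table}: {local_address}")
```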
## Linux Software/Service Requirements
|Name|SecuredCore.Built-in.Security| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to make sure devices can report security information and events by sending data to Microsoft Defender for IoT.|
-|Validation Type|Manual/Tools|
-|Validation |<ol><li>Device must generate security logs and alerts.</li><li>Device logs and alerts messages to Azure Security Center.</li><li>Device must have the Azure Defender for IoT microagent running</li><li>Configuration_Certification_Check must report TRUE in the module twin</li><li>Validate alert messages from Azure Defender for IoT.</li></ol>|
-|Resources|[Azure Docs IoT Defender for IoT](../defender-for-iot/how-to-configure-agent-based-solution.md)|
+|Description|Devices must be able to send security logs and alerts to a cloud-native security monitoring solution, such as Microsoft Defender for Endpoint.|
+|Purpose|Enables fleet posture monitoring, diagnosis of security threats, and protects against latent and in-progress attacks.|
+|Resources| [Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-endpoints-script)|
</br>
|Name|SecuredCore.Manageability.Configuration| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that device supports auditing and setting of system configuration (and certain management actions such as reboot) through Azure.|
+|Description|The device must support auditing and setting of system configuration (and certain management actions such as reboot) through Azure. Note: Use of other system management toolchains (for example, Ansible) by operators isn't prohibited, but the device must include the azure-osconfig agent for Azure management.|
+|Purpose|Enables the application of security baselines as part of a secure-by-default configuration posture, reducing the risk of compromise through incorrectly configured security-sensitive settings.|
|Dependency|azure-osconfig|
-|Validation Type|Manual/Tools|
-|Validation|<ol><li>Device must report, via IoT Hub, its firewall state, firewall fingerprint, ip addresses, network adapter state, host name, hosts file, TPM (absence, or presence with version) and package manager sources (see What can I manage) </li><li>Device must accept the creation, via IoT Hub, of a default firewall policy (accept vs drop), and at least one firewall rule, with positive remote acknowledgment (see configurationStatus)</li><li>Device must accept the replacement of /etc/hosts file contents via IoT Hub, with positive remote acknowledgment (see https://learn.microsoft.com/en-us/azure/osconfig/howto-hosts?tabs=portal#the-object-model )</li><li>Device must accept and implement, via IoT Hub, remote reboot</li></ol> Note: Use of other system management toolchains (for example, Ansible, etc.) by operators are not prohibited, but the device must include the azure-osconfig agent such that it's ready to be managed from Azure.|
- </br>
|Name|SecuredCore.Update| |:|:| |Status|Audit|
-|Description|The purpose of the requirement is to validate the device can receive and update its firmware and software.|
-|Validation Type|Manual/Tools|
-|Validation|Partner confirmation that they were able to send an update to the device through Azure Device update and other approved services.|
-|Resources|[Device Update for IoT Hub](../iot-hub-device-update/index.yml)|
+|Description|The device must be able to receive and update its firmware and software through Azure Device Update or other approved services.|
+|Purpose|Enables continuous security and renewable trust.|
</br>
-|Name|SecuredCore.Protection.Baselines|
+|Name|SecuredCore.UpdateResiliency|
|:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the extent to which the device implements the Azure Security Baseline|
-|Dependency|azure-osconfig|
-|Validation Type|Manual/Tools|
-|Validation|OSConfig is present on the device and reporting to what extent it implements the Azure Security Baseline.|
-|Resources|<ul><li>https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines</li><li>https://www.cisecurity.org/cis-benchmarks/</li><li>https://learn.microsoft.com/en-us/azure/governance/policy/samples/guest-configuration-baseline-linux</li></ul>|
+|Description|The device must be restorable to the last known good state if an update causes issues.|
+|Purpose|Ensures that devices can be restored to a functional, secure, and updatable state.|
</br>
-|Name|SecuredCore.Protection.SignedUpdates|
+|Name|SecuredCore.Protection.Baselines|
|:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that updates must be signed.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that updates to the operating system, drivers, application software, libraries, packages and firmware won't be applied unless properly signed and validated.
+|Description|The system is able to successfully apply a baseline security configuration.|
+|Purpose|Ensures a secure-by-default configuration posture, reducing the risk of compromise through incorrectly configured security-sensitive settings.|
+|Resources|<ul><li>[Microsoft Security Baselines](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines)</li><li>[CIS Benchmarks List](https://www.cisecurity.org/cis-benchmarks/)</li><li>[Linux Security Baseline](../governance/policy/samples/guest-configuration-baseline-linux.md)</li></ul>|
+
+</br>
+|Name|SecuredCore.Protection.SignedUpdates|
+|:|:|
+|Status|Required|
+|Description|Updates to the operating system, drivers, application software, libraries, packages, and firmware must be signed.|
+|Purpose|Prevents unauthorized or malicious code from being installed during the update process.|
## Linux Policy Requirements
|Name|SecuredCore.Policy.Protection.Debug| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that debug functionality on the device is disabled.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that debug functionality requires authorization to enable.|
-
+|Description|Debug functionality on the device must be disabled or require authorization to enable.|
+|Purpose|Ensures that software and hardware protections cannot be bypassed through debugger intervention and back-channels.|
</br>
|Name|SecuredCore.Policy.Manageability.Reset| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate the device against two use cases: a) Ability to perform a reset (remove user data, remove user configs), b) Restore device to last known good if an update causing issues.|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through a combination of toolset and submitted documentation that the device supports this functionality. The device manufacturer can determine whether to implement these capabilities to support remote reset or only local reset.|
-
+|Description|It must be possible to reset the device (remove user data, remove user configs).|
+|Purpose|Protects against exfiltration of sensitive or private data during device ownership or lifecycle transitions.|
</br>
|Name|SecuredCore.Policy.Updates.Duration| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that the device remains secure.|
-|Validation Type|Manual|
-|Validation|Commitment from submission that devices certified will be required to keep devices up to date for 60 months from date of submission. Specifications available to the purchaser and devices itself in some manner should indicate the duration for which their software will be updated.|
-
+|Description|Software updates must be provided for at least 60 months from the date of submission.|
+|Purpose|Ensures a minimum period of continuous security.|
</br>
|Name|SecuredCore.Policy.Vuln.Disclosure| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that there's a mechanism for collecting and distributing reports of vulnerabilities in the product.|
-|Validation Type|Manual|
-|Validation|Documentation on the process for submitting and receiving vulnerability reports for the certified devices will be reviewed.|
-
+|Description|A mechanism for collecting and distributing reports of vulnerabilities in the product must be available.|
+|Purpose|Provides a clear path for discovered vulnerabilities to be reported, assessed, and disclosed, enabling effective risk management and timely fixes.|
</br>
|Name|SecuredCore.Policy.Vuln.Fixes| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that vulnerabilities that are high/critical (using CVSS 3.0) are addressed within 180 days of the fix being available.|
-|Validation Type|Manual|
-|Validation|Documentation on the process for submitting and receiving vulnerability reports for the certified devices will be reviewed.|
-
+|Description|Vulnerabilities that are high/critical (using Common Vulnerability Scoring System 3.0) must be addressed within 180 days of the fix being available.|
+|Purpose|Ensures that high-impact vulnerabilities are addressed in a timely manner, reducing likelihood and impact of a successful exploit.|
</br>
::: zone-end
::: zone pivot="platform-sphere"
-## Azure Sphere platform Support
-The Mediatek MT3620AN must be included in your design. Additional guidance for building secured Azure Sphere applications can be within the [Azure Sphere application notes](/azure-sphere/app-notes/app-notes-overview).
+## Azure Sphere Platform Support
+The Mediatek MT3620AN must be included in your design. More guidance for building secured Azure Sphere applications can be found within the [Azure Sphere application notes](/azure-sphere/app-notes/app-notes-overview).
## Azure Sphere Hardware/Firmware Requirements
|Name|SecuredCore.Hardware.Identity| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the device identity is rooted in hardware.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The device identity must be rooted in hardware.|
+|Purpose|Protects against cloning and masquerading of the device root identity, which is key to underpinning trust in upper software layers extended through a chain of trust. Provides an attestable, immutable, and cryptographically secure identity.|
+|Dependencies| Azure Sphere meets this requirement as MT3620 includes the integrated Pluton security processor.|
</br>
|Name|SecuredCore.Hardware.MemoryProtection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure that memory integrity helps protect the device from vulnerable peripherals.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|All DMA-enabled externally accessible ports must sit behind an enabled and appropriately configured Input-output Memory Management Unit (IOMMU) or System Memory Management Unit (SMMU).|
+|Purpose|Protects against drive-by and other attacks that seek to use other DMA controllers to bypass CPU memory integrity protections.|
+|Dependencies| Azure Sphere meets this requirement through a securely configurable peripheral firewall.|
</br>
|Name|SecuredCore.Firmware.Protection| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure that device has adequate mitigations from Firmware security threats.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|The device boot sequence must protect against firmware security threats.|
+|Purpose|Protects against firmware weaknesses, persistent untrusted code, and rootkits that seek to exploit early and privileged boot stages to bypass OS protections.|
+|Dependencies| Azure Sphere meets this requirement through a Microsoft-managed, hardened, and authenticated boot chain.|
</br>
|Name|SecuredCore.Firmware.SecureBoot| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the boot integrity of the device.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|The device boot sequence must be authenticated.|
+|Purpose|Ensures that the firmware and OS kernel, executed as part of the boot sequence, have first been signed by a trusted authority and retain integrity.|
+|Dependencies| Azure Sphere meets this requirement through a Microsoft-managed authenticated boot chain.|
</br>
|Name|SecuredCore.Firmware.Attestation| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to ensure the device can remotely attest to a Microsoft Azure Attestation service.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|The device identity, along with its platform boot logs and measurements, must be remotely attestable to a Microsoft Azure Attestation (MAA) service.|
+|Purpose|Enables services to establish the trustworthiness of the device. Allows for reliable security posture monitoring and other trust scenarios such as the release of access credentials.|
+|Dependencies| Azure Sphere meets this requirement through the Device Authentication and Attestation (DAA) service provided as part of the Azure Sphere Security Service (AS3).|
</br>
|Name|SecuredCore.Hardware.SecureEnclave| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate hardware security that is accessible from a secure operating system.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The device must feature a secure enclave capable of performing security functions.|
+|Purpose|Ensures that sensitive cryptographic operations (those key to device identity and chain-of-trust) are isolated and protected from the primary OS and some forms of side-channel attack.|
+|Dependencies| Azure Sphere meets this requirement as MT3620 includes the Pluton security processor.|
## Azure Sphere OS Configuration Requirements
|Name|SecuredCore.Encryption.Storage| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate that sensitive data can be encrypted on nonvolatile storage.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-|Resources|[Data at rest protection on Azure Sphere](/azure-sphere/app-notes/app-notes-overview)|
+|Description|Sensitive and private data must be encrypted at rest, with encryption keys backed by hardware protection.|
+|Purpose|Protects against exfiltration of sensitive or private data by unauthorized actors or tampered software.|
+|Dependencies| Azure Sphere enables this requirement to be met using the Pluton security processor, in-package non-volatile memory, and customer-exposed wolfCrypt APIs.|
</br>
|Name|SecuredCore.Encryption.TLS| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate support for required TLS versions and cipher suites.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-|Resources| [TLS support in IoT Hub](../iot-hub/iot-hub-tls-support.md) <br /> |
+|Description|The OS must support a minimum Transport Layer Security (TLS) version of 1.2 and have secure TLS cipher suites available.|
+|Purpose|Ensures that applications can use end-to-end encryption protocols and ciphers that are supported by Azure services and free of known weaknesses.|
+|Dependencies| Azure Sphere meets this requirement through a Microsoft-managed wolfSSL library using only secure TLS cipher suites, backed by Device Authentication and Attestation (DAA) certificates.|
</br>
|Name|SecuredCore.Protection.CodeIntegrity| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate that authorized code runs with least privilege.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The OS must feature code integrity support, with code operating under least privilege.|
+|Purpose|Protects against modified/malicious code, ensuring that only code with verifiable integrity is able to run.|
+|Dependencies| Azure Sphere meets this requirement through the Microsoft-managed and hardened OS with read-only filesystem stored on in-package non-volatile memory storage and executed in on-die RAM, with restricted/contained and least-privileged workloads.|
</br>
|Name|SecuredCore.Protection.NetworkServices| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that applications accepting input from the network aren't running with elevated privileges.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|Services listening for input from the network must not run with elevated privileges, such as SYSTEM or root. Exceptions may apply for security-related services.|
+|Purpose|Limits the exploitability of compromised networked services.|
+|Dependencies| Azure Sphere meets this requirement through restricted/contained and least-privileged workloads.|
</br>
|Name|SecuredCore.Protection.NetworkFirewall| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate that applications can't connect to endpoints that haven't been authorized.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|Applications must not be able to connect to endpoints that haven't been authorized.|
+|Purpose|Limits the exploitability of compromised or malicious applications for upstream network traffic and remote access/control.|
+|Dependencies| Azure Sphere meets this requirement through a securely configurable network firewall and Device Authentication and Attestation (DAA) certificates.|
## Azure Sphere Software/Service Requirements
|Name|SecuredCore.Built-in.Security| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to make sure devices can report security information and events by sending data to a Microsoft telemetry service.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|Devices must be able to send security logs and alerts to a cloud-native security monitoring solution.|
+|Purpose|Enables fleet posture monitoring, diagnosis of security threats, and protects against latent and in-progress attacks.|
+|Dependencies| Azure Sphere meets this requirement through integration of Azure Sphere Security Service (AS3) telemetry with Azure Monitor and the ability for applications to send security logs and alerts via Azure services.|
|Resources|[Collect and interpret error data - Azure Sphere](/azure-sphere/deployment/interpret-error-data?tabs=cliv2beta)</br>[Configure crash dumps - Azure Sphere](/azure-sphere/deployment/configure-crash-dumps)|
|Name|SecuredCore.Manageability.Configuration| |:|:| |Status|Required|
-|Description|The purpose of this requirement is to validate the device supports remote administration via service-based configuration control.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The device must support auditing and setting of system configuration (and certain management actions) through Azure.|
+|Purpose|Enables the application of security baselines as part of a secure-by-default configuration posture, reducing the risk of compromise through incorrectly configured security-sensitive settings.|
+|Dependencies| Azure Sphere meets this requirement through secure customer application configuration manifests, underpinned by a Microsoft-managed and hardened OS.|
</br>
|Name|SecuredCore.Update| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate the device can receive and update its firmware and software.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The device must be able to receive and update its firmware and software.|
+|Purpose|Enables continuous security and renewable trust.|
+|Dependencies| Azure Sphere meets this requirement through a Microsoft-managed and automatically updated OS, with customer application updates delivered remotely via the Azure Sphere Security Service (AS3).|
</br>
|Name|SecuredCore.Protection.Baselines| |:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that the system conforms to a baseline security configuration|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The system is able to successfully apply a baseline security configuration.|
+|Purpose|Ensures a secure-by-default configuration posture, reducing the risk of compromise through incorrectly configured security-sensitive settings.|
+|Dependencies| Azure Sphere meets this requirement through a Microsoft-managed and hardened OS.|
</br>
-|Name|SecuredCore.Protection.SignedUpdates|
+|Name|SecuredCore.Protection.UpdateResiliency|
|:|:| |Status|Required|
-|Description|The purpose of the requirement is to validate that updates must be signed.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
+|Description|The device must be restorable to the last known good state if an update causes issues.|
+|Purpose|Ensures that devices can be restored to a functional, secure, and updatable state.|
+|Dependencies| Azure Sphere meets this requirement through a built-in rollback mechanism for updates.|
+
+</br>
+|Name|SecuredCore.Protection.SignedUpdates|
+|:|:|
+|Status|Required|
+|Description|Updates to the operating system, drivers, application software, libraries, packages, and firmware must be signed.|
+|Purpose|Prevents unauthorized or malicious code from being installed during the update process.|
+|Dependencies| Azure Sphere meets this requirement.|
## Azure Sphere Policy Requirements
|Name|SecuredCore.Policy.Protection.Debug| |:|:| |Status|Required|
-|Description|The purpose of the policy requires that debug functionality on the device is disabled.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|Debug functionality on the device must be disabled or require authorization to enable.|
+|Purpose|Ensures that the software and hardware protections cannot be bypassed through debugger intervention and back-channels.|
+|Dependencies| Azure Sphere OS meets this requirement as debug functionality requires a signed capability that is only provided to the device OEM owner.|
</br>
|Name|SecuredCore.Policy.Manageability.Reset| |:|:| |Status|Required|
-|Description|The policy requires that the device can execute two use cases: a) Ability to perform a reset (remove user data, remove user configurations), b) Restore device to last known good in the case of an update causing issues.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|It must be possible to reset the device (remove user data, remove user configs).|
+|Purpose|Protects against exfiltration of sensitive or private data during device ownership or lifecycle transitions.|
+|Dependencies| The Azure Sphere OS enables OEM applications to implement reset functionality.|
</br>
|Name|SecuredCore.Policy.Updates.Duration| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that the device remains secure.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|Software updates must be provided for at least 60 months from the date of submission.|
+|Purpose|Ensures a minimum period of continuous security.|
+|Dependencies| The Azure Sphere OS meets this requirement as Microsoft provides OS security updates, and the AS3 service enables OEMs to provide application software updates. |
</br>
|Name|SecuredCore.Policy.Vuln.Disclosure| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that there's a mechanism for collecting and distributing reports of vulnerabilities in the product.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Azure Sphere vulnerabilities are collected by Microsoft through MSRC and are published to customers through the Tech Community Blog, Azure Sphere ΓÇ£WhatΓÇÖs NewΓÇ¥ page, and through MitreΓÇÖs CVE database.|
+|Description|A mechanism for collecting and distributing reports of vulnerabilities in the product must be available.|
+|Purpose|Provides a clear path for discovered vulnerabilities to be reported, assessed, and disclosed, enabling effective risk management and timely fixes.|
+|Dependencies| Azure Sphere OS vulnerabilities can be reported to Microsoft Security Response Center (MSRC) and are published to customers through the Azure Sphere "What's New" page, and through Mitre's CVE database.|
|Resources|<ul><li>[Report an issue and submission guidelines](https://www.microsoft.com/msrc/faqs-report-an-issue)</li><li>[What's new - Azure Sphere](/azure-sphere/product-overview/whats-new)</li><li>[Azure Sphere CVEs](/azure-sphere/deployment/azure-sphere-cves)</li></ul>|
|Name|SecuredCore.Policy.Vuln.Fixes| |:|:| |Status|Required|
-|Description|The purpose of this policy is to ensure that vulnerabilities that are high/critical (using CVSS 3.0) are addressed within 180 days of the fix being available.|
-|Validation Type|Prevalidated, no additional validation is required|
-|Validation|Provided by Microsoft|
-
+|Description|Vulnerabilities that are high/critical (using Common Vulnerability Scoring System 3.0) must be addressed within 180 days of the fix being available.|
+|Purpose|Ensures that high-impact vulnerabilities are addressed in a timely manner, reducing likelihood and impact of a successful exploit.|
+|Dependencies| Azure Sphere OS meets this requirement as Microsoft provides OS security updates meeting the above requirement. The AS3 service enables OEMs to provide application software updates meeting this requirement.|
</br>
::: zone-end
certification Resources Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/resources-glossary.md
- Title: Azure Certified Device program glossary
-description: A list of common terms used in the Azure Certified Device program
---- Previously updated : 03/03/2021---
-# Azure Certified Device program glossary
-
-This guide provides definitions of terms commonly used in the Azure Certified Device program and portal. Refer to this glossary for clarification during the certification process. For your convenience, this glossary is categorized based on major certification concepts that you may have questions about.
-
-## Device class
-
-When creating your certification project, you will be asked to specify a device class. Device class refers to the form factor or classification that best represents your device.
--- **Gateway**-
- A device that processes data sent over an IoT network.
--- **Sensor**-
- A device that detects and responds to changes to an environment and connects to gateways to process the changes.
--- **Other**-
- If you select Other, add a description of your device class in your own words. Over time, we may continue to add new values to this list, particularly as we continue to monitor feedback from our partners.
-
-## Device type
-
-You will also be asked to select one of two device types during the certification process.
--- **Finished Product**-
- A device that is solution-ready and ready for production deployment. Typically in a finished form factor with firmware and an operating system. These may be general-purpose devices that require additional customization or specialized devices that require no modifications for usage.
-- **Solution-Ready Dev Kit**-
- A development kit containing hardware and software ideal for easy prototyping, typically not in a finished form factor. Usually includes sample code and tutorials to enable quick prototyping.
-
-## Component type
-
-In the Device details section, you'll describe your device by listing components by component type. You can view more guidance on components [here](./how-to-using-the-components-feature.md).
--- **Customer Ready Product**-
- A component representation of the overall or primary device. This is different from a **Finished Product**, which is a classification of the device as being ready for customer use without further development. A Finished Product will contain a Customer Ready Product component.
-- **Development Board**-
- Either an integrated or detachable board with microprocessor for easy customization.
-- **Peripheral**-
- Either an integrated or detachable addition to the product (such as an accessory). These are typically devices that connect to the main device but don't contribute to its primary functions; instead, they provide additional functions. Memory, RAM, storage, hard disks, and CPUs aren't considered peripheral devices (they should instead be listed under Additional Specs of the Customer Ready Product component).
-- **System-On-Module** -
- A board-level circuit that integrates a system function in a single module.
-
-## Component attachment method
-
-Component attachment method is another component detail that informs the customer about how the component is integrated into the overall product.
--- **Integrated**
-
- Refers to when a device component is a part of the main chassis of the product. This most commonly refers to a peripheral component type that cannot be removed from the device.
- Example: An integrated temperature sensor inside a gateway chassis.
--- **Discrete**-
- Refers to when a component is **not** a part of the main chassis of the product.
- Example: An external temperature sensor that must be attached to the device.
--
-## Next steps
-
-This glossary will guide you through the process of certifying your project on the portal. You're now ready to begin your project!
-- [Tutorial: Creating your project](./tutorial-01-creating-your-project.md)
certification Tutorial 00 Selecting Your Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/tutorial-00-selecting-your-certification.md
- Title: Azure Certified Device program - Tutorial - Selecting your certification program
-description: Step-by-step guide to selecting the right certification programs for your device
---- Previously updated : 03/19/2021---
-# Tutorial: Select your certification program
-
-Congratulations on choosing the Azure Certified Device program! We're excited to have you join our ecosystem of certified devices. To begin, you must first determine which certification programs best suit your device capabilities.
-
-In this tutorial, you learn to:
-
-> [!div class="checklist"]
-> * Select the best certification program(s) for your device
-
-## Selecting a certification program for your device
-
-All devices are required to meet the baseline requirements outlined by the **Azure Certified Device** certification. To better promote your device and help set it apart, we offer optional certification programs ("IoT Plug and Play", "Edge Managed" and "Edge Secured-core *preview") that validate additional capabilities.
-
-1. Review each of the certification programs in the table below to help identify which program is best suited to promote your device.
-
- |Program Requirements|Processor|Architecture|OS|
- |---|---|---|---|
- |[Azure Certified Device](./program-requirements-azure-certified-device.md)|Any|Any|Any|
- |[IoT Plug and Play](./program-requirements-edge-secured-core.md)|Any|Any|Any|
- |[Edge Managed](./program-requirements-edge-managed.md)|MPU/CPU|ARM/x86/AMD64|[Tier 1 OS](../iot-edge/support.md?view=iotedge-2018-06&preserve-view=true)|
- |[*Edge Secured-core](./program-requirements-edge-secured-core.md)|MPU/CPU|ARM/AMD64|[Tier 1 OS](../iot-edge/support.md?view=iotedge-2018-06&preserve-view=true)|
-
-
-1. Review the specific requirements for the selected program and make sure your device is prepared to connect to Azure to validate the requirements.
-
-## Next steps
-
-You're now ready to begin certifying your device! Advance to the next article to begin your project.
-> [!div class="nextstepaction"]
->[Tutorial: Creating your project](tutorial-01-creating-your-project.md)
certification Tutorial 01 Creating Your Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/tutorial-01-creating-your-project.md
- Title: Azure Certified Device program - Tutorial - Creating your project
-description: Guide to create a project on the Azure Certified Device portal
---- Previously updated : 06/22/2021---
-# Tutorial: Create your project
-
-Congratulations on choosing to certify your device through the Azure Certified Device program! You've now selected the appropriate certification program for your device, and are ready to get started on the portal.
-
-In this tutorial, you will learn how to:
-
-> [!div class="checklist"]
-> * Sign into the [Azure Certified Device portal](https://certify.azure.com/)
-> * Create a new certification project for your device
-> * Specify basic device details of your project
-
-## Prerequisites
--- Valid work/school [Microsoft Entra account](../active-directory/fundamentals/active-directory-whatis.md).-- Verified Microsoft Partner Network (MPN) account. If you don't have an MPN account, [join the partner network](https://partner.microsoft.com/) before you begin. -
-> [!NOTE]
-> If you're having problems setting up or validating your MPN account, see the [Partner Center Support](/partner-center) documentation.
--
-## Signing into the Azure Certified Device portal
-
-To get started, you must sign in to the portal, where you'll be providing your device information, completing certification testing, and managing your device publications to the Azure Certified Device catalog.
-
-1. Go to the [Azure Certified Device portal](https://certify.azure.com).
-1. Select `Company profile` on the left-hand side and update your manufacturer information.
- ![Company profile section](./media/images/company-profile.png)
-1. Accept the program agreement to begin your project.
-
-## Creating your project on the portal
-
-Now that you're all set up in the portal, you can begin the certification process. First, you must create a project for your device.
-
-1. On the home screen, select `Create new project`. This will open a window to add basic device information in the next section.
-
- ![Image of the Create new project button](./media/images/create-new-project.png)
-
-## Identifying basic device information
-
-Then, you must supply basic device information. You can edit this information later.
-
-1. Complete the fields requested under the `Basics` section. Refer to the table below for clarification regarding the **required** fields:
-
- | Fields | Description |
- ||-|
- | Project name | Internal name that will not be visible on the Azure Certified Device catalog |
- | Device name | Public name for your device |
- | Device type | Specification of Finished Product or Solution-Ready Developer Kit. For more information about the terminology, see [Certification glossary](./resources-glossary.md). |
- | Device class | Gateway, Sensor, or other. For more information about the terminology, see [Certification glossary](./resources-glossary.md). |
- | Device source code URL | Required if you are certifying a Solution-Ready Dev Kit, optional otherwise. URL must be to a GitHub location for your device code. |
-
- > [!Note]
- > If you are marketing a Microsoft service (e.g. Azure Sphere), please ensure that your device name adheres to Microsoft [branding guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks).
-
-1. Select the `Next` button to continue to the `Certifications` tab.
-
- ![Image of the Create new project form, Certifications tab](./media/images/select-the-certification.png)
-
-1. Specify which certification(s) you wish to achieve for your device.
-1. Select `Create` and the new project will be saved and visible in the home page of the portal.
-
- ![Image of project table](./media/images/project-table.png)
-
-1. Select the Project name in the table. This will launch the project summary page where you can add and view other details about your device.
-
- ![Image of the project details page](./media/images/device-details-section.png)
-
-## Next steps
-
-You are now ready to add device details and test your device using our certification service. Advance to the next article to learn how to edit your device details.
-> [!div class="nextstepaction"]
-> [Tutorial: Adding device details](tutorial-02-adding-device-details.md)
certification Tutorial 02 Adding Device Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/tutorial-02-adding-device-details.md
- Title: Azure Certified Device program - Tutorial - Adding device details
-description: A step-by-step guide to add device details to your project on the Azure Certified Device portal
---- Previously updated : 05/04/2021---
-# Tutorial: Add device details
-
-Now you've created your project for your device, and you're all set to begin the certification process! First, let's add your device details. These will include technical specifications that your customers will be able to view on the Azure Certified Device catalog and the marketing details that they will use to purchase once they've made a decision.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Add device details using the Components and Dependencies features
-> * Upload a Get Started guide for your device
-> * Specify marketing details for customers to purchase your device
-> * Optionally identify any industry certifications
-
-## Prerequisites
-
-* You should be signed in and have a project for your device created on the [Azure Certified Device portal](https://certify.azure.com). For more information, view the [tutorial](tutorial-01-creating-your-project.md).
-* You should have a Get Started guide for your device in PDF format. We provide many Get Started templates for you to use, depending on both the certification program and your preferred language. The templates are available at our [Get started templates](https://aka.ms/GSTemplate "Get started templates") GitHub location.
-
-## Adding technical device details
-
-The first section of your project page, called 'Input device details', allows you to provide information on the core hardware capabilities of your device, such as device name, description, processor, operating system, connectivity options, hardware interfaces, industry protocols, physical dimensions, and more. While many of the fields are optional, most of this information will be made available to potential customers on the Azure Certified Device catalog if you choose to publish your device after it has been certified.
-
-1. Click `Add` in the 'Input device details' section on your project summary page to open the device details section. You will see six sections for you to complete.
-
-![Image of the project details page](./media/images/device-details-menu.png)
-
-2. Review the information you previously provided when you created the project under the `Basics` tab.
-1. Review the certifications you are applying for with your device under the `Certifications` tab.
-1. Open the `Hardware` tab and add **at least** one discrete component that describes your device. You can also view our guidance on [component usage](how-to-using-the-components-feature.md).
-1. Click `Save`. You will then be able to edit your component device and add more advanced details.
-1. Add any relevant information regarding operating conditions (such as IP rating, operating temperature, or safety certification).
-
-![Image of the hardware section](./media/images/hardware-section.png)
-
-7. List additional device details not captured by the component details under `Additional product details`.
-1. If you marked `Other` in any of the component fields or have a special circumstance you would like to flag with the Azure Certification team, leave a clarifying comment in the `Comments for reviewer` section.
-1. Open the `Software` tab and select **at least** one operating system.
-1. (**Required for Dev Kit devices** and highly recommended for all others) Select a level to indicate the expected set-up process to connect your device to Azure. If you select Level 2, you will be required to provide a link to the available software image.
-
-![Image of the software section](./media/images/software-section.png)
-
-11. Use the `Dependencies` tab to list any dependencies if your device requires additional hardware or services to send data to Azure. You can also view our additional guidance for [listing dependencies](how-to-indirectly-connected-devices.md).
-1. Once you are satisfied with the information you've provided, you can use the `Review` tab for a read-only overview of the full set of device details that have been entered.
-1. Click `Project summary` at the top of the page to return to your summary page.
-
-![Review project details page](./media/images/sample-device-details.png)
-
-## Uploading a Get Started guide
-
-The Get Started guide is a PDF document that simplifies the setup, configuration, and management of your product. Its purpose is to make it simple for customers to connect and support devices on Azure using your device. As part of the certification process, we require our partners to provide **one** Get Started guide for their most relevant certification program.
-
-1. Double-check that you have provided all requested information in your Get Started guide PDF according to the supplied [templates](https://aka.ms/GSTemplate). The template that you use should be determined by the certification badge you are applying for. (For example, an IoT Plug and Play device will use the IoT Plug and Play template. Devices applying for *only* the Azure Certified Device baseline certification will use the Azure Certified Device template.)
-1. Click `Add` in the 'Get Started' guide section of the project summary page.
-
-![Image of GSG button](./media/images/gsg-menu.png)
-
-2. Click 'Choose File' to upload your PDF.
-1. Review the document in the preview for formatting.
-1. Save your upload by clicking the 'Save' button.
-1. Click `Project summary` at the top of the page to return to your summary page.
-
-## Providing marketing details
-
-In this area, you will provide customer-ready marketing information for your device. These fields will be showcased on the Azure Certified Device catalog if you choose to publish your certified device.
-
-1. Click `Add` in the 'Add marketing details' section to open the marketing details page.
-
-![Image of marketing details section](./media/images/marketing-details.png)
-
-1. Upload a product photo in JPEG or PNG format that will be used in the catalog.
-1. Write a short description of your device that will be displayed on the product description page of the catalog.
-1. Indicate geographic availability of your device.
-1. Provide a link to the manufacturer's marketing page for this device. This should be a link to a site that provides additional information about the device.
- > [!Note]
- > Please ensure all supplied URLs are valid or will be active at the time of publication following approval.
-
-1. Indicate up to three target industries that your device is optimized for.
-1. Provide information for up to five distributors of your device. This may include the manufacturer's own site.
-
- > [!Note]
- > If no distributor product page URL is supplied, then the `Shop` button on the catalog will default to the link supplied for `Distributor page`, which may not be specific to the device. Ideally, the distributor URL should lead to a specific page where a customer can purchase a device, but this is not mandatory. If the distributor is the same as the manufacturer, this URL may be the same as the manufacturer's marketing page.
-
-1. Click `Save` to confirm your information.
-1. Click `Project summary` at the top of the page to return to your summary page.
-
-## Declaring additional industry certifications
-
-You can also promote additional industry certifications you may have received for your device. These certifications can help provide further clarity on the intended use of your device and will be searchable on the Azure Certified Device catalog.
-
-1. Click `Add` in the 'Provide industry certifications' section.
-1. Click `Add a certification` to select from a list of the common industry certification programs. If your product has achieved a certification not in our list, you can specify a custom string value by selecting `Other (please specify)`.
-1. Optionally provide a description or notes to the reviewer. However, these notes will not be publicly available to view on the catalog.
-1. Click `Save` to confirm your information.
-1. Click `Project summary` at the top of the page to return to your summary page.
-
-## Next steps
-
-Now you have completed the process of describing your device! This will help the Azure Certified Device review team and your customer better understand your product. Once you are satisfied with the information you've provided, you are now ready to move on to the testing phase of the certification process.
-> [!div class="nextstepaction"]
-> [Tutorial: Testing your device](tutorial-03-testing-your-device.md)
certification Tutorial 03 Testing Your Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/tutorial-03-testing-your-device.md
- Title: Azure Certified Device program - Tutorial - Testing your device
-description: A step-by-step guide to test your device with the AICS service on the Azure Certified Device portal
---- Previously updated : 03/02/2021---
-# Tutorial: Test and submit your device
-
-The next major phase of the certification process (though it can be completed before adding your device details) involves testing your device. Through the portal, you'll use the Azure IoT Certification Service (AICS) to demonstrate your device performance according to our certification requirements. Once you've successfully passed the testing phase, you'll then submit your device for final review and approval by the Azure Certification team!
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Connect your device to IoT Hub using Device Provisioning Service (DPS)
-> * Test your device according to your selected certification program(s)
-> * Submit your device for review by the Azure Certification team
-
-## Prerequisites
-- You should be signed in and have a project for your device created on the [Azure Certified Device portal](https://certify.azure.com). For more information, view the [tutorial](tutorial-01-creating-your-project.md).
-- (Optional) We advise that you prepare your device and manually verify its performance according to certification requirements. This is because if you wish to re-test with different device code or a different certification program, you'll have to create a new project.
-## Connecting your device using DPS
-
-All certified devices are required to demonstrate the ability to connect to IoT Hub using DPS. The following steps walk you through how to successfully connect your device for testing on the portal.
-
-1. To begin the testing phase, select the `Connect & test` link on the project summary page:
-
- ![Connect and test link](./media/images/connect-and-test-link.png)
-
-1. Depending on the certification(s) selected, you'll see the required tests on the 'Connect & test' page. Review these to ensure that you're applying for the correct certification program.
-
- ![Connect and test page](./media/images/connect-and-test.png)
-
-1. Connect your device to IoT Hub using the Device Provisioning Service (DPS). DPS supports the connectivity options of symmetric keys, X.509 certificates, and Trusted Platform Module (TPM). This is required for all certifications.
-
- - *For more information on connecting your device to Azure IoT Hub with DPS, visit [Provisioning devices overview](../iot-dps/about-iot-dps.md "Device Provisioning Service overview").*
-
-1. If using symmetric keys, you'll then be asked to configure the DPS with the supplied DPS ID scope, Device ID, authentication key, and DPS endpoint. Otherwise, you'll be asked to provide either an X.509 certificate or an endorsement key. (A minimal symmetric-key provisioning sketch follows these steps.)
-
-1. After configuring your device with DPS, confirm the connection by clicking the `Connect` button at the bottom of the page. Upon successful connection, you can proceed to the testing phase by clicking the `Next` button.
-
- ![Connect and Test connected](./media/images/connected.png)
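Before running the portal's connection test, it can help to sanity-check the symmetric-key values on your side. The following is a minimal, illustrative sketch (not part of the certification tooling) that registers a device through DPS with the Azure IoT Python SDK (`azure-iot-device`); the ID scope, device ID, and key are placeholders for the values the portal supplies.

```python
# pip install azure-iot-device
from azure.iot.device import IoTHubDeviceClient, Message, ProvisioningDeviceClient

# Placeholder values -- use the ID scope, device (registration) ID, and key supplied by the portal.
PROVISIONING_HOST = "global.azure-devices-provisioning.net"
ID_SCOPE = "<dps-id-scope>"
REGISTRATION_ID = "<device-id>"
SYMMETRIC_KEY = "<device-symmetric-key>"

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=PROVISIONING_HOST,
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=SYMMETRIC_KEY,
)

# Register the device through DPS; on success the result contains the assigned IoT hub.
registration_result = provisioning_client.register()
print("Registration status:", registration_result.status)

if registration_result.status == "assigned":
    # Connect to the assigned hub and send a test message to confirm connectivity.
    device_client = IoTHubDeviceClient.create_from_symmetric_key(
        symmetric_key=SYMMETRIC_KEY,
        hostname=registration_result.registration_state.assigned_hub,
        device_id=registration_result.registration_state.device_id,
    )
    device_client.connect()
    device_client.send_message(Message("connectivity check"))
    device_client.shutdown()
```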
-
-## Testing your device
-
-Once you have successfully connected your device to AICS, you are now ready to run the certification tests specific to the certification program you are applying for.
-
-1. **For Azure Certified Device certification**: In the 'Select device capability' tab, you will review and select which tests you wish to run on your device.
-1. **For IoT Plug and Play certification**: Carefully review the parameters that will be checked during the test that you declared in your device model.
-1. **For Edge Managed certification**: No additional steps are required beyond demonstrating connectivity.
-1. Once you have completed the necessary preparations for the specified certification program, select `Next` to proceed to the 'Test' phase.
-1. Select `Run tests` on the page to begin running AICS with your device.
-1. Once you have received a notification that you have passed the tests, select `Finish` to return to your summary page.
-
-![Test passed](./media/images/test-pass.png)
-
-7. If you have additional questions or need troubleshooting assistance with AICS, visit our troubleshooting guide.
-
-> [!NOTE]
-> While you will be able to complete the online certification process for IoT Plug and Play and Edge Managed without having to submit your device for manual review, you may be contacted by an Azure Certified Device team member for further device validation beyond what is tested through our automation service.
-
-## Submitting your device for review
-
-Once you have completed all of the mandatory fields in the 'Device details' section and successfully passed the automated testing in the 'Connect & test' process, you can now notify the Azure Certified Device team that you are ready for certification review.
-
-1. Select `Submit for review` on the project summary page:
-
- ![Review and Certify link](./media/images/review-and-certify.png)
-
-1. Confirm your submission in the pop-up window. Once a device has been submitted, all device details will be read-only until editing is requested. (See [How to edit your device information after publishing](./how-to-edit-published-device.md).)
-
- ![Start Certification review dialog](./media/images/start-certification-review.png)
-
-1. Once the project is submitted, the project summary page will indicate the project is `Under Certification Review` by the Azure Certification team:
-
- ![Under Review](./media/images/review-and-certify-under-review.png)
-
-1. Within 5-7 business days, expect an email response from the Azure Certification team to the address provided in your company profile regarding the status of your device submission.
-
- - Approved submission
- Once your project has been reviewed and approved, you will receive an email. The email will include a set of files including the Azure Certified Device badge, badge usage guidelines, and other information on how to amplify the message that your device is certified. Congratulations!
-
- - Pending submission
- In the case your project is not approved, you will be able to make changes to the project details and then resubmit the device for certification once ready. An email will be sent with information on why the project was not approved and steps to resubmit for certification.
-
-## Next steps
-
-Congratulations! Your device has now successfully passed all of the tests and has been approved through the Azure Certified Device program. You can now publish your device to our Azure Certified Device catalog, where customers can shop for your products with confidence in their performance with Azure.
-> [!div class="nextstepaction"]
-> [Tutorial: Publishing your device](tutorial-04-publishing-your-device.md)
-
certification Tutorial 04 Publishing Your Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/tutorial-04-publishing-your-device.md
- Title: Azure Certified Device program - Tutorial - Publishing your device
-description: A step-by-step guide to publish your certified device to the Azure Certified Device catalog
---- Previously updated : 03/03/2021---
-# Tutorial: Publish your device
-
-Congratulations on successfully certifying your device! Your product is joining an ecosystem of exceptional devices that work great with Azure. Now that your device has been certified, you can optionally publish your device details to the [Azure Certified Device catalog](https://devicecatalog.azure.com) for a world of customers to discover and buy.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Publish your device to the Azure Certified Device catalog
-
-## Prerequisites
--- You should be signed in and have an **approved** project for your device on the [Azure Certified Device portal](https://certify.azure.com). If you don't have a certified device, you can view this [tutorial](tutorial-01-creating-your-project.md) to get started.-
-## Publishing your device
-
-Publishing your device is a simple process that will help bring customers to your product from the Azure Certified Device catalog.
-
-1. To publish your device, click `Publish to Device Catalog` on the project summary page.
-
- ![Publish to Catalog](./media/images/publish-to-catalog.png)
-
-1. Confirm the publication in the pop-up window.
-
- ![Publish to Catalog confirmation](./media/images/publish-to-catalog-confirm.png)
-
-1. You will receive a notification at the email address in your company profile once the device has been processed for the Azure Certified Device catalog.
-
-## Next steps
-
-Congratulations! Your certified device is now a part of the Azure Certified Device catalog, where customers can shop for your products with confidence in their performance with Azure! Thank you for being part of our ecosystem of certified IoT products. You will notice that your project page is now read-only. If you wish to make any updates to your device information, see our how-to guide.
-> [!div class="nextstepaction"]
-> [How to edit your published device](how-to-edit-published-device.md)
-
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- **VMs require network access to Chaos Studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:
  - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security).
  - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md) are also required.
+- **Network Disconnect Fault** - The agent-based "Network Disconnect" fault only affects new connections. Existing active connections continue to persist. You can restart the service or process to force connections to break.
- **Version support** - Review the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) page for more information on operating system, browser, and integration version compatibility.
- **Terraform** - Chaos Studio doesn't support Terraform at this time.
- **PowerShell modules** - Chaos Studio doesn't have dedicated PowerShell modules at this time. For PowerShell, use our REST API (see the sketch after this list).
- **Azure CLI** - Chaos Studio doesn't have dedicated AzCLI modules at this time. Use our REST API from AzCLI.
- **Azure Policy** - Chaos Studio doesn't support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time.
-- **Private Link** - To use Private Link for Agent Service, you need to have your subscription allowlisted and use our preview API version. We don't support Azure portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults.
+- **Private Link** - We don't support Azure portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults.
+- **Customer-Managed Keys** - You need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We don't support portal UI experiments using CMK at this time.
- **Lockbox** - At present, we don't have integration with Customer Lockbox.
- **Java SDK** - At present, we don't have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request.
- **Built-in roles** - Chaos Studio doesn't currently have its own built-in roles. Permissions can be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity.
- **Agent Service Tags** - Currently we don't have service tags available for our Agent-based faults.
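Because there are no dedicated PowerShell or Azure CLI modules, one workable pattern is to call the REST API directly. The snippet below is a minimal sketch that lists Chaos Studio experiments in a subscription using `azure-identity` and `requests`; the subscription ID is a placeholder, and you should confirm the `api-version` against the current Chaos Studio REST reference.

```python
# pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
api_version = "2023-11-01"              # confirm against the current REST reference

# Acquire an ARM token with whatever credential is available (CLI login, managed identity, ...).
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Chaos/experiments?api-version={api_version}"
)
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

for experiment in response.json().get("value", []):
    print(experiment["name"])
```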
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
You can use startup tasks to perform operations before a role starts. Installing
if %ERRORLEVEL%== 0 echo %date% %time% : Successfully downloaded .NET framework %netfx% setup file. >> %startuptasklog%
goto install
- install:
+ :install
REM ***** Installing .NET *****
echo Installing .NET with commandline: start /wait %~dp0%netfxinstallfile% /q /serialdownload /log %netfxinstallerlog% /chainingpackage "CloudService Startup Task" >> %startuptasklog%
start /wait %~dp0%netfxinstallfile% /q /serialdownload /log %netfxinstallerlog% /chainingpackage "CloudService Startup Task" >> %startuptasklog% 2>>&1
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
Azure AI services can be easily integrated into any application regardless of th
### Build applications that can play and recognize speech
-With the ability to, connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Azure AI services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region, through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience.
+By connecting your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [Speech Synthesis Markup Language (SSML)](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural-sounding audio to users. Through the Azure AI services connection, you can also use the Speech-to-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced by creating custom models within Azure AI services that are bespoke to your domain and region, choosing which languages are spoken and recognized, and using custom voices and custom models built based on your experience.
## Run time flow

[![Screen shot of integration run time flow.](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox)

## Azure portal experience
-You will need to connect your Azure Communication Services resource with the Azure AI resource through the Azure portal. There are two ways you can accomplish this step:
+You'll need to connect your Azure Communication Services resource with the Azure AI resource through the Azure portal. There are two ways you can accomplish this step:
- By navigating through the steps of the Cognitive Services tab in your Azure Communication Services resource (recommended).
- Manually adding the Managed Identity to your Azure Communication Services resource. This step is more advanced and requires a little more effort to connect your Azure Communication Services to your Azure AI services.
You will need to connect your Azure Communication Services resource with the Azu
### Connecting through the Azure portal

1. Open your Azure Communication Services resource and click on the Cognitive Services tab.
-2. If system-assigned managed identity isn't enabled, you will need to enable it.
+2. If system-assigned managed identity isn't enabled, you'll need to enable it.
3. In the Cognitive Services tab, click on the "Enable Managed Identity" button.

 [![Screenshot of Enable Managed Identity button.](./media/enabled-identity.png)](./media/enabled-identity.png#lightbox)
This integration between Azure Communication Services and Azure AI services is o
- brazilsouth
- uaenorth
+## Known limitations
+
+- Text-to-Speech text prompts support a maximum of 400 characters. If your prompt is longer than this, we suggest using SSML for Text-to-Speech-based play actions (see the sketch after this list).
+- For scenarios where you exceed your Speech service quota limit, you can request to increase this limit by following the steps outlined [here](../../../ai-services/speech-service/speech-services-quotas-and-limits.md).
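As a rough illustration of the 400-character guidance above, the sketch below switches a prompt from plain text to a minimal SSML document once it exceeds the limit. The voice name is an arbitrary example, and the call into the play API itself is omitted.

```python
from xml.sax.saxutils import escape

MAX_PLAIN_TEXT_CHARS = 400

def to_play_source(prompt: str, voice_name: str = "en-US-JennyNeural") -> tuple[str, str]:
    """Return ("text", prompt) for short prompts, or ("ssml", document) for longer ones."""
    if len(prompt) <= MAX_PLAIN_TEXT_CHARS:
        return "text", prompt
    # Wrap longer prompts in a minimal SSML document instead of plain text.
    ssml = (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">'
        f'<voice name="{voice_name}">{escape(prompt)}</voice>'
        "</speak>"
    )
    return "ssml", ssml

kind, payload = to_play_source("Thank you for calling Contoso support. " * 15)
print(kind, len(payload))
```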
+
## Next steps

- Learn about [playing audio](../../concepts/call-automation/play-action.md) to callers using Text-to-Speech.
- Learn about [gathering user input](../../concepts/call-automation/recognize-action.md) with Speech-to-Text.
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
As part of compliance requirements in various industries, vendors are expected t
![Screenshot of flow for play action.](./media/play-action.png)

## Known limitations
-- Play action isn't enabled to work with Teams Interoperability.
+- Text-to-Speech text prompts support a maximum of 400 characters. If your prompt is longer than this, we suggest using SSML for Text-to-Speech-based play actions.
+- For scenarios where you exceed your Speech service quota limit, you can request to increase this limit by following the steps outlined [here](../../../ai-services/speech-service/speech-services-quotas-and-limits.md).

## Next Steps

- Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
The recognize action can be used for many reasons, here are a few examples of ho
## Known limitations
- In-band DTMF is not supported; use RFC 2833 DTMF instead.
+- Text-to-Speech text prompts support a maximum of 400 characters. If your prompt is longer than this, we suggest using SSML for Text-to-Speech-based play actions.
+- For scenarios where you exceed your Speech service quota limit, you can request to increase this limit by following the steps outlined [here](../../../ai-services/speech-service/speech-services-quotas-and-limits.md).

## Next steps

- Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md).
communication-services Custom Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/custom-context.md
Title: Azure Communication Services Call Automation how-to for passing call contextual data in Call Automation description: Provides a how-to guide for passing contextual information with Call Automation.-+
For all the code samples, `client` is CallAutomationClient object that can be cr
## Technical parameters Call Automation supports up to 5 custom SIP headers and 1000 custom VOIP headers. Additionally, developers can include a dedicated User-To-User header as part of SIP headers list.
-The custom SIP header key must start with a mandatory ΓÇÿX-MS-Custom-ΓÇÖ prefix. The maximum length of a SIP header key is 64 chars, including the X-MS-Custom prefix. The maximum length of SIP header value is 256 chars. The same limitations apply when configuring the SIP headers on your SBC.
+The custom SIP header key must start with a mandatory ΓÇÿX-MS-Custom-ΓÇÖ prefix. The maximum length of a SIP header key is 64 chars, including the X-MS-Custom prefix. The SIP header key may consist of alphanumeric characters and a few selected symbols which includes ".", "!", "%", "\*", "_", "+", "~", "-". The maximum length of SIP header value is 256 chars. The same limitations apply when configuring the SIP headers on your SBC. The SIP header value may consist of alphanumeric characters and a few selected symbols which includes "=", ";", ".", "!", "%", "*", "_", "+", "~", "-".
The maximum length of a VOIP header key is 64 chars. These headers can be sent without ΓÇÿx-MS-CustomΓÇÖ prefix. The maximum length of VOIP header value is 1024 chars.
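To make these limits concrete, here is a small, illustrative validator (not part of any SDK) that checks a custom SIP header key and value against the documented prefix, length, and character rules.

```python
import re

SIP_PREFIX = "X-MS-Custom-"
KEY_MAX, VALUE_MAX = 64, 256
# Allowed characters per the limits above (alphanumerics plus the listed symbols).
KEY_PATTERN = re.compile(r"^[A-Za-z0-9.!%*_+~-]+$")
VALUE_PATTERN = re.compile(r"^[A-Za-z0-9=;.!%*_+~-]+$")

def validate_custom_sip_header(key: str, value: str) -> None:
    if not key.startswith(SIP_PREFIX):
        raise ValueError(f"SIP header key must start with '{SIP_PREFIX}'")
    if len(key) > KEY_MAX:
        raise ValueError(f"SIP header key exceeds {KEY_MAX} characters (prefix included)")
    if len(value) > VALUE_MAX:
        raise ValueError(f"SIP header value exceeds {VALUE_MAX} characters")
    if not KEY_PATTERN.match(key) or not VALUE_PATTERN.match(value):
        raise ValueError("SIP header contains characters outside the allowed set")

validate_custom_sip_header("X-MS-Custom-order-id", "12345")
```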
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
This guide will help you get started with playing audio files to participants by
|PlayFailed | 500 | 9999 | Unknown internal server error |
|PlayFailed | 500 | 8572 | Action failed due to play service shutdown. |
+## Known limitations
+- Text-to-Speech text prompts support a maximum of 400 characters. If your prompt is longer than this, we suggest using SSML for Text-to-Speech-based play actions.
+- For scenarios where you exceed your Speech service quota limit, you can request to increase this limit by following the steps outlined [here](../../../ai-services/speech-service/speech-services-quotas-and-limits.md).
## Clean up resources
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
This guide will help you get started with recognizing DTMF input provided by par
## Known limitations
- In-band DTMF is not supported; use RFC 2833 DTMF instead.
+- Text-to-Speech text prompts support a maximum of 400 characters. If your prompt is longer than this, we suggest using SSML for Text-to-Speech-based play actions.
+- For scenarios where you exceed your Speech service quota limit, you can request to increase this limit by following the steps outlined [here](../../../ai-services/speech-service/speech-services-quotas-and-limits.md).
## Clean up resources
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
az group create --name myResourceGroup --location northeurope
Create a VM with the [az vm create](/cli/azure/vm) command. The following example creates a VM named *myVM* and adds a user account named *azureuser*. The `--generate-ssh-keys` parameter is used to automatically generate an SSH key and put it in the default key location (*~/.ssh*). To use a specific set of keys instead, use the `--ssh-key-values` option.
-For `size`, select a confidential VM size. For more information, see [supported confidential VM families](virtual-machine-solutions.md).
+For `size`, select a confidential VM size. For more information, see [supported confidential VM families](virtual-machine-options.md).
Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Secure Boot is enabled by default, but is optional for `VMGuestStateOnly`. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption and encryption at host, see [confidential OS disk encryption](confidential-vm-overview.md) and [encryption at host](/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli).
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
To create a confidential VM in the Azure portal using an Azure Marketplace image
h. Toggle [Generation 2](../virtual-machines/generation-2.md) images. Confidential VMs only run on Generation 2 images. To ensure this, under **Image**, select **Configure VM generation**. In the pane **Configure VM generation**, for **VM generation**, select **Generation 2**. Then, select **Apply**.
- i. For **Size**, select a VM size. For more information, see [supported confidential VM families](virtual-machine-solutions.md).
+ i. For **Size**, select a VM size. For more information, see [supported confidential VM families](virtual-machine-options.md).
j. For **Authentication type**, if you're creating a Linux VM, select **SSH public key** . If you don't already have SSH keys, [create SSH keys for your Linux VMs](../virtual-machines/linux/mac-create-ssh-keys.md).
confidential-computing Trusted Execution Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/trusted-execution-environment.md
Azure confidential computing has two offerings: one for enclave-based workloads
The enclave-based offering uses [Intel Software Guard Extensions (SGX)](virtual-machine-solutions-sgx.md) to create a protected memory region called Encrypted Protected Cache (EPC) within a VM. This allows customers to run sensitive workloads with strong data protection and privacy guarantees. Azure Confidential computing launched the first enclave-based offering in 2020.
-The lift and shift offering uses [AMD SEV-SNP (GA)](virtual-machine-solutions.md) or [Intel TDX (preview)](tdx-confidential-vm-overview.md) to encrypt the entire memory of a VM. This allows customers to migrate their existing workloads to Azure confidential Compute without any code changes or performance degradation.
+The lift and shift offering uses [AMD SEV-SNP (GA)](virtual-machine-options.md) or [Intel TDX (preview)](tdx-confidential-vm-overview.md) to encrypt the entire memory of a VM. This allows customers to migrate their existing workloads to Azure confidential Compute without any code changes or performance degradation.
Many of these underlying technologies are used to deliver [confidential IaaS and PaaS services](overview-azure-products.md) in the Azure platform making it simple for customers to adopt confidential computing in their solutions.
confidential-computing Virtual Machine Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-options.md
+
+ Title: Azure Confidential VM options
+description: Azure Confidential Computing offers multiple options for confidential virtual machines on AMD and Intel processors.
+++++++ Last updated : 11/15/2023++
+# Azure Confidential VM options
+
+Azure offers multiple confidential VM options leveraging Trusted Execution Environment (TEE) technologies from both AMD and Intel to harden the virtualization environment. These technologies enable you to provision confidential computing environments with excellent price-to-performance without code changes.
+
+AMD confidential VMs leverage [Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP)](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf) which was introduced with 3rd Gen AMD EPYC™ processors. Intel confidential VMs use [Trust Domain Extensions (TDX)](https://cdrdv2-public.intel.com/690419/TDX-Whitepaper-February2022.pdf) which was introduced with 4th Gen Intel® Xeon® processors.
+
+## Sizes
+
+You can create confidential VMs in the following size families:
+
+| Size Family | TEE | Description |
+| | | -- |
+| **DCasv5-series** | AMD SEV-SNP | General purpose CVM with remote storage. No local temporary disk. |
+| **DCesv5-series** | Intel TDX | General purpose CVM with remote storage. No local temporary disk. |
+| **DCadsv5-series** | AMD SEV-SNP | General purpose CVM with local temporary disk. |
+| **DCedsv5-series** | Intel TDX | General purpose CVM with local temporary disk. |
+| **ECasv5-series** | AMD SEV-SNP | Memory-optimized CVM with remote storage. No local temporary disk. |
+| **ECesv5-series** | Intel TDX | Memory-optimized CVM with remote storage. No local temporary disk. |
+| **ECadsv5-series** | AMD SEV-SNP | Memory-optimized CVM with local temporary disk. |
+| **ECedsv5-series** | Intel TDX | Memory-optimized CVM with local temporary disk. |
+
+> [!NOTE]
+> Memory-optimized confidential VMs offer double the ratio of memory per vCPU count.
+
+## Azure CLI commands
+
+You can use the [Azure CLI](/cli/azure/install-azure-cli) with your confidential VMs.
+
+To see a list of confidential VM sizes, run the following command. Set the `vm_series` variable to the series you want to use (for example, `DCASv5`). The output shows information about available regions and availability zones.
+
+```azurecli-interactive
+vm_series='DCASv5'
+az vm list-skus \
+ --size dc \
+ --query "[?family=='standard${vm_series}Family'].{name:name,locations:locationInfo[0].location,AZ_a:locationInfo[0].zones[0],AZ_b:locationInfo[0].zones[1],AZ_c:locationInfo[0].zones[2]}" \
+ --all \
+ --output table
+```
+
+For a more detailed list, run the following command instead:
+
+```azurecli-interactive
+vm_series='DCASv5'
+az vm list-skus \
+ --size dc \
+ --query "[?family=='standard${vm_series}Family']"
+```
+
+## Deployment considerations
+
+Consider the following settings and choices before deploying confidential VMs.
+
+### Azure subscription
+
+To deploy a confidential VM instance, consider a [pay-as-you-go subscription](/azure/virtual-machines/linux/azure-hybrid-benefit-linux) or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores.
+
+You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes.
+
+To request a quota increase, [open an online customer support request](../azure-portal/supportability/per-vm-quota-requests.md).
+
+If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. You only incur charges for cores that you use.
+
+### Pricing
+
+For pricing options, see the [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/).
+
+### Regional availability
+
+For availability information, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
+
+### Resizing
+
+Confidential VMs run on specialized hardware, so you can only [resize confidential VM instances](confidential-vm-faq.yml#can-i-convert-a-dcasv5-ecasv5-cvm-into-a-dcesv5-ecesv5-cvm-or-a-dcesv5-ecesv5-cvm-into-a-dcasv5-ecasv5-cvm-) to other confidential sizes in the same region. For example, if you have a DCasv5-series VM, you can resize to another DCasv5-series instance or a DCesv5-series instance.
+
+It's not possible to resize a non-confidential VM to a confidential VM.
+
+### Guest OS support
+
+OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include:
+
+- Ubuntu 20.04 LTS (AMD SEV-SNP supported only)
+- Ubuntu 22.04 LTS
+- Red Hat Enterprise Linux 9.3 (AMD SEV-SNP supported only)
+- Windows Server 2019 Datacenter - x64 Gen 2 (AMD SEV-SNP supported only)
+- Windows Server 2019 Datacenter Server Core - x64 Gen 2 (AMD SEV-SNP supported only)
+- Windows Server 2022 Datacenter - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition Core - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2
+- Windows Server 2022 Datacenter Server Core - x64 Gen 2
+- Windows 11 Enterprise N, version 22H2 -x64 Gen 2
+- Windows 11 Pro, version 22H2 ZH-CN -x64 Gen 2
+- Windows 11 Pro, version 22H2 -x64 Gen 2
+- Windows 11 Pro N, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2
+
+As we work to onboard more OS images with confidential OS disk encryption, there are various images available in early preview that can be tested. You can sign up below:
+
+- [Red Hat Enterprise Linux 9.3 (Support for Intel TDX)](https://aka.ms/tdx-rhel-93-preview)
+- [SUSE Enterprise Linux 15 SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)
+- [SUSE Enterprise Linux 15 SAP SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)
+
+For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
+
+### High availability and disaster recovery
+
+You're responsible for creating high availability and disaster recovery solutions for your confidential VMs. Planning for these scenarios helps minimize and avoid prolonged downtime.
+
+### Deployment with ARM templates
+
+Azure Resource Manager is the deployment and management service for Azure. You can:
+
+- Secure and organize your resources after deployment with the management features, like access control, locks, and tags.
+- Create, update, and delete resources in your Azure subscription using the management layer.
+- Use [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/overview.md) to deploy confidential VMs on AMD processors. There is an available [ARM template for confidential VMs](https://aka.ms/CVMTemplate).
+
+Make sure to specify the following properties for your VM in the parameters section (`parameters`); an illustrative parameters-file sketch follows the list:
+
+- VM size (`vmSize`). Choose from the different [confidential VM families and sizes](#sizes).
+- OS image name (`osImageName`). Choose from the qualified OS images.
+- Disk encryption type (`securityType`). Choose from VMGS-only encryption (`VMGuestStateOnly`) or full OS disk pre-encryption (`DiskWithVMGuestState`), which might result in longer provisioning times. For Intel TDX instances only, we also support another security type (`NonPersistedTPM`), which has no VMGS or OS disk encryption.
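For illustration, the sketch below writes out a parameters file with those three properties (using Python for consistency with the other sketches in this document). The size and image values are placeholder assumptions; the exact parameter names and allowed values are defined by the template you deploy.

```python
import json

# Placeholder values -- check your template for the exact parameter names and allowed values.
deployment_parameters = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vmSize": {"value": "Standard_DC4as_v5"},            # a DCasv5-series size
        "osImageName": {"value": "Ubuntu 22.04 LTS Gen 2"},   # assumed image label from the template
        "securityType": {"value": "DiskWithVMGuestState"},    # full OS disk pre-encryption
    },
}

with open("cvm.parameters.json", "w") as handle:
    json.dump(deployment_parameters, handle, indent=2)
```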
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy a confidential VM from the Azure portal](quick-create-confidential-vm-portal.md)
+
+For more information see our [Confidential VM FAQ](confidential-vm-faq.yml).
confidential-computing Virtual Machine Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions.md
Title: Azure Confidential VM options
-description: Azure Confidential Computing offers multiple options for confidential virtual machines on AMD and Intel processors.
+ Title: For Deletion
+description: For Deletion
Last updated 11/15/2023
+# For Deletion
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Previously updated : 06/10/2022 Last updated : 02/23/2024
container-apps Dapr Component Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-resiliency.md
Previously updated : 12/13/2023 Last updated : 02/22/2024 # Customer Intent: As a developer, I'd like to learn how to make my container apps resilient using Azure Container Apps.
Resiliency policies proactively prevent, detect, and recover from your container app failures. In this article, you learn how to apply resiliency policies for applications that use Dapr to integrate with different cloud services, like state stores, pub/sub message brokers, secret stores, and more.
-You can configure resiliency policies like retries and timeouts for the following outbound and inbound operation directions via a Dapr component:
+You can configure resiliency policies like retries, timeouts, and circuit breakers for the following outbound and inbound operation directions via a Dapr component:
- **Outbound operations:** Calls from the Dapr sidecar to a component, such as:
  - Persisting or retrieving state
The following screenshot shows how an application uses a retry policy to attempt
- [Timeouts](#timeouts)
- [Retries (HTTP)](#retries)
+- [Circuit breakers](#circuit-breakers)
## Configure resiliency policies
You can choose whether to create resiliency policies using Bicep, the CLI, or th
The following resiliency example demonstrates all of the available configurations.

```bicep
-resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resiliencyPolicies@2023-08-01-preview' = {
+resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resiliencyPolicies@2023-11-02-preview' = {
name: 'my-component-resiliency-policies' parent: '${componentName}' properties: {
resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resilienc
initialDelayInMilliseconds: 1000 maxIntervalInMilliseconds: 10000 }
- }
+ }
+ circuitBreakerPolicy: {
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
+ }
} inboundPolicy: { timeoutPolicy: {
resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resilienc
initialDelayInMilliseconds: 1000 maxIntervalInMilliseconds: 10000 }
- }
+ }
+ circuitBreakerPolicy: {
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
+ }
} } }
outboundPolicy:
maxIntervalInMilliseconds: 10000 timeoutPolicy: responseTimeoutInSeconds: 15
+ circuitBreakerPolicy:
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
inboundPolicy: httpRetryPolicy: maxRetries: 3 retryBackOff: initialDelayInMilliseconds: 500 maxIntervalInMilliseconds: 5000
+ circuitBreakerPolicy:
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
```

### Update specific policies
In the resiliency policy pane, select **Outbound** or **Inbound** to set policie
Click **Save** to save the resiliency policies.
+> [!NOTE]
+> Currently, you can only set timeout and retry policies via the Azure portal.
+
You can edit or remove the resiliency policies by selecting **Edit resiliency**.

:::image type="content" source="media/dapr-component-resiliency/edit-dapr-component-resiliency.png" alt-text="Screenshot showing how you can edit existing resiliency policies for the applicable Dapr component.":::
properties: {
| `retryBackOff.initialDelayInMilliseconds` | Yes | Delay between first error and first retry. | `1000` |
| `retryBackOff.maxIntervalInMilliseconds` | Yes | Maximum delay between retries. | `10000` |
+### Circuit breakers
+
+Define a `circuitBreakerPolicy` to monitor requests causing elevated failure rates and shut off all traffic to the impacted service when certain criteria are met.
+
+```bicep
+properties: {
+ outbound: {
+ circuitBreakerPolicy: {
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
+ }
+ },
+ inbound: {
+ circuitBreakerPolicy: {
+ intervalInSeconds: 15
+ consecutiveErrors: 10
+ timeoutInSeconds: 5
+ }
+ }
+}
+```
+
+| Metadata | Required | Description | Example |
+| -- | | -- | - |
+| `intervalInSeconds` | No | Cyclical period of time (in seconds) used by the circuit breaker to clear its internal counts. If not provided, the interval is set to the same value as provided for `timeoutInSeconds`. | `15` |
+| `consecutiveErrors` | Yes | Number of request errors allowed to occur before the circuit trips and opens. | `10` |
+| `timeoutInSeconds` | Yes | Time period (in seconds) of open state, directly after failure. | `5` |
+
+#### Circuit breaker process
+
+Specifying `consecutiveErrors` (the circuit trip condition, expressed as
+`consecutiveFailures > $(consecutiveErrors)-1`) sets the number of errors allowed to occur before the circuit trips and opens into a half-open state.
+
+The circuit waits half-open for the `timeoutInSeconds` amount of time, during which the `consecutiveErrors` number of requests must consecutively succeed.
+- _If the requests succeed,_ the circuit closes.
+- _If the requests fail,_ the circuit remains in a half-opened state.
+
+If you didn't set any `intervalInSeconds` value, the circuit resets to a closed state after the amount of time you set for `timeoutInSeconds`, regardless of consecutive request success or failure. If you set `intervalInSeconds` to `0`, the circuit never automatically resets, only moving from half-open to closed state by successfully completing `consecutiveErrors` requests in a row.
+
+If you did set an `intervalInSeconds` value, that determines the amount of time before the circuit is reset to closed state, independent of whether the requests sent in half-opened state succeeded or not.
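The following is a simplified, generic sketch of the closed, open, and half-open cycle driven by `consecutiveErrors` and `timeoutInSeconds`. It's meant only to illustrate the states described above; it doesn't reproduce the service's exact half-open counting or the `intervalInSeconds` reset behavior.

```python
import time

class SimpleCircuitBreaker:
    """Toy circuit breaker: closed -> open after N consecutive errors, half-open after a timeout."""

    def __init__(self, consecutive_errors: int, timeout_in_seconds: float):
        self.consecutive_errors = consecutive_errors
        self.timeout_in_seconds = timeout_in_seconds
        self.state = "closed"
        self.failure_count = 0
        self.success_count = 0
        self.opened_at = 0.0

    def call(self, operation):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.timeout_in_seconds:
                # Timeout elapsed: allow trial traffic through.
                self.state = "half-open"
                self.success_count = 0
            else:
                raise RuntimeError("circuit open: request short-circuited")
        try:
            result = operation()
        except Exception:
            self._record_failure()
            raise
        self._record_success()
        return result

    def _record_failure(self):
        if self.state == "half-open":
            # A trial request failed: open the circuit again.
            self.state = "open"
            self.opened_at = time.monotonic()
            return
        self.failure_count += 1
        if self.failure_count >= self.consecutive_errors:
            self.state = "open"          # trip condition reached
            self.opened_at = time.monotonic()

    def _record_success(self):
        if self.state == "half-open":
            self.success_count += 1
            if self.success_count >= self.consecutive_errors:
                self.state = "closed"    # enough consecutive successes: close the circuit
                self.failure_count = 0
        else:
            self.failure_count = 0       # any success resets the consecutive-error count

breaker = SimpleCircuitBreaker(consecutive_errors=10, timeout_in_seconds=5)
# breaker.call(lambda: some_request())  # wrap outbound calls with the breaker
```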
+
## Resiliency logs

From the *Monitoring* section of your container app, select **Logs**.
container-apps Service Discovery Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-discovery-resiliency.md
When you apply a policy to a container app, the rules are applied to all request
The following resiliency example demonstrates all of the available configurations. ```bicep
-resource myPolicyDoc 'Microsoft.App/containerApps/resiliencyPolicies@2023-08-01-preview' = {
+resource myPolicyDoc 'Microsoft.App/containerApps/resiliencyPolicies@2023-11-02-preview' = {
name: 'my-app-resiliency-policies' parent: '${appName}' properties: {
container-instances Container Instances Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Container Instance with a public IP address using Terraform
cosmos-db Cosmos Db Vs Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cosmos-db-vs-mongodb-atlas.md
Last updated 06/03/2023
| MongoDB wire protocol | Yes | Yes |
| Compatible with MongoDB tools and drivers | Yes | Yes |
| Global Distribution | Yes, [globally distributed](../distribute-data-globally.md) with automatic and fast data replication across any number of Azure regions | Yes, globally distributed with manual and scheduled data replication across any number of cloud providers or regions |
-| SLA covers cloud platform | Yes | "Services, hardware, or software provided by a third party, such as cloud platform services on which MongoDB Atlas runs are not covered" |
+| SLA covers cloud platform | Yes | No. "Services, hardware, or software provided by a third party, such as cloud platform services on which MongoDB Atlas runs are not covered" |
| 99.999% availability SLA | [Yes](../high-availability.md) | No |
| Instantaneous Scaling | Yes, [database instantaneously scales](../provision-throughput-autoscale.md) with zero performance impact on your applications | No, requires 1+ hours to vertically scale up and 24+ hours to vertically scale down. Performance impact during scale up may be noticeable |
| True active-active clusters | Yes, with [multi-primary writes](./how-to-configure-multi-region-write.md). Data for the same shard can be written to multiple regions | No |
| Vector Search for AI applications | Yes, with [Azure Cosmos DB for MongoDB vCore Vector Search](./vcore/vector-search.md) | Yes |
+| Vector Search in Free Tier | Yes, with [Azure Cosmos DB for MongoDB vCore Vector Search](./vcore/vector-search.md) | No |
| Integrated text search, geospatial processing | Yes | Yes |
-| Free tier | [1,000 request units (RUs) and 25 GB storage forever](../try-free.md). Prevents you from exceeding limits if you want | Yes, with 512 MB storage |
+| Free tier | [1,000 request units (RUs) and 25 GB storage forever](../try-free.md). Prevents you from exceeding limits if you want. Azure Cosmos DB for MongoDB vCore offers a free tier with 32 GB storage forever. | Yes, with 512 MB storage |
| Live migration | Yes | Yes |
| Azure Integrations | Native [first-party integrations](./integrations-overview.md) with Azure services such as Azure Functions, Azure Logic Apps, Azure Stream Analytics, and Power BI and more | Limited number of third party integrations |
| Choice of instance configuration | Yes, with [Azure Cosmos DB for MongoDB vCore](./vcore/introduction.md) | Yes |
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/introduction.md
Last updated 08/28/2023
Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture. With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.
+## Build AI-Driven Applications with a Single Database Solution
+
+Azure Cosmos DB for MongoDB vCore empowers generative AI applications with an integrated **Vector Search** feature. This enables efficient indexing and querying of data by characteristics for advanced use cases such as generative AI, without the complexity of external integrations. Unlike MongoDB Atlas and similar platforms, Azure Cosmos DB for MongoDB vCore keeps all data within the database for vector searches, ensuring simplicity and security. Even our free tier offers this capability, making sophisticated AI features accessible without additional cost.
+
+
## Effortless integration with the Azure platform

Azure Cosmos DB for MongoDB vCore provides a comprehensive and integrated solution for resource management, making it easy for developers to seamlessly manage their resources using familiar Azure tools. The service features deep integration into various Azure products, such as Azure Monitor and Azure CLI. This deep integration ensures that developers have everything they need to work efficiently and effectively.
Here are the current tiers for the service:
| Cluster tier | Base storage | RAM | vCPUs |
| --- | --- | --- | --- |
+| M25 | 32 GB | 8 GB | 2 burstable |
| M30 | 128 GB | 8 GB | 2 |
| M40 | 128 GB | 16 GB | 4 |
| M50 | 128 GB | 32 GB | 8 |
| M60 | 128 GB | 64 GB | 16 |
| M80 | 128 GB | 128 GB | 32 |
+| M200 | 128 GB | 256 GB | 64 |
+| M300 | 128 GB | 324 GB | 48 |
+| M400 | 128 GB | 432 GB | 64 |
+| M600 | 128 GB | 640 GB | 80 |
Azure Cosmos DB for MongoDB vCore is organized into easy to understand cluster tiers based on vCPUs, RAM, and attached storage. These tiers make it easy to lift and shift your existing workloads or build new applications.
cosmos-db Multi Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/multi-cloud.md
+
+ Title: Azure Cosmos DB for MongoDB vCore is your multi-cloud solution
+description: Azure Cosmos DB for MongoDB vCore offers a flexible, multi-cloud database service, using the MongoDB wire protocol for seamless migration and integration across environments.
+++++ Last updated : 02/12/2024++
+# Azure Cosmos DB for MongoDB vCore: Your Multi-Cloud Solution
+Azure Cosmos DB for MongoDB vCore represents a groundbreaking approach to database management, offering unparalleled flexibility and a multi-cloud capability that stands out in the modern cloud ecosystem. This document dives into the core aspects of Azure Cosmos DB for MongoDB vCore that make it an exceptional choice for organizations seeking a vendor-neutral and multi-cloud database service.
+
+## MongoDB Wire Protocol Compatibility
+Azure Cosmos DB for MongoDB is compatible with the MongoDB wire protocol. This compatibility ensures that Azure Cosmos DB seamlessly integrates with MongoDB's ecosystem, including services hosted in other clouds and on-premises environments. It allows a wide range of MongoDB tools and applications to communicate with Azure Cosmos DB without any modifications, ensuring a smooth and efficient migration or integration process.
+
+## Multi-Cloud and On-Premises Support
+The support for MongoDB wire protocol extends Azure Cosmos DB for MongoDB vCore's reach beyond Azure, making it an ideal solution for multi-cloud strategies. Organizations can use Azure Cosmos DB alongside other MongoDB services across different cloud providers or in on-premises data centers. This flexibility facilitates a hybrid cloud approach, allowing businesses to distribute their workloads across various environments based on their unique requirements and constraints.
+
+## Familiar Architecture and Easy Migration
+Azure Cosmos DB for MongoDB vCore is designed with a familiar architecture that reduces the learning curve and operational overhead for teams accustomed to MongoDB. This design philosophy makes it straightforward to "lift and shift" existing MongoDB databases to Azure Cosmos DB, or move them back to on-premises or another cloud provider if needed. The ease of migration and interoperability ensures that organizations are not locked into a single vendor, providing the freedom to choose the best environment for their needs.
+
+## Proven Experience and Fully Managed Service
+Since its general availability in 2017 with the Request Unit (RU) based service, Azure Cosmos DB for MongoDB has enabled users to run their MongoDB workloads on a native Azure service. This extensive experience underscores Microsoft's commitment to providing a robust, scalable, and fully managed MongoDB-compatible database solution. The Azure Cosmos DB team manages the database infrastructure, allowing users to focus on developing their applications without worrying about the underlying database management tasks.
+
+## Conclusion
+Azure Cosmos DB for MongoDB vCore stands out as a flexible, multi-cloud compatible database service that uses the MongoDB wire protocol for seamless integration and migration. Its vendor-neutral approach, coupled with support for multi-cloud and on-premises environments, ensures that organizations have the freedom and flexibility to run their applications wherever they choose. With almost a decade of experience in offering MongoDB-compatible services and the backing of a fully managed service by Microsoft Azure, Azure Cosmos DB for MongoDB vCore is the optimal choice for businesses looking to scale and innovate in the cloud.
+
+## Next steps
+
+- Get started by [creating a cluster](quickstart-portal.md).
+- Review options for [migrating from MongoDB to Azure Cosmos DB for MongoDB vCore](migration-options.md).
+++
cosmos-db Optimize Cost Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-regions.md
Previously updated : 08/26/2021 Last updated : 02/22/2024 # Optimize multi-region cost in Azure Cosmos DB+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-You can add and remove regions to your Azure Cosmos DB account at any time. The throughput that you configure for various Azure Cosmos DB databases and containers is reserved in each region associated with your account. If the throughput provisioned per hour, that is the sum of RU/s configured across all the databases and containers for your Azure Cosmos DB account is `T` and the number of Azure regions associated with your database account is `N`, then the total provisioned throughput for your Azure Cosmos DB account, for a given hour is equal to `T x N RU/s`.
+You can add and remove regions to your Azure Cosmos DB account at any time. The throughput that you configure for various Azure Cosmos DB databases and containers is reserved in each region associated with your account. If the throughput provisioned per hour, that is, the sum of request units per second (RU/s) configured across all the databases and containers for your Azure Cosmos DB account, is `T`, and the number of Azure regions associated with your database account is `N`, then the total provisioned throughput for your Azure Cosmos DB account for a given hour is equal to `T x N` RU/s.
-Provisioned throughput with single write region costs $0.008/hour per 100 RU/s and provisioned throughput with multiple writable regions costs $0.016/per hour per 100 RU/s. To learn more, see Azure Cosmos DB [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+Provisioned throughput with single write region and provisioned throughput with multiple writable regions can vary in cost. For more information, see [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
## Costs for multiple write regions
-In a multi-region writes system, the net available RUs for write operations increases `N` times where `N` is the number of write regions. Unlike single region writes, every region is now writable and supports conflict resolution. From the cost planning point of view, to perform `M` RU/s worth of writes worldwide, you will need to provision M `RUs` at a container or database level. You can then add as many regions as you would like and use them for writes to perform `M` RU worth of worldwide writes.
+In a multi-region writes system, the net available RU/s for write operations increases `N` times where `N` is the number of write regions. Unlike single region writes, every region is now writable and supports conflict resolution. From the cost planning point of view, to perform `M` RU/s worth of writes worldwide, you need to configure `M` RU/s at a container or database level. You can then add as many regions as you would like and use them for writes to perform `M` RU/s worth of worldwide writes.
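As a hedged sketch of what that looks like from the command line, the following `az cosmosdb update` call adds a second region and turns on multiple write regions; the account, resource group, and region names are placeholders:

```azurecli
# Add a second region and enable multiple write regions
# (account, resource group, and region names are placeholders)
az cosmosdb update \
  --name mycosmosaccount \
  --resource-group myResourceGroup \
  --locations regionName=westus failoverPriority=0 isZoneRedundant=false \
  --locations regionName=eastus failoverPriority=1 isZoneRedundant=false \
  --enable-multiple-write-locations true
```

Once both regions are writable, the `M` RU/s you provision applies to writes in every region, which is what the `T x N` billing formula above reflects.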
### Example
-Consider that you have a container in West US configured for single-region writes, provisioned with throughput of 10K RU/s, storing 0.5 TB of data this month. LetΓÇÖs assume you add a region, East US, with the same storage and throughput and you want the ability to write to the containers in both the regions from your app. Your new total monthly bill (assuming 730 hours in a month) will be as follows:
+Consider that you have a container in a single-region write scenario. That container is provisioned with throughput of `10K` RU/s and is storing `0.5` TB of data this month. Now, let's assume you add another region with the same storage and throughput and you want the ability to write to the containers in both regions from your app.
+
+This example details your new total monthly consumption:
+
+| | Monthly usage |
+| --- | --- |
+| **Throughput bill for container in a single write region** | `10K RU/s * 730 hours` |
+| **Throughput bill for container in multiple write regions (two)** | `2 * 10K RU/s * 730 hours` |
+| **Storage bill for container in a single write region** | `0.5 TB (or 512 GB)` |
+| **Storage bill for container in two write regions** | `2 * 0.5 TB (or 1,024 GB)` |
-|**Item**|**Usage (monthly)**|**Rate**|**Monthly Cost**|
-|-|-|-|-|
-|Throughput bill for container in West US (single write region) |10K RU/s * 730 hours |$0.008 per 100 RU/s per hour |$584 |
-|Throughput bill for container in 2 regions - West US & East US (multiple write regions) |2 * 10K RU/s * 730 hours |$0.016 per 100 RU/s per hour |$2,336 |
-|Storage bill for container in West US |0.5 TB (or 512 GB) |$0.25/GB |$128 |
-|Storage bill for container in 2 regions - West US & East US |2 * 0.5 TB (or 1,024 GB) |$0.25/GB |$256 |
+> [!NOTE]
+> This example assumes 730 hours in a month.
## Improve throughput utilization on a per region-basis
-If you have inefficient utilization, for example, one or more under-utilized read regions you can take steps to make the maximum use of the RUs in read regions by using change feed from the read-region or move it to another secondary if over-utilized. You will need to ensure you optimize provisioned throughput (RUs) in the write region first. Writes cost more than reads unless very large queries so maintaining even utilization can be challenging. Overall, monitor the consumed throughput in your regions and add or remove regions on demand to scale your read and write throughput, making to sure understand the impact to latency for any apps that are deployed in the same region.
+If you have inefficient utilization, for example, one or more under-utilized read regions, you can take steps to make the maximum use of the RU/s in those read regions by using change feed from the read region. Or, you can move load to another secondary region if one is over-utilized. You need to ensure you optimize provisioned throughput (RU/s) in the write region first.
-## Next steps
+Writes cost more than reads in most cases, except for very large queries, so maintaining even utilization can be challenging. Overall, monitor the consumed throughput in your regions and add or remove regions on demand to scale your read and write throughput. Make sure you understand the effect on latency for any apps that are deployed in the same region.
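One hedged way to monitor that consumption from the command line is through Azure Monitor metrics; the account details below are placeholders, and the `NormalizedRUConsumption` metric name is an assumption to confirm against the metrics available for your account:

```azurecli
# Check normalized RU consumption for an Azure Cosmos DB account
# (names are placeholders; confirm the metric name in Azure Monitor)
accountid=$(az cosmosdb show \
  --name mycosmosaccount \
  --resource-group myResourceGroup \
  --query id --output tsv)

az monitor metrics list \
  --resource $accountid \
  --metric "NormalizedRUConsumption" \
  --interval PT1H \
  --output table
```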
-Next you can proceed to learn more about cost optimization in Azure Cosmos DB with the following articles:
+## Related content
-* Learn more about [Optimizing for development and testing](optimize-dev-test.md)
-* Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
-* Learn more about [Optimizing throughput cost](optimize-cost-throughput.md)
-* Learn more about [Optimizing storage cost](optimize-cost-storage.md)
-* Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+- Learn more about [Optimizing for development and testing](optimize-dev-test.md)
+- Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
+- Learn more about [Optimizing throughput cost](optimize-cost-throughput.md)
+- Learn more about [Optimizing storage cost](optimize-cost-storage.md)
+- Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md)
+- Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
cost-management-billing Automation Ingest Usage Details Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/automation-ingest-usage-details-overview.md
description: This article explains how to use cost details records to correlate meter-based charges with the specific resources responsible for the charges. Then you can properly reconcile your bill. Previously updated : 12/11/2023 Last updated : 02/22/2024
Sample actual cost report:
| | | | | | | | | | |
| xxxxxxxx-xxxx- xxxx - xxxx -xxxxxxxxxxx | OnDemand | Usage | 24 | 1 | 0.8 | 0.8 | 1 hour | 19.2 | Manual calculation of the actual charge: multiply 24 \* 0.8 \* 1 hour. |
| xxxxxxxx-xxxx- xxxx - xxxx -xxxxxxxxxxx | Reservations/SavingsPlan | Usage | 24 | 1 | 0.8 | 0 | 1 hour | 0 | Manual calculation of the actual charge: multiply 24 \* 0 \* 1 hour. |
-| xxxxxxxx-xxxx- xxxx - xxxx -xxxxxxxxxxx | Reservations | Purchase | 15 | 120 | 0.8 | 120 | 1 hour | 1800 | Manual calculation of the actual charge: multiply 15 \* 120 \* 1 hour. |
+| xxxxxxxx-xxxx- xxxx - xxxx -xxxxxxxxxxx | Reservations | Purchase | 15 | 120 | 120 | 120 | 1 hour | 1800 | Manual calculation of the actual charge: multiply 15 \* 120 \* 1 hour. |
Sample amortized cost report:
>[!NOTE] > - Limitations on `PayGPrice`
-> - For EA customers `PayGPrice` isn't populated when `PricingModel` = `Reservations`, `Spot`, `Marketplace`, or `SavingsPlan`.
-> - For MCA customers, `PayGPrice` isn't populated when `PricingModel` = `Reservations`, `Spot`, or `Marketplace`.
+> - For EA customers `PayGPrice` isn't populated when `PricingModel` = `Reservations`, `Marketplace`, or `SavingsPlan`.
+> - For MCA customers, `PayGPrice` isn't populated when `PricingModel` = `Reservations` or `Marketplace`.
>- Limitations on `UnitPrice`
-> - For EA customers, `UnitPrice` isn't populated when `PricingModel` = `Spot`, or `MarketPlace`.
-> - For MCA customers, `UnitPrice` isn't populated when `PricingModel` = `Reservations`, `Spot`, or `SavingsPlan`.
+> - For EA customers, `UnitPrice` isn't populated when `PricingModel` = `MarketPlace`.
+> - For MCA customers, `UnitPrice` isn't populated when `PricingModel` = `Reservations` or `SavingsPlan`.
## Unexpected charges
cost-management-billing Migrate Ea Balance Summary Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-balance-summary-api.md
description: This article has information to help you migrate from the EA Balance Summary API. Previously updated : 11/17/2023 Last updated : 02/23/2024
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to get their balance summary need to migrate to a replacement Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
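As a hedged sketch of that setup, the following commands create a service principal and grant it a read role at subscription scope; the principal name, role, and scope are placeholders, and the exact role and scope your scenario needs are described in the linked permissions article:

```azurecli
# Create a service principal and grant it read access to cost data
# (name, role, and scope are placeholders; see the ACM API permissions article)
appid=$(az ad sp create-for-rbac --name cost-api-reader --query appId --output tsv)

az role assignment create \
  --assignee $appid \
  --role "Cost Management Reader" \
  --scope "/subscriptions/<subscriptionID>"
```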
cost-management-billing Migrate Ea Marketplace Store Charge Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-marketplace-store-charge-api.md
description: This article has information to help you migrate from the EA Marketplace Store Charge API. Previously updated : 01/31/2024 Last updated : 02/22/2024
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to [get their marketplace store charges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) need to migrate to a replacement Azure Resource Manager API. This article helps you migrate by using the following instructions. It also explains the contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ Endpoints to migrate off: |Endpoint|API Comments|
cost-management-billing Migrate Ea Price Sheet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-price-sheet-api.md
description: This article has information to help you migrate from the EA Price Sheet API. Previously updated : 11/17/2023 Last updated : 02/22/2024
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to get their price sheet need to migrate to a replacement Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
cost-management-billing Migrate Ea Reporting Arm Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reporting-arm-apis-overview.md
description: This article provides an overview about migrating from Azure Enterprise Reporting to Microsoft Cost Management APIs. Previously updated : 11/17/2023 Last updated : 02/22/2024
cost-management-billing Migrate Ea Reserved Instance Charges Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-charges-api.md
description: This article has information to help you migrate from the EA Reserved Instance Charges API. Previously updated : 11/17/2023 Last updated : 02/22/2023
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to obtain reserved instance charges need to migrate to a parity Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
cost-management-billing Migrate Ea Reserved Instance Recommendations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-recommendations-api.md
description: This article has information to help you migrate from the EA Reserved Instance Recommendations API. Previously updated : 11/17/2023 Last updated : 02/22/2023
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to obtain reserved instance recommendations need to migrate to a parity Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
cost-management-billing Migrate Ea Reserved Instance Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-usage-details-api.md
description: This article has information to help you migrate from the EA Reserved Instance Usage Details API. Previously updated : 11/17/2023 Last updated : 02/22/2024
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to obtain reserved instance usage details need to migrate to a parity Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
cost-management-billing Migrate Ea Reserved Instance Usage Summary Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-usage-summary-api.md
description: This article has information to help you migrate from the EA Reserved Instance Usage Summary API. Previously updated : 11/17/2023 Last updated : 02/22/2024
EA customers who were previously using the Enterprise Reporting consumption.azure.com API to obtain reserved instance usage summaries need to migrate to a parity Azure Resource Manager API. Instructions to do this are outlined below along with any contract differences between the old API and the new API.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## Assign permissions to an SPN to call the API Before calling the API, you need to configure a Service Principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
cost-management-billing Migrate Ea Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-usage-details-api.md
description: This article has information to help you migrate from the EA Usage Details APIs. Previously updated : 01/30/2024 Last updated : 02/22/2024
EA customers who were previously using the Enterprise Reporting APIs behind the
The dataset is referred to as *cost details* instead of *usage details*.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ ## New solutions generally available The following table provides a summary of the migration destinations that are available along with a summary of what to consider when choosing which solution is best for you.
cost-management-billing Migrate Enterprise Agreement Billing Periods Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-enterprise-agreement-billing-periods-api.md
description: This article has information to help you migrate from the EA Billing Periods API. Previously updated : 02/21/2024 Last updated : 02/24/2024
EA customers that previously used the [Billing periods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) Enterprise Reporting consumption.azure.com API to get their billing periods need to use different mechanisms to get the data they need. This article helps you migrate from the old API by using replacement APIs.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. [Migrate to Microsoft Cost Management APIs](migrate-ea-reporting-arm-apis-overview.md) before then.
+ Endpoints to migrate off: | **Endpoint** | **API Comments** |
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
description: This article describes the fields in the usage data files. Previously updated : 12/07/2023 Last updated : 02/22/2024
If you're using an older cost details solution and want to migrate to Exports or
- [Migrate from EA to MCA APIs](../costs/migrate-cost-management-api.md) - [Migrate from Consumption Usage Details API](migrate-consumption-usage-details-api.md)
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. Any remaining Enterprise Reporting APIs will stop responding to requests. Customers need to transition to using Microsoft Cost Management APIs before then.
+> To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](migrate-ea-reporting-arm-apis-overview.md).
+ ## List of fields and descriptions The following table describes the important terms used in the latest version of the cost details file. The list covers pay-as-you-go (also called Microsoft Online Services Program), Enterprise Agreement (EA), Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) accounts.
cost-management-billing Migrate Cost Management Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/migrate-cost-management-api.md
Title: Migrate EA to Microsoft Customer Agreement APIs - Azure
description: This article helps you understand the consequences of migrating a Microsoft Enterprise Agreement (EA) to a Microsoft Customer Agreement. Previously updated : 07/19/2022 Last updated : 02/22/2024
The following items help you transition to MCA APIs.
EA APIs use an API key for authentication and authorization. MCA APIs use Microsoft Entra authentication.
+> [!NOTE]
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. Any remaining Enterprise Reporting APIs will stop responding to requests. Customers need to transition to using Microsoft Cost Management APIs before then.
+> To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
+ | Purpose | EA API | MCA API |
 | --- | --- | --- |
 | Balance and credits | [/balancesummary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) | Microsoft.Billing/billingAccounts/billingProfiles/availableBalance |
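A hedged way to try the replacement endpoint is `az rest`; the billing account ID, billing profile ID, and API version below are placeholders to check against the current Billing REST reference:

```azurecli
# Call the MCA available balance endpoint
# (IDs and api-version are placeholders; confirm against the Billing REST docs)
az rest --method get \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountId>/billingProfiles/<billingProfileId>/availableBalance?api-version=2020-05-01"
```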
cost-management-billing Enterprise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-api.md
Previously updated : 02/14/2024 Last updated : 02/22/2024 # Overview of the Azure Enterprise Reporting APIs > [!NOTE]
-> Microsoft no longer updates the Azure Enterprise Reporting APIs. Instead, you should use Cost Management APIs. To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
+> On May 1, 2024, Azure Enterprise Reporting APIs will be retired. Any remaining Enterprise Reporting APIs will stop responding to requests. Customers need to transition to using Microsoft Cost Management APIs before then.
+> To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
The Azure Enterprise Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated Azure Prepayment (previously called monetary commitment) and gain access to custom pricing for Azure resources.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 11/30/2023 Last updated : 2/22/2024
When you exchange reservations, the new purchase currency amount must be greater
## Exchange nonpremium storage for premium storage

You can exchange a reservation purchased for a VM size that doesn't support premium storage to a corresponding VM size that does. For example, an _F1_ for an _F1s_. To make the exchange, go to Reservation Details and select **Exchange**. The exchange doesn't reset the term of the reserved instance or create a new transaction.
-If you're exchanging for a different size, series, region or payment frequency, the term is reset for the new reservation.
+If you're exchanging for a different size, series, region, or payment frequency, the term is reset for the new reservation.
## How transactions are processed
If the original reservation purchase was made from an overage, the refund is ret
For customers that pay by wire transfer, the refunded amount is automatically applied to the next month's invoice. The return or refund doesn't generate a new invoice.
-For customers that pay by credit card, the refunded amount is returned to the credit card that was used for the original purchase. If you've changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+For customers that pay by credit card, the refunded amount is returned to the credit card that was used for the original purchase. If you changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
### Pay-as-you-go invoice payments and CSP program
-The original reservation purchase invoice is canceled and then a new invoice is created for the refund. For exchanges, the new invoice shows the refund and the new purchase. The refund amount is adjusted against the purchase. If you only refunded a reservation, then the prorated amount stays with Microsoft and it's adjusted against a future reservation purchase. If you bought a reservation at pay-as-you-go rates and later move to a CSP, the reservation can be returned and repurchased without a penalty.
+The original reservation purchase invoice is canceled and then a new invoice is created for the refund. For exchanges, the new invoice shows the refund and the new purchase. The refund amount is adjusted against the purchase. If you only refunded a reservation, then the prorated amount stays with Microsoft and it gets adjusted against a future reservation purchase. If you bought a reservation at pay-as-you-go rates and later move to a CSP, the reservation can be returned and repurchased without a penalty.
Although a CSP customer can't exchange, cancel, renew, or refund a reservation themselves, they can ask their partner to do it on their behalf.

### Pay-as-you-go credit card customers
-The original invoice is canceled, and a new invoice is created. The money is refunded to the credit card that was used for the original purchase. If you've changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+The original invoice is canceled, and a new invoice is created. The money is refunded to the credit card that was used for the original purchase. If you changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
## Cancel, exchange, and refund policies
Azure has the following policies for cancellations, exchanges, and refunds.
- The new reservation's lifetime commitment should equal or be greater than the returned reservation's remaining commitment. Example: for a three-year reservation that's 100 USD per month and exchanged after the 18th payment, the new reservation's lifetime commitment should be 1,800 USD or more (paid monthly or upfront).
- The new reservation purchased as part of exchange has a new term starting from the time of exchange.
- There's no penalty or annual limits for exchanges.
-- Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
+- Through a grace period, you will have the ability to exchange Azure compute reservations (Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations) **until at least July 1, 2024**. In October 2022, it was announced that the ability to exchange Azure compute reservations would be deprecated on January 1, 2024. This policy's start date remains January 1, 2024 but with this grace period you now have until at least July 1, 2024 to exchange your Azure compute reservations. Compute reservations purchased prior to the end of the grace period will reserve the right to exchange one more time after the grace period ends. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
**Refund policies**

- We're currently not charging an early termination fee, but in the future there might be a 12% early termination fee for cancellations.
- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It's refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund is 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit increases by 2,400 USD. Your new pool is 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment depletes the same pool, and the same replenishment logic applies.
+- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It gets refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund is 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit increases by 2,400 USD. Your new pool is 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment depletes the same pool, and the same replenishment logic applies.
- Azure doesn't process any refund that exceeds the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment.
- Refunds that result from an exchange don't count against the refund limit.
- Refunds are calculated based on the lowest price of either your purchase price or the current price of the reservation.
If you have questions or need help, [create a support request](https://portal.az
- [What are Azure Reservations?](save-compute-costs-reservations.md)
- [Manage Reservations in Azure](manage-reserved-vm-instance.md)
- [Understand how the reservation discount is applied](../manage/understand-vm-reservation-charges.md)
- - [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+ - [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
 - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
 - [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)
 - [Azure Reservations in the CSP program](/partner-center/azure-reservations)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
- ignite-2023 Previously updated : 02/14/2024 Last updated : 02/22/2024
Notifications are sent to the following users:
- Customers with Microsoft Customer Agreement (Azure Plan)
  - Notifications are sent to the reservation owners and the reservation administrator.
- Cloud Solution Provider and new commerce partners
- - Partner Center Action Center emails are sent to partners. For more information about how partners can update their transactional notifications, see [Action Center preferences](/partner-center/action-center-overview#preferences).
+ - Notifications are sent to the primary contact partner identified by the partner legal information account settings. For more information about how to update the primary contact email address for partner account settings, see [Verify or update your company profile information](/partner-center/update-your-partner-profile#update-your-legal-business-profile).
- Individual subscription customers with pay-as-you-go rates
  - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md
If you have permission to approve the Microsoft Purview private endpoint connect
1. Go to **Manage** -> **Microsoft Purview** -> **Edit**.
2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name.
3. Click **Manage approvals in Azure portal**, which brings you to the resource.
-4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
+4. On the given resource, go to **Networking** -> **Private endpoint connection** or **Ingestion private endpoint connections** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
5. Repeat this operation for all private endpoints.

If you don't have permission to approve the Microsoft Purview private endpoint connection, ask the Microsoft Purview account owner to do as follows.
+For Microsoft Purview accounts using the [Microsoft Purview portal](/purview/purview-portal):
+
+1. Go to the Azure portal -> your Microsoft Purview account.
+1. Select **Networking** -> **Ingestion private endpoint connections** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
+
+For Microsoft Purview accounts using the [classic Microsoft Purview governance portal](/purview/use-microsoft-purview-governance-portal):
- For *account* private endpoint, go to Azure portal -> your Microsoft Purview account -> Networking -> Private endpoint connection to approve.
-- For *ingestion* private endpoints, go to Azure portal -> your Microsoft Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+- If your account was created after November 10 2023 (or deployed using API version 2023-05-01-preview onwards):
+ 1. Go to the Azure portal -> your Microsoft Purview account.
+ 1. Select **Networking** -> **Ingestion private endpoint connections** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
+- If your account was created before November 10 2023 (or deployed using a version of the API older than 2023-05-01-preview):
+ 1. Go to Azure portal -> your Microsoft Purview account -> Managed resources.
+ 1. Select the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+
+ >[!TIP]
+ > Your account will only have a managed Event Hubs namespace if it is [configured for Kafka notifications](/purview/configure-event-hubs-for-kafka).
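If you'd rather script the approval, here's a hedged sketch using the generic `az network private-endpoint-connection` commands; the resource group, account name, and connection ID are placeholders, and you should confirm the command supports your account's resource type:

```azurecli
# List pending private endpoint connections on the Microsoft Purview account
# (resource group, account name, and connection ID are placeholders)
purviewid=$(az resource show \
  --resource-group myResourceGroup \
  --name mypurviewaccount \
  --resource-type "Microsoft.Purview/accounts" \
  --query id --output tsv)

az network private-endpoint-connection list --id $purviewid --output table

# Approve a specific connection by its ID
az network private-endpoint-connection approve \
  --id <private-endpoint-connection-id> \
  --description "Requested by data_factory_name"
```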
### Monitor managed private endpoints
ddos-protection Manage Ddos Protection Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-terraform.md
Last updated 4/14/2023 content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create and configure Azure DDoS Network Protection using Terraform
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Microsoft Security DevOps uses the following Open Source tools:
- Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window. -- Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository.
+- Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository. This includes setting "id-token: write" permissions in the GitHub Workflow for federation with Defender for Cloud.
## Configure the Microsoft Security DevOps GitHub action
Microsoft Security DevOps uses the following Open Source tools:
# MSDO runs on windows-latest. # ubuntu-latest also supported runs-on: windows-latest-
+
+ permissions:
+ contents: read
+ id-token: write
+
steps: # Checkout your code repository to scan
devtest-labs Create Lab Windows Vm Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create a lab in Azure DevTest Labs using Terraform
dns Dns Get Started Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure DNS zone and record using Terraform
education-hub Custom Tenant Set Up Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/custom-tenant-set-up-classroom.md
Title: How to create a custom Azure for Classroom Tenant and Billing Profile
-description: This article shows you how to make a custom tenant and billing profile for educators in your organization
+ Title: How to create a custom Azure Classroom Tenant and Billing Profile
+description: This article shows you how to make a custom tenant and billing profile for educators in your organization.
Previously updated : 3/17/2023 Last updated : 2/22/2024 # Create a custom Tenant and Billing Profile for Microsoft for Teaching Paid
-This article is meant for IT Admins utilizing Azure for Classroom. When signing up for this offer, you should already have a tenant and billing profile created, but this article is meant to help walk you through how to create a custom tenant and billing profile and associate them with an educator.
+This article is meant for IT Admins utilizing Azure Classroom (subject to regional availability). When signing up for this offer, you should already have a tenant and billing profile created, but this article is meant to help walk you through how to create a custom tenant and billing profile and associate them with an educator.
## Prerequisites -- Be signed up for Azure for Classroom
+- Be signed up for Azure Classroom
## Create a new tenant
-This section walks you through how to create a new tenant and associate it with your university tenant using multi-tenant
+This section walks you through how to create a new tenant and associate it with your university tenant using a multitenant configuration.
1. Go to the Azure portal and search for "Microsoft Entra ID"
2. Create a new tenant in the "Manage tenants" tab
3. Fill in and finalize the tenant information
-4. After the tenant has been created copy the Tenant ID of the new tenant
+4. Copy the Tenant ID of the newly created tenant
## Associate new tenant with university tenant
This section walks through how to add an Educator to the newly created tenant.
1. Change the role to "Global administrator" :::image type="content" source="media/custom-tenant-set-up-classroom/add-user.png" alt-text="Screenshot of user inviting existing user." border="true"::: 1. Tell the Educator to accept the invitation to this tenant
-2. After the Educator has joined the tenant, go into the tenant properties and click Yes under the Access management for Azure resources.
+2. After the Educator has joined the tenant, go into the tenant properties and click Yes under the Access management for Azure resources
Now that you've created a custom Tenant, you can go into Education Hub and begin distributing credit to Educators to use in labs.
event-grid Choose Right Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/choose-right-tier.md
Use this tier if any of the following statements is true:
* You require HTTP communication rates greater than 5 MB/s for ingress and egress using pull delivery or push delivery. Event Grid currently supports up to 40 MB/s for ingress and 80 MB/s for egress for events published to namespace topics (HTTP). MQTT supports a throughput rate of 40 MB/s for publisher and subscriber clients.
* You require CloudEvents retention of up to 7 days.
-For more information, see quotas and limits for [namespaces](quotas-limits.md#namespace-resource-limits).
+For more information, see quotas and limits for [namespaces](quotas-limits.md#event-grid-namespace-resource-limits).
## Event Grid basic tier
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
This article provides a reference of log and metric data collected to analyze th
| Protocol | The protocol used in the operation. The available values include: <br><br>- MQTT3: MQTT v3.1.1 <br>- MQTT5: MQTT v5 <br>- MQTT3-WS: MQTT v3.1.1 over WebSocket <br>- MQTT5-WS: MQTT v5 over WebSocket | Result | Result of the operation. The available values include: <br><br>- Success <br>- ClientError <br>- ServiceError | | Error | Error occurred during the operation.<br> The available values for MQTT: RequestCount, MQTT: Failed Published Messages, MQTT: Failed Subscription Operations metrics include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about how the supported MQTT features.](mqtt-support.md) <br><br>The available values for MQTT: Failed Routed Messages metric include: <br><br>-AuthenticationError: the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. <br>-TopicNotFoundError: The custom topic that is configured to receive all the MQTT routed messages was deleted. <br>-TooManyRequests: the number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about how the MQTT broker handles each of these routing errors.](mqtt-routing.md#mqtt-message-routing-behavior)|
-| ThrottleType | The type of throttle limit that got exceeded in the namespace. The available values include: <br>- InboundBandwidthPerNamespace <br>- InboundBandwidthPerConnection <br>- IncomingPublishPacketsPerNamespace <br>- IncomingPublishPacketsPerConnection <br>- OutboundPublishPacketsPerNamespace <br>- OutboundPublishPacketsPerConnection <br>- OutboundBandwidthPerNamespace <br>- OutboundBandwidthPerConnection <br>- SubscribeOperationsPerNamespace <br>- SubscribeOperationsPerConnection <br>- ConnectPacketsPerNamespace <br><br>[Learn more about the limits](quotas-limits.md#mqtt-limits-in-namespace). |
+| ThrottleType | The type of throttle limit that got exceeded in the namespace. The available values include: <br>- InboundBandwidthPerNamespace <br>- InboundBandwidthPerConnection <br>- IncomingPublishPacketsPerNamespace <br>- IncomingPublishPacketsPerConnection <br>- OutboundPublishPacketsPerNamespace <br>- OutboundPublishPacketsPerConnection <br>- OutboundBandwidthPerNamespace <br>- OutboundBandwidthPerConnection <br>- SubscribeOperationsPerNamespace <br>- SubscribeOperationsPerConnection <br>- ConnectPacketsPerNamespace <br><br>[Learn more about the limits](quotas-limits.md#mqtt-limits-in-event-grid-namespace). |
| QoS | Quality of service level. The available values are: 0, 1. | | Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to Event Grid. <br>- Outbound: outbound throughput from Event Grid. | | DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QOS1 messages until the queue reached its maximum limit. <br>- AuthorizationError: a session drop because of any authorization reasons.
Here are the columns of the `EventGridNamespaceFailedMqttSubscriptions` Log Anal
See the following articles: - [Monitor pull delivery reference](monitor-pull-reference.md).-- [Monitor push delivery reference](monitor-push-reference.md).
+- [Monitor push delivery reference](monitor-push-reference.md).
firewall Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-terraform.md
Last updated 10/15/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Deploy Azure Firewall with Availability Zones - Terraform
firewall Quick Create Ipgroup Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-terraform.md
Last updated 10/17/2023 content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Firewall and IP Groups - Terraform
firewall Quick Create Multiple Ip Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-terraform.md
Last updated 10/15/2023 content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Firewall with multiple public IP addresses - Terraform
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Quickstart: Create an Azure Front Door (classic) using Terraform
governance Assign Policy Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-azurecli.md
Title: "Quickstart: New policy assignment with Azure CLI"
-description: In this quickstart, you use Azure CLI to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 08/17/2021
+ Title: "Quickstart: Create policy assignment using Azure CLI"
+description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using Azure CLI.
Last updated : 02/23/2024 -+
-# Quickstart: Create a policy assignment to identify non-compliant resources with Azure CLI
-The first step in understanding compliance in Azure is to identify the status of your resources.
-This quickstart steps you through the process of creating a policy assignment to identify virtual
-machines that aren't using managed disks.
+# Quickstart: Create a policy assignment to identify non-compliant resources using Azure CLI
-At the end of this process, you'll successfully identify virtual machines that aren't using managed
-disks. They're _non-compliant_ with the policy assignment.
+The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure CLI. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
-Azure CLI is used to create and manage Azure resources from the command line or in scripts. This
-guide uses Azure CLI to create a policy assignment and to identify non-compliant resources in your
-Azure environment.
+Azure CLI is used to create and manage Azure resources from the command line or in scripts. This guide uses Azure CLI to create a policy assignment and to identify non-compliant resources in your Azure environment.
## Prerequisites -- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
- account before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/).
+- `Microsoft.PolicyInsights` must be [registered](../../azure-resource-manager/management/resource-providers-and-types.md) in your Azure subscription. To register a resource provider, you must have permission to register resource providers. That permission is included in the Contributor and Owner roles.
+- A resource group with at least one virtual machine that doesn't use managed disks.
+
+## Connect to Azure
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-- This quickstart requires that you run Azure CLI version 2.0.76 or later. To find the version, run
- `az --version`. If you need to install or upgrade, see
- [Install Azure CLI](/cli/azure/install-azure-cli).
+```azurecli
+az login
-- Register the Azure Policy Insights resource provider using Azure CLI. Registering the resource
- provider makes sure that your subscription works with it. To register a resource provider, you
- must have permission to the register resource provider operation. This operation is included in
- the Contributor and Owner roles. Run the following command to register the resource provider:
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
+
+## Register resource provider
- ```azurecli-interactive
- az provider register --namespace 'Microsoft.PolicyInsights'
- ```
+When a resource provider is registered, it's available to use in your Azure subscription.
- For more information about registering and viewing resource providers, see
- [Resource Providers and Types](../../azure-resource-manager/management/resource-providers-and-types.md)
+To verify if `Microsoft.PolicyInsights` is registered, run `az provider show`. The resource provider contains several resource types. If the result is `NotRegistered`, run `az provider register`:
-- If you haven't already, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's a
- tool that sends HTTP requests to Azure Resource Manager-based APIs.
+```azurecli
+az provider show \
+ --namespace Microsoft.PolicyInsights \
+ --query "{Provider:namespace,State:registrationState}" \
+ --output table
+az provider register --namespace Microsoft.PolicyInsights
+```
-## Create a policy assignment
+The Azure CLI commands use a backslash (`\`) for line continuation to improve readability. For more information, go to [az provider](/cli/azure/provider).
-In this quickstart, you create a policy assignment and assign the **Audit VMs that do not use
-managed disks** definition. This policy definition identifies resources that aren't compliant to the
-conditions set in the policy definition.
+## Create policy assignment
-Run the following command to create a policy assignment:
+Use the following commands to create a new policy assignment for your resource group. This example uses an existing resource group that contains a virtual machine _without_ managed disks. The resource group is the scope for the policy assignment.
-```azurecli-interactive
-az policy assignment create --name 'audit-vm-manageddisks' --display-name 'Audit VMs without managed disks Assignment' --scope '<scope>' --policy '<policy definition ID>'
+Run the following commands and replace `<resourceGroupName>` with your resource group name:
+
+```azurecli
+rgid=$(az group show --resource-group <resourceGroupName> --query id --output tsv)
+
+definition=$(az policy definition list \
+ --query "[?displayName=='Audit VMs that do not use managed disks']".name \
+ --output tsv)
```
-The preceding command uses the following information:
+The `rgid` variable stores the resource group ID. The `definition` variable stores the policy definition's name, which is a GUID.
-- **Name** - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used.-- **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs
- without managed disks Assignment_.
-- **Policy** - The policy definition ID, based on which you're using to create the assignment. In
- this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. To get the
- policy definition ID, run this command:
- `az policy definition list --query "[?displayName=='Audit VMs that do not use managed disks']"`
-- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets
- enforced on. It could range from a subscription to resource groups. Be sure to replace
- &lt;scope&gt; with the name of your resource group.
+Run the following command to create the policy assignment:
-## Identify non-compliant resources
+```azurecli
+az policy assignment create \
+ --name 'audit-vm-managed-disks' \
+ --display-name 'Audit VMs without managed disks Assignment' \
+ --scope $rgid \
+ --policy $definition \
+ --description 'Azure CLI policy assignment to resource group'
+```
-To view the resources that aren't compliant under this new assignment, get the policy assignment ID
-by running the following commands:
+- `name` creates the policy assignment name used in the assignment's `ResourceId`.
+- `display-name` is the name shown for the policy assignment in the Azure portal.
+- `scope` uses the `$rgid` variable to assign the policy to the resource group.
+- `policy` assigns the policy definition stored in the `$definition` variable.
+- `description` can be used to add context about the policy assignment.
+
+The results of the policy assignment resemble the following example:
+
+```output
+"description": "Azure CLI policy assignment to resource group",
+"displayName": "Audit VMs without managed disks Assignment",
+"enforcementMode": "Default",
+"id": "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks",
+"identity": null,
+"location": null,
+"metadata": {
+ "createdBy": "11111111-1111-1111-1111-111111111111",
+ "createdOn": "2024-02-23T18:42:27.4780803Z",
+ "updatedBy": null,
+ "updatedOn": null
+},
+"name": "audit-vm-managed-disks",
+```
-```azurecli-interactive
-az policy assignment list --query "[?displayName=='Audit VMs without managed disks Assignment'].id"
+If you want to redisplay the policy assignment information, run the following command:
+
+```azurecli
+az policy assignment show --name "audit-vm-managed-disks" --scope $rgid
```
-For more information about policy assignment IDs, see
-[az policy assignment](/cli/azure/policy/assignment).
+For more information, go to [az policy assignment](/cli/azure/policy/assignment).
+
+## Identify non-compliant resources
+
+The compliance state for a new policy assignment takes a few minutes to become active and provide results about the policy's state.
+
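+If you don't want to wait for the scheduled evaluation, you can optionally start an on-demand compliance scan. This is a sketch that assumes the `az policy state trigger-scan` command is available in your Azure CLI version:
+
+```azurecli
+# Optional: start an on-demand compliance evaluation for the resource group
+az policy state trigger-scan --resource-group <resourceGroupName>
+```
+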
+Use the following command to identify resources that aren't compliant with the policy assignment
+you created:
-Next, run the following command to get the resource IDs of the non-compliant resources that are
-output into a JSON file:
+```azurecli
+policyid=$(az policy assignment show \
+ --name "audit-vm-managed-disks" \
+ --scope $rgid \
+ --query id \
+ --output tsv)
-```console
-armclient post "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$filter=IsCompliant eq false and PolicyAssignmentId eq '<policyAssignmentID>'&$apply=groupby((ResourceId))" > <json file to direct the output with the resource IDs into>
+az policy state list --resource $policyid --filter "(isCompliant eq false)"
```
-Your results resemble the following example:
-
-```json
-{
- "@odata.context": "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest",
- "@odata.count": 3,
- "value": [{
- "@odata.id": null,
- "@odata.context": "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest/$entity",
- "ResourceId": "/subscriptions/<subscriptionId>/resourcegroups/<rgname>/providers/microsoft.compute/virtualmachines/<virtualmachineId>"
- },
- {
- "@odata.id": null,
- "@odata.context": "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest/$entity",
- "ResourceId": "/subscriptions/<subscriptionId>/resourcegroups/<rgname>/providers/microsoft.compute/virtualmachines/<virtualmachine2Id>"
- },
- {
- "@odata.id": null,
- "@odata.context": "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest/$entity",
- "ResourceId": "/subscriptions/<subscriptionName>/resourcegroups/<rgname>/providers/microsoft.compute/virtualmachines/<virtualmachine3ID>"
- }
-
- ]
-}
+The `policyid` variable uses command substitution to store the policy assignment's ID. The `filter` parameter limits the output to non-compliant resources.
+
+The `az policy state list` output is verbose, but for this article the relevant property is `complianceState`, which shows `NonCompliant`:
+
+```output
+"complianceState": "NonCompliant",
+"components": null,
+"effectiveParameters": "",
+"isCompliant": false,
```
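+
+If you only need the IDs of the non-compliant resources, a JMESPath query trims the output. This sketch reuses the `$policyid` variable and assumes each policy state record exposes a `resourceId` property:
+
+```azurecli
+# List only the resource IDs of non-compliant resources
+az policy state list \
+  --resource $policyid \
+  --filter "(isCompliant eq false)" \
+  --query "[].resourceId" \
+  --output tsv
+```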
-The results are comparable to what you'd typically see listed under **Non-compliant resources** in
-the Azure portal view.
+For more information, go to [az policy state](/cli/azure/policy/state).
## Clean up resources
-To remove the assignment created, use the following command:
+To remove the policy assignment, run the following command:
+
+```azurecli
+az policy assignment delete --name "audit-vm-managed-disks" --scope $rgid
+```
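+
+To confirm that the assignment was removed, you can list the assignments at the same scope; an empty result means the deletion succeeded:
+
+```azurecli
+# Returns an empty list after the assignment is deleted
+az policy assignment list --scope $rgid --output table
+```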
+
+To sign out of your Azure CLI session:
-```azurecli-interactive
-az policy assignment delete --name 'audit-vm-manageddisks' --scope '/subscriptions/<subscriptionID>/<resourceGroupName>'
+```azurecli
+az logout
```

## Next steps
In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial for:
+To learn more about how to assign policies that validate whether new resources are compliant, continue to the
+tutorial.
> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Azure Cosmos DB should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F797b37f7-06b8-444c-b1ad-fc62867f335a) |Disabling public network access improves security by ensuring that your CosmosDB account isn't exposed on the public internet. Creating private endpoints can limit exposure of your CosmosDB account. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints#blocking-public-network-access-during-account-creation](../../../cosmos-db/how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateNetworkAccess_AuditDeny.json) | |[Azure Databricks Clusters should disable public IP](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51c1490f-3319-459c-bbbc-7f391bbed753) |Disabling public IP of clusters in Azure Databricks Workspaces improves security by ensuring that the clusters aren't exposed on the public internet. Learn more at: [https://learn.microsoft.com/azure/databricks/security/secure-cluster-connectivity](/azure/databricks/security/secure-cluster-connectivity). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_DisablePublicIP_Audit.json) | |[Azure Databricks Workspaces should be in a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c25c9e4-ee12-4882-afd2-11fb9d87893f) |Azure Virtual Networks provide enhanced security and isolation for your Azure Databricks Workspaces, as well as subnets, access control policies, and other features to further restrict access. Learn more at: [https://docs.microsoft.com/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_VNETEnabled_Audit.json) |
-|[Azure Databricks Workspaces should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e7849de-b939-4c50-ab48-fc6b0f5eeba2) |Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can control exposure of your resources by creating private endpoints instead. Learn more at: [https://learn.microsoft.com/azure/databricks/administration-guide/cloud-configurations/azure/private-link](/azure/databricks/administration-guide/cloud-configurations/azure/private-link). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_AuditPublicNetworkAccess.json) |
+|[Azure Databricks Workspaces should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e7849de-b939-4c50-ab48-fc6b0f5eeba2) |Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can control exposure of your resources by creating private endpoints instead. Learn more at: [https://learn.microsoft.com/azure/databricks/administration-guide/cloud-configurations/azure/private-link](/azure/databricks/administration-guide/cloud-configurations/azure/private-link). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_AuditPublicNetworkAccess.json) |
|[Azure Databricks Workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F258823f2-4595-4b52-b333-cc96192710d8) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Databricks workspaces, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/adbpe](https://aka.ms/adbpe). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_PrivateEndpoint_Audit.json) | |[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
### Ensure security of key and certificate repository
initiative definition.
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md) |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) | |[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) | |[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | |[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) | |[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) | |[SQL server-targeted autoprovisioning should be enabled for SQL servers on machines plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6283572-73bb-4deb-bf2c-7a2b8f7462cb) |To ensure your SQL VMs and Arc-enabled SQL Servers are protected, ensure the SQL-targeted Azure Monitoring Agent is configured to automatically deploy. This is also necessary if you've previously configured autoprovisioning of the Microsoft Monitoring Agent, as that component is being deprecated. Learn more: [https://aka.ms/SQLAMAMigration](https://aka.ms/SQLAMAMigration) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_DFSQL_AMA_Migration_Audit.json) |
initiative definition.
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md) |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) | |[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) | |[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) | |[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) | |[SQL server-targeted autoprovisioning should be enabled for SQL servers on machines plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6283572-73bb-4deb-bf2c-7a2b8f7462cb) |To ensure your SQL VMs and Arc-enabled SQL Servers are protected, ensure the SQL-targeted Azure Monitoring Agent is configured to automatically deploy. This is also necessary if you've previously configured autoprovisioning of the Microsoft Monitoring Agent, as that component is being deprecated. Learn more: [https://aka.ms/SQLAMAMigration](https://aka.ms/SQLAMAMigration) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_DFSQL_AMA_Migration_Audit.json) |
initiative definition.
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) | |[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | |[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) | |[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) | |[SQL server-targeted autoprovisioning should be enabled for SQL servers on machines plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6283572-73bb-4deb-bf2c-7a2b8f7462cb) |To ensure your SQL VMs and Arc-enabled SQL Servers are protected, ensure the SQL-targeted Azure Monitoring Agent is configured to automatically deploy. This is also necessary if you've previously configured autoprovisioning of the Microsoft Monitoring Agent, as that component is being deprecated. Learn more: [https://aka.ms/SQLAMAMigration](https://aka.ms/SQLAMAMigration) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_DFSQL_AMA_Migration_Audit.json) |
initiative definition.
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) | |[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | |[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) | |[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) | |[SQL server-targeted autoprovisioning should be enabled for SQL servers on machines plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6283572-73bb-4deb-bf2c-7a2b8f7462cb) |To ensure your SQL VMs and Arc-enabled SQL Servers are protected, ensure the SQL-targeted Azure Monitoring Agent is configured to automatically deploy. This is also necessary if you've previously configured autoprovisioning of the Microsoft Monitoring Agent, as that component is being deprecated. Learn more: [https://aka.ms/SQLAMAMigration](https://aka.ms/SQLAMAMigration) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_DFSQL_AMA_Migration_Audit.json) |
initiative definition.
|[Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | |[Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17f4b1cc-c55c-4d94-b1f9-2978f6ac2957) |Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_K8sRuningImagesVulnerabilityAssessmentBasedOnMDVM_Audit.json) | |[Azure running container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
-|[Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.5.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Update%20Manager/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
+|[Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.6.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Update%20Manager/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | |[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | |[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
This built-in initiative is deployed as part of the
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
-### Flaw Remediation
+### Malicious Code Protection
-**ID**: CCCS SI-2
+**ID**: CCCS SI-3
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
This built-in initiative is deployed as part of the
|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) | |[Maintain integrity of audit system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0559109-6a27-a217-6821-5a6d44f92897) |CMA_C1133 - Maintain integrity of audit system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1133.json) | |[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
-|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
### Ensure that logging for Azure KeyVault is 'Enabled'
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) | |[Maintain integrity of audit system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0559109-6a27-a217-6821-5a6d44f92897) |CMA_C1133 - Maintain integrity of audit system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1133.json) | |[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
-|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
### Ensure that logging for Azure KeyVault is 'Enabled'
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) | |[Maintain integrity of audit system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0559109-6a27-a217-6821-5a6d44f92897) |CMA_C1133 - Maintain integrity of audit system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1133.json) | |[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
-|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
### Ensure that logging for Azure KeyVault is 'Enabled'
governance Cis Azure 2 0 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.5.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Update%20Manager/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
+|[Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.6.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Update%20Manager/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
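The updated policy above keys off the `AssessmentMode` property of a virtual machine. As a rough illustration only (not taken from the initiative or the built-in definition), a hedged ARM-style fragment for a Windows VM might set that property as follows; the resource name and apiVersion are placeholders, and for Linux VMs the equivalent path is `osProfile.linuxConfiguration.patchSettings.assessmentMode`:

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2023-03-01",
  "name": "example-vm",
  "properties": {
    "osProfile": {
      "windowsConfiguration": {
        "patchSettings": {
          "assessmentMode": "AutomaticByPlatform"
        }
      }
    }
  }
}
```

With `assessmentMode` set to `AutomaticByPlatform`, the platform runs the periodic update assessment described in the policy row above; the `Audit` effect only reports machines that omit it, while `Deny` would block non-conforming deployments.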
### Ensure Any of the ASC Default Policy Settings are Not Set to 'Disabled'
initiative definition.
|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) | |[Maintain integrity of audit system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0559109-6a27-a217-6821-5a6d44f92897) |CMA_C1133 - Maintain integrity of audit system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1133.json) | |[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
-|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
### Ensure that logging for Azure Key Vault is 'Enabled'
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[\[Preview\]: Azure Key Vault should use RBAC permission model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12d4fa5e-1f9f-4c21-97a9-b99b3c6611b5) |Enable RBAC permission model across Key Vaults. Learn more at: [https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-migration](../../../key-vault/general/rbac-migration.md) |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_RBAC.json) |
+|[Azure Key Vault should use RBAC permission model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12d4fa5e-1f9f-4c21-97a9-b99b3c6611b5) |Enable RBAC permission model across Key Vaults. Learn more at: [https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-migration](../../../key-vault/general/rbac-migration.md) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_RBAC.json) |
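The renamed Key Vault policy above audits whether a vault uses the RBAC permission model instead of access policies. As a minimal, illustrative ARM-style sketch (the vault name, location, apiVersion, and tenant ID are placeholders, not taken from the initiative), the relevant setting is `properties.enableRbacAuthorization`:

```json
{
  "type": "Microsoft.KeyVault/vaults",
  "apiVersion": "2022-07-01",
  "name": "example-vault",
  "location": "eastus",
  "properties": {
    "tenantId": "00000000-0000-0000-0000-000000000000",
    "sku": { "family": "A", "name": "standard" },
    "enableRbacAuthorization": true
  }
}
```

Existing vaults can be switched by updating the same property; the rbac-migration article linked in the row above covers the transition from access policies to RBAC role assignments.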
### Ensure that Private Endpoints are Used for Azure Key Vault
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
-|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) | |[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
initiative definition.
|[Display an explicit logout message](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0471c6b7-1588-701c-2713-1fade73b75f6) |CMA_C1056 - Display an explicit logout message |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1056.json) | |[Provide the logout capability](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb580551-0b3c-4ea1-8a4c-4cdb5feb340f) |CMA_C1055 - Provide the logout capability |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1055.json) |
-### Permitted Actions Without Identification Or
-Authentication
+### Permitted Actions Without Identification Or Authentication
**ID**: FedRAMP High AC-14 **Ownership**: Shared
Authentication
|[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
-|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) | |[Document remote access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) | |[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
Authentication
|[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
-|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[IoT Hub device provisioning service instances should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
Authentication
## Audit And Accountability
-### Audit And Accountability Policy And
-Procedures
+### Audit And Accountability Policy And Procedures
**ID**: FedRAMP High AU-1 **Ownership**: Shared
Procedures
## Security Assessment And Authorization
-### Security Assessment And Authorization
-Policy And Procedures
+### Security Assessment And Authorization Policy And Procedures
**ID**: FedRAMP High CA-1 **Ownership**: Shared
Policy And Procedures
||||| |[Review and update identification and authentication policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F29acfac0-4bb4-121b-8283-8943198b1549) |CMA_C1299 - Review and update identification and authentication policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1299.json) |
-### Identification And Authentication
-(Organizational Users)
+### Identification And Authentication (Organizational Users)
**ID**: FedRAMP High IA-2 **Ownership**: Shared
Policy And Procedures
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | |[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) | |[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
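Among the rows above, "Authentication to Linux machines should require SSH keys" checks that password-based SSH sign-in is disabled in favor of key pairs. A minimal ARM-style sketch of the relevant VM settings, with placeholder names and key data (illustrative only, not part of the initiative):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2023-03-01",
  "name": "example-linux-vm",
  "properties": {
    "osProfile": {
      "adminUsername": "azureuser",
      "linuxConfiguration": {
        "disablePasswordAuthentication": true,
        "ssh": {
          "publicKeys": [
            {
              "path": "/home/azureuser/.ssh/authorized_keys",
              "keyData": "ssh-rsa AAAA...example-public-key..."
            }
          ]
        }
      }
    }
  }
}
```

With `disablePasswordAuthentication` set to `true`, only the listed public keys can be used to authenticate over SSH, which is the configuration the guest configuration audit looks for.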
Policy And Procedures
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
-|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
Policy And Procedures
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
-|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[IoT Hub device provisioning service instances should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
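The Private Link and NSG recommendations above are built-in policy definitions, so they can be assigned directly at a subscription or resource group scope. The following minimal sketch is an editorial illustration rather than part of the source article: it assumes the azure-identity and azure-mgmt-resource Python packages, the subscription ID and assignment name are placeholders, and the definition GUID is the one quoted in the table for "Container registries should not allow unrestricted network access".

```python
# Minimal sketch: assign a built-in Azure Policy definition at subscription scope.
# Assumes azure-identity and azure-mgmt-resource are installed; IDs and names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"        # placeholder
scope = f"/subscriptions/{subscription_id}"  # assignments can also target a resource group

client = PolicyClient(DefaultAzureCredential(), subscription_id)

# GUID quoted in the table above: "Container registries should not allow unrestricted network access".
definition = client.policy_definitions.get_built_in("d0793b48-0edc-4296-a390-4c75d1bdfd71")

assignment = client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="acr-restrict-network-access",  # placeholder name
    parameters=PolicyAssignment(
        display_name=definition.display_name,
        policy_definition_id=definition.id,
    ),
)
print(assignment.name, assignment.policy_definition_id)
```

The same pattern applies to any of the Audit/Deny definitions listed in these tables; only the definition GUID changes.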
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate
Description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated: 02/06/2024. Last updated: 02/22/2024
initiative definition.
|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) | |[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) | |[Document remote access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) | |[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[IoT Hub device provisioning service instances should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
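The Effect(s) column above lists the values each definition accepts (for example, AuditIfNotExists or Disabled for the remote debugging check). Assuming the definition exposes an `effect` parameter, as most of these built-ins do, the effect can be set explicitly at assignment time. This is a hedged sketch using the same azure-mgmt-resource client as before; the `ParameterValuesValue` helper, the GUID usage, and all names are assumptions or placeholders, not guidance from the article.

```python
# Sketch: set a definition's effect explicitly when creating the assignment.
# Assumes the definition exposes an 'effect' parameter; all names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import ParameterValuesValue, PolicyAssignment

subscription_id = "<subscription-id>"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# "Function apps should have remote debugging turned off" (GUID quoted in the table above).
definition = client.policy_definitions.get_built_in("0e60b895-3786-45da-8377-9c6b4b6ac5f9")

client.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}",
    policy_assignment_name="functionapp-no-remote-debug",  # placeholder
    parameters=PolicyAssignment(
        policy_definition_id=definition.id,
        # Pick one of the values listed in the Effect(s) column for this definition.
        parameters={"effect": ParameterValuesValue(value="AuditIfNotExists")},
    ),
)
```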
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | |[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) | |[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
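Unlike the Audit definitions, deployIfNotExists and modify prerequisites such as the Guest Configuration extension deployments above only remediate when the assignment carries a managed identity with sufficient RBAC rights. The following is a rough sketch under that assumption, not guidance from the article: the location, scope, and names are placeholders, and the role assignment the identity needs is omitted.

```python
# Sketch: assign a deployIfNotExists definition with a system-assigned identity so remediation can run.
# Placeholders throughout; the identity still needs an RBAC role assignment (not shown here).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import Identity, PolicyAssignment

subscription_id = "<subscription-id>"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# "Deploy the Linux Guest Configuration extension ..." (GUID quoted in the table above).
definition = client.policy_definitions.get_built_in("331e8ea8-378a-410f-a2e5-ae22f38bb0da")

client.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}",
    policy_assignment_name="deploy-gc-extension-linux",  # placeholder
    parameters=PolicyAssignment(
        policy_definition_id=definition.id,
        location="eastus",                     # an identity-bearing assignment needs a location
        identity=Identity(type="SystemAssigned"),
    ),
)
```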
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[CosmosDB accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[IoT Hub device provisioning service instances should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
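Every table in these compliance articles is drawn from a built-in initiative (policy set definition) that groups the individual definitions under controls. As an illustration only, the sketch below enumerates those references with azure-mgmt-resource; the initiative GUID and subscription ID are placeholders, not values taken from this article.

```python
# Sketch: list the policy definitions referenced by a built-in initiative (policy set definition).
# The GUIDs below are placeholders; substitute the initiative you are reviewing.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient

client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder subscription

initiative = client.policy_set_definitions.get_built_in("<initiative-guid>")  # placeholder GUID
print(initiative.display_name)

for ref in initiative.policy_definitions:
    # Each reference points at one of the built-in definitions listed in the tables above.
    print(ref.policy_definition_reference_id, ref.policy_definition_id)
```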
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government)
Description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated: 02/06/2024. Last updated: 02/22/2024
initiative definition.
|[Azure Cosmos DB should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F797b37f7-06b8-444c-b1ad-fc62867f335a) |Disabling public network access improves security by ensuring that your CosmosDB account isn't exposed on the public internet. Creating private endpoints can limit exposure of your CosmosDB account. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints#blocking-public-network-access-during-account-creation](../../../cosmos-db/how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateNetworkAccess_AuditDeny.json) | |[Azure Databricks Clusters should disable public IP](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51c1490f-3319-459c-bbbc-7f391bbed753) |Disabling public IP of clusters in Azure Databricks Workspaces improves security by ensuring that the clusters aren't exposed on the public internet. Learn more at: [https://learn.microsoft.com/azure/databricks/security/secure-cluster-connectivity](/azure/databricks/security/secure-cluster-connectivity). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_DisablePublicIP_Audit.json) | |[Azure Databricks Workspaces should be in a virtual network](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c25c9e4-ee12-4882-afd2-11fb9d87893f) |Azure Virtual Networks provide enhanced security and isolation for your Azure Databricks Workspaces, as well as subnets, access control policies, and other features to further restrict access. Learn more at: [https://docs.microsoft.com/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_VNETEnabled_Audit.json) |
+|[Azure Databricks Workspaces should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e7849de-b939-4c50-ab48-fc6b0f5eeba2) |Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can control exposure of your resources by creating private endpoints instead. Learn more at: [https://learn.microsoft.com/azure/databricks/administration-guide/cloud-configurations/azure/private-link](/azure/databricks/administration-guide/cloud-configurations/azure/private-link). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_AuditPublicNetworkAccess.json) |
|[Azure Databricks Workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F258823f2-4595-4b52-b333-cc96192710d8) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Databricks workspaces, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/adbpe](https://aka.ms/adbpe). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Databricks/Databricks_PrivateEndpoint_Audit.json) | |[Azure Event Grid domains should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
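After assignment, compliance results for definitions like these can be summarized in bulk. One way to do that, shown here as an illustration rather than something the article prescribes, is an Azure Resource Graph query over the PolicyResources table; this assumes the azure-mgmt-resourcegraph package, the subscription ID is a placeholder, and the query shape is indicative only.

```python
# Sketch: summarize non-compliant policy states per definition via Azure Resource Graph.
# Assumes azure-mgmt-resourcegraph; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "policyresources "
        "| where type =~ 'microsoft.policyinsights/policystates' "
        "| where properties.complianceState =~ 'NonCompliant' "
        "| summarize nonCompliant = count() by tostring(properties.policyDefinitionName)"
    ),
)

result = client.resources(query)
for row in result.data:
    print(row)
```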
initiative definition.
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for Resource Manager should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | |[Azure Defender for servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Microsoft Defender for Containers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |

### Enable threat detection for identity and access management
initiative definition.
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for Resource Manager should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | |[Azure Defender for servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
### Enable logging for security investigation
initiative definition.
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for Resource Manager should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
### Detection and analysis - investigate an incident
initiative definition.
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for Resource Manager should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected PostgreSQL flexible servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38668f5-d155-42c7-ab3d-9b57b50f8fbf) |Audit PostgreSQL flexible servers without Advanced Data Security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/PostgreSQL_FlexibleServers_DefenderForSQL_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL should be enabled for unprotected Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd31e5c31-63b2-4f12-887b-e49456834fa1) |Enable Defender for SQL to protect your Synapse workspaces. Defender for SQL monitors your Synapse SQL to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/TdOnSynapseWorkspaces_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
## Posture and Vulnerability Management
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
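As a rough illustration only: the BYOK audit above can also be spot-checked directly against a storage account's encryption settings. The Python sketch below assumes the Azure Government ARM endpoint, placeholder subscription and resource names, and an `api-version` that may need adjusting for your cloud; it is not the policy's own evaluation logic.

```python
# Minimal sketch (not the policy's evaluation logic): check whether a storage
# account uses a customer-managed (Key Vault) key, which is what the BYOK policy
# above audits. Subscription, resource group, and account names are placeholders;
# the api-version is an assumption.
import requests
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

ARM = "https://management.usgovcloudapi.net"  # Azure Government ARM endpoint
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
token = credential.get_token(f"{ARM}/.default").token

url = (
    f"{ARM}/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<account-name>"
    "?api-version=2022-09-01"
)
account = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()

# "Microsoft.Keyvault" indicates a customer-managed key (BYOK);
# "Microsoft.Storage" indicates Microsoft-managed keys.
key_source = account["properties"]["encryption"]["keySource"]
print(f"Encryption key source: {key_source}")
```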
### Ensure that logging for Azure KeyVault is 'Enabled'
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
### Ensure that logging for Azure KeyVault is 'Enabled'
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
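For reference, a built-in definition such as the disk access private link policy above can be assigned programmatically. The following is a minimal sketch against the ARM REST API, assuming the Azure Government endpoint, a placeholder bearer token and subscription, and an `api-version` that may differ in your environment; the definition GUID is taken from the table.

```python
# Minimal sketch, not an official sample: assign the built-in "Disk access
# resources should use private link" definition (GUID from the table above) at
# subscription scope via the ARM REST API. The bearer token, subscription ID,
# assignment name, and api-version are placeholders/assumptions; parameters are
# omitted, so the definition's default effect applies.
import requests

ARM = "https://management.usgovcloudapi.net"  # Azure Government ARM endpoint
token = "<bearer-token>"
scope = "/subscriptions/<subscription-id>"
definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "f39f5f49-4abf-44de-8c70-0756997bfb51"
)

url = (
    f"{ARM}{scope}/providers/Microsoft.Authorization/policyAssignments/"
    "disk-access-private-link?api-version=2022-06-01"
)
body = {
    "properties": {
        "displayName": "Disk access resources should use private link",
        "policyDefinitionId": definition_id,
    }
}
response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["id"])
```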
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.4.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
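Once definitions like these are assigned, their compliance results surface through the Azure Policy Insights `policyStates` endpoint. A minimal sketch follows, assuming a placeholder token and subscription and the documented `queryResults` route; verify the `api-version` and filter syntax for your environment.

```python
# Minimal sketch: list non-compliant resources reported for assignments in the
# subscription via the Azure Policy Insights "policy states" API. Token and
# subscription are placeholders; api-version and filter syntax should be
# double-checked for your cloud.
import requests

ARM = "https://management.usgovcloudapi.net"
token = "<bearer-token>"
url = (
    f"{ARM}/subscriptions/<subscription-id>/providers/Microsoft.PolicyInsights"
    "/policyStates/latest/queryResults?api-version=2019-10-01"
)
params = {"$filter": "complianceState eq 'NonCompliant'", "$top": 20}

result = requests.post(
    url, params=params, headers={"Authorization": f"Bearer {token}"}
).json()

for state in result.get("value", []):
    print(state["policyDefinitionName"], state["resourceId"])
```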
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.4.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
initiative definition.
## Identification And Authentication
-### Identification And Authentication
-(Organizational Users)
+### Identification And Authentication (Organizational Users)
**ID**: FedRAMP High IA-2 **Ownership**: Shared
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
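To see whether the private link definitions listed for this control are already assigned at a given scope, the policy assignments at that scope can be enumerated. A minimal sketch, assuming a placeholder token and subscription, ignoring result paging, and using GUIDs copied from the table above:

```python
# Minimal sketch: list policy assignments at subscription scope and flag whether
# two of the definitions from the table above are already assigned. Token,
# subscription, and api-version are placeholders/assumptions; nextLink paging is
# ignored for brevity.
import requests

ARM = "https://management.usgovcloudapi.net"
token = "<bearer-token>"
url = (
    f"{ARM}/subscriptions/<subscription-id>/providers/Microsoft.Authorization"
    "/policyAssignments?api-version=2022-06-01"
)
assignments = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()

wanted = {
    "b8564268-eb4a-4337-89be-a19db070c59d": "Event Hub namespaces should use private link",
    "df39c015-56a4-45de-b4a3-efe77bed320d": "IoT Hub device provisioning service instances should use private link",
}
assigned = {
    a["properties"]["policyDefinitionId"].rsplit("/", 1)[-1].lower()
    for a in assignments.get("value", [])
}
for guid, name in wanted.items():
    status = "assigned" if guid in assigned else "not assigned"
    print(f"{name}: {status}")
```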
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/06/2024 Last updated : 02/22/2024
initiative definition.
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
-|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
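The effect column entries such as "AuditIfNotExists, Disabled" list the values the definition's effect parameter can take; an AuditIfNotExists rule flags a resource when an expected related resource or property is missing. The snippet below sketches the general shape of such a rule as a Python dict. It follows the documented Azure Policy rule schema, but the resource type, alias path, and condition are illustrative only and are not the actual content of the EventHub_PrivateEndpoint_Audit.json file linked above.

```python
# Illustrative shape of an AuditIfNotExists policy rule (not the actual
# built-in definition; see the linked JSON in the repository for the real rule).
audit_if_not_exists_rule = {
    "if": {
        # Resource type being evaluated (example type).
        "field": "type",
        "equals": "Microsoft.EventHub/namespaces",
    },
    "then": {
        "effect": "[parameters('effect')]",   # resolves to AuditIfNotExists or Disabled at assignment time
        "details": {
            # Related resource whose existence is checked (illustrative type and alias).
            "type": "Microsoft.EventHub/namespaces/privateEndpointConnections",
            "existenceCondition": {
                "field": "Microsoft.EventHub/namespaces/privateEndpointConnections/privateLinkServiceConnectionState.status",
                "equals": "Approved",
            },
        },
    },
}
```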
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.4.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
-|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[IoT Hub device provisioning service instances should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf39c015-56a4-45de-b4a3-efe77bed320d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to the IoT Hub device provisioning service, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/iotdpsvnet](https://aka.ms/iotdpsvnet). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_EnablePrivateEndpoint_Audit.json) |
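The two Deploy Guest Configuration extension definitions in the preceding rows use the deployIfNotExists effect, which can only remediate resources when the assignment has a managed identity (and a location for that identity). The sketch below shows what such an assignment body might look like; the region, assignment name, and the omission of the identity's role assignment are assumptions for illustration, not guidance from the article.

```python
# Minimal sketch: assignment body for a deployIfNotExists definition such as
# "Deploy the Linux Guest Configuration extension ..." (ID 331e8ea8-... from the table).
# deployIfNotExists assignments need a managed identity and a location so the
# platform can run remediation deployments; granting that identity a role is not shown.
assignment_body = {
    "location": "usgovvirginia",                 # assumed Azure Government region for the identity
    "identity": {"type": "SystemAssigned"},      # identity used to deploy the extension
    "properties": {
        "displayName": "Deploy Linux Guest Configuration extension",
        "policyDefinitionId": (
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "331e8ea8-378a-410f-a2e5-ae22f38bb0da"
        ),
    },
}
# PUT this body to the same policyAssignments endpoint shown in the earlier sketch.
```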
initiative definition.
|[CosmosDB accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58440f8a-10c5-4151-bdce-dfbaad4a20b7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your CosmosDB account, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/cosmos-db/how-to-configure-private-endpoints](../../../cosmos-db/how-to-configure-private-endpoints.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_PrivateEndpoint_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.4.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
-|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
+|[Disk access resources should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff39f5f49-4abf-44de-8c70-0756997bfb51) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/disksprivatelinksdoc](https://aka.ms/disksprivatelinksdoc). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DiskAccesses_PrivateEndpoints_Audit.json) |
|[Event Hub namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8564268-eb4a-4337-89be-a19db070c59d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Event Hub namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/event-hubs/private-link-service](../../../event-hubs/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_PrivateEndpoint_Audit.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debuggi