Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md | Data resides in the **United States** for the following locations: Data resides in **Europe** for the following locations: -> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FT), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Türkiye (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB) +> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FI), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Türkiye (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB) Data resides in **Asia Pacific** for the following locations: After sign-up, profile editing, or sign-in action is complete, Azure AD B2C incl ## Next steps -- [Create an Azure AD B2C tenant](tutorial-create-tenant.md).+- [Create an Azure AD B2C tenant](tutorial-create-tenant.md). |
active-directory | Powershell Assign Group To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md | |
active-directory | Powershell Assign User To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md | |
active-directory | Powershell Display Users Group Of App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md | |
active-directory | Powershell Get All App Proxy Apps Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md | |
active-directory | Powershell Get All App Proxy Apps By Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md | |
active-directory | Powershell Get All App Proxy Apps Extended | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md | |
active-directory | Powershell Get All App Proxy Apps With Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md | |
active-directory | Powershell Get All Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md | |
active-directory | Powershell Get All Custom Domain No Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md | |
active-directory | Powershell Get All Custom Domains And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md | |
active-directory | Powershell Get All Default Domain Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md | |
active-directory | Powershell Get All Wildcard Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md | |
active-directory | Powershell Get Custom Domain Identical Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md | |
active-directory | Powershell Get Custom Domain Replace Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md | |
active-directory | Powershell Move All Apps To Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md | |
active-directory | 1 Secure Access Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/1-secure-access-posture.md | + + Title: Determine your security posture for external access with Azure Active Directory +description: Learn about governance of external access and assessing collaboration needs, by scenario +++++++ Last updated : 02/23/2023+++++++# Determine your security posture for external access with Azure Active Directory ++As you consider the governance of external access, assess your organization's security and collaboration needs, by scenario. You can start with the level of control the IT team has over the day-to-day collaboration of end users. Organizations in highly regulated industries might require more IT team control. For example, defense contractors can have a requirement to positively identify and document external users, their access, and access removal: all access, scenario-based, or workloads. Consulting agencies can use certain features to allow end users to determine the external users they collaborate with. ++ ![Bar graph of the span from full IT team control, to end-user self service.](media/secure-external-access/1-overall-control.png) ++ > [!NOTE] + > A high degree of control over collaboration can lead to higher IT budgets, reduced productivity, and delayed business outcomes. When official collaboration channels are perceived as onerous, end users tend to evade official channels. An example is end users sending unsecured documents by email. ++## Before you begin ++This article is number 1 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Scenario-based planning ++IT teams can delegate partner access to empower employees to collaborate with partners. This delegation can occur while maintaining sufficient security to protect intellectual property. ++Compile and assess your organizations scenarios to help assess employee versus business partner access to resources. Financial institutions might have compliance standards that restrict employee access to resources such as account information. Conversely, the same institutions can enable delegated partner access for projects such as marketing campaigns. ++ ![Diagram of a balance of IT team goverened access to partner self-service.](media/secure-external-access/1-scenarios.png) ++### Scenario considerations ++Use the following list to help measure the level of access control. ++* Information sensitivity, and associated risk of its exposure +* Partner access to information about other end users +* The cost of a breach versus the overhead of centralized control and end-user friction ++Organizations can start with highly managed controls to meet compliance targets, and then delegate some control to end users, over time. There can be simultaneous access-management models in an organization. ++> [!NOTE] +> Partner-managed credentials are a method to signal the termination of access to resources, when an external user loses access to resources in their own company. Learn more: [B2B collaboration overview](../external-identities/what-is-b2b.md) ++## External-access security goals ++The goals of IT-governed and delegated access differ. 
The primary goals of IT-governed access are: ++* Meet governance, regulatory, and compliance (GRC) targets +* High level of control over partner access to information about end users, groups, and other partners ++The primary goals of delegating access are: ++* Enable business owners to determine collaboration partners, with security constraints +* Enable partners to request access, based on rules defined by business owners ++### Common goals ++#### Control access to applications, data, and content ++Levels of control can be accomplished through various methods, depending on your version of Azure AD and Microsoft 365. ++* [Azure AD plans and pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) +* [Compare Microsoft 365 Enterprise pricing](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans) ++#### Reduce attack surface ++* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md) - manage, control, and monitor access to resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune +* [Data loss prevention in Exchange Server](/exchange/policy-and-compliance/data-loss-prevention/data-loss-prevention?view=exchserver-2019&preserve-view=true) ++#### Confirm compliance with activity and audit log reviews ++IT teams can delegate access decisions to business owners through entitlement management, while access reviews help confirm continued access. You can use automated data classification with sensitivity labels to automate the encryption of sensitive content, easing compliance for end users. ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) (You're here) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
active-directory | 10 Secure Local Guest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/10-secure-local-guest.md | + + Title: Convert local guest accounts to Azure AD B2B guest accounts +description: Learn to convert local guests into Azure AD B2B guest accounts by identifying apps and local guest accounts, migration, and more. ++++ Last updated : 02/23/2023+++++++++# Convert local guest accounts to Azure Active Directory B2B guest accounts ++With Azure Active Directory (Azure AD B2B), external users collaborate with their identities. Although organizations can issue local usernames and passwords to external users, this approach isn't recommended. Azure AD B2B has improved security, lower cost, and less complexity, compared to creating local accounts. In addition, if your organization issues local credentials that external users manage, you can use Azure AD B2B instead. Use the guidance in this document to make the transition. ++Learn more: [Plan an Azure AD B2B collaboration deployment](secure-external-access-resources.md) ++## Before you begin ++This article is number 10 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Identify external-facing applications ++Before migrating local accounts to Azure AD B2B, confirm the applications and workloads external users can access. For example, for applications hosted on-premises, validate the application is integrated with Azure AD. On-premises applications are a good reason to create local accounts. ++Learn more: [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md) ++We recommend that external-facing applications have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience. ++## Identify local guest accounts ++Identify the accounts to be migrated to Azure AD B2B. External identities in Active Directory are identifiable with an attribute-value pair. For example, making ExtensionAttribute15 = `External` for external users. If these users are set up with Azure AD Connect or Cloud Sync, configure synced external users to have the `UserType` attributes set to `Guest`. If the users are set up as cloud-only accounts, you can modify user attributes. Primarily, identify users to convert to B2B. ++## Map local guest accounts to external identities ++Identify user identities or external emails. Confirm that the local account (v-lakshmi@contoso.com) is a user with the home identity and email address: lakshmi@fabrikam.com. To identify home identities: ++- The external user's sponsor provides the information +- The external user provides the information +- Refer to an internal database, if the information is known and stored ++After mapping external local accounts to identities, add external identities or email to the user.mail attribute on local accounts. ++## End user communications ++Notify external users about migration timing. Communicate expectations, for instance when external users must stop using a current password to enable authentication by home and corporate credentials. Communications can include email campaigns and announcements. ++## Migrate local guest accounts to Azure AD B2B ++After local accounts have user.mail attributes populated with the external identity and email, convert local accounts to Azure AD B2B by inviting the local account. You can use PowerShell or the Microsoft Graph API. 
++Learn more: [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md) ++## Post-migration considerations ++If external user local accounts were synced from on-premises, reduce their on-premises footprint and use B2B guest accounts. You can: ++- Transition external user local accounts to Azure AD B2B and stop creating local accounts + - Invite external users in Azure AD +- Randomize external user's local-account passwords to prevent authentication to on-premises resources + - This action ensures authentication and user lifecycle is connected to the external user home identity ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) (You're here) |
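The entry above describes converting existing local guest accounts by inviting them through PowerShell or the Microsoft Graph invitation API (see the linked "Invite internal users to B2B collaboration" article). The request below is a minimal illustrative sketch only, not part of the quoted article: it assumes the local account's object ID and the external email address are already known, and the email, redirect URL, and object ID shown are placeholders.

```
Delegated Permission: User.Invite.All

POST https://graph.microsoft.com/v1.0/invitations
Content-type: application/json

{
  "invitedUserEmailAddress": "lakshmi@fabrikam.com",
  "sendInvitationMessage": false,
  "inviteRedirectUrl": "https://myapps.microsoft.com",
  "invitedUser": { "id": "<object ID of the existing local account>" }
}
```

Passing the existing account's ID in `invitedUser` associates the invitation with the local account rather than creating a new guest object; suppressing the invitation message lets you send your own communication as described above.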
active-directory | 11 Onboard External User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/11-onboard-external-user.md | + + Title: Onboard external users to Line-of-business applications using Azure Active Directory B2B +description: Learn how to onboard external users to Line-of-business applications using Azure Active Directory B2B +++++++ Last updated : 5/08/2023+++++++# Onboard external users to Line-of-business applications using Azure Active Directory B2B ++Application developers can use Azure Active Directory B2B (Azure AD B2B) to onboard and collaborate with external users within line-of-business (LOB) applications. Similar to the **Share** button in many Office 365 applications, application developers can create a one-click invitation experience within any LOB application that is integrated with Azure AD. ++Benefits include: ++- Simple and easy user onboarding and access to the LOB applications with users able to gain access with a few steps. ++- Enables external users to bring their own identity and perform Single sign-on (SSO). ++- Automatic provisioning of external identities to Azure AD. ++- Apply Azure AD Conditional Access and cross tenant access policies to enforce authorization policies such as requiring multi-factor authentication. ++## Integration flow ++To integrate LOB applications with Azure AD B2B, follow this pattern: ++![Screenshot shows the integration of LOB applications.](media/onboard-external-user/integration-flow.png) ++| Step | Description | +|:-|:--| +| 1. | The end user triggers the **invitation** within the LOB application and provides the email address of the external user. The application checks if the user already exists, and if they don't, proceeds to [step #2](#step-2-create-and-send-invitation)| +| 2. | The application sends a POST to the Microsoft Graph API on behalf of the user. It provides the redirect URL and external user's email that is defined in [step #1](#step-1-check-if-the-external-user-already-exists). | +| 3. | Microsoft Graph API provisions the guest user in Azure AD. | +| 4. | Microsoft Graph API returns the success/failure status of the API call. If successful, the response includes the Azure AD user object ID and the invitation link that is sent to the invited user's email. You can optionally suppress the Microsoft email and send your own custom email. | +| 5. | (Optional) If you want to write more attributes to the invited user or add the invited user to a group, the application makes an extra API call to the Microsoft Graph API. | +| 6. | (Optional) Microsoft Graph API makes the desired updates to Azure AD.| +| 7. | (Optional) Microsoft Graph API returns the success/failure status to the application. | +| 8. | The application provisions the user to its own database/backend user directory using the user's object ID attribute as the **immutable ID**. | +| 9. | The application presents the success/failure status to the end user. | ++If assignment is required to access the LOB application, the invited guest user must also be assigned to the application with an appropriate application role. This can be done as another API call adding the invited guest to a group (steps #5-7) or by automating group membership with Azure AD dynamic groups. Using dynamic groups wouldn't require another API call by the application. However, group membership wouldn't be updated as quickly compared to adding a user to a group immediately after user invitation.
++## Step 1: Check if the external user already exists ++It's possible that the external user has previously been invited and onboarded. The LOB application should check whether the user already exists in the directory. There are many approaches; however, the simplest involves making an API call to the Microsoft Graph API and presenting the possible matches to the inviting user for them to pick from. ++For example: ++``` +Application Permission: User.Read.All ++GET https://graph.microsoft.com/v1.0/users?$filter=othermails/any(id:id eq 'userEmail@contoso.com') +``` +If you receive a user's details in the response, then the user already exists. You should present the users returned to the inviting user and allow them to choose which external user they want to grant access. You should proceed to make appropriate API calls or trigger other processes to grant this user access to the application rather than proceeding with the invitation step. ++## Step 2: Create and send invitation ++If the external user doesn't already exist in the directory, you can use Azure AD B2B to invite the user and onboard them to your Azure AD tenant. As an application developer, you need to determine what to include in the invitation request to Microsoft Graph API. ++At minimum, you need to: ++- Prompt the end user to provide the external user's email address. ++- Determine the invitation URL. This URL is where the invited user gets redirected to after they authenticate and redeem the B2B invitation. The URL can be a generic landing page for the application or dynamically determined by the LOB application based on where the end user triggered the invitation. ++More flags and attributes to consider for inclusion in the invitation request: ++- Display name of the invited user. +- Determine whether you want to use the default Microsoft invitation email or suppress the default email to create your own. ++Once the application has collected the required information and determined any other flags or information to include, the application must POST the request to the Microsoft Graph API invitation manager. Ensure the application registration has the appropriate permissions in Azure AD. ++For example: ++``` +Delegated Permission: User.Invite.All ++POST https://graph.microsoft.com/v1.0/invitations +Content-type: application/json ++{ +"invitedUserDisplayName": "John Doe", +"invitedUserEmailAddress": "john.doe@contoso.com", +"sendInvitationMessage": true, +"inviteRedirectUrl": "https://customapp.contoso.com" +} +``` ++>[!NOTE] +> To see the full list of available options for the JSON body of the invitation, check out [invitation resource type - Microsoft Graph v1.0](/graph/api/resources/invitation). ++Application developers can alternatively onboard external users using Azure AD Self-service sign-up or Entitlement management access packages. You can create your **invitation** button in your LOB application that triggers a custom email containing a predefined Self-service sign-up URL or access package URL. The invited user can then self-service onboard and access the application. ++## Step 3: Write other attributes to Azure AD (optional) ++>[!IMPORTANT] +>Granting an application permission to update users in your directory is a highly privileged action. You should take steps to secure and monitor your LOB app if you grant the application these highly privileged permissions.
++Your organization or the LOB application may need to store more information for future use, such as claims emittance in tokens or granular authorization policies. Your application can make another API call to update the external user after they've been invited/created in Azure AD. Doing so requires your application to have extra API permissions and would require an extra call to the Microsoft Graph API. ++To update the user, you need to use the object ID of the newly created guest user received in the response from the invitation API call. This is the **ID** value in the API response from either the existence check or invitation. You can write to any standard attribute or custom extension attributes you may have created. ++For example: ++``` +Application Permission: User.ReadWrite.All ++PATCH https://graph.microsoft.com/v1.0/users/<user's object ID> +Content-type: application/json ++{ +"businessPhones": [ + "+1 234 567 8900" + ], +"givenName": "John", +"surname": "Doe", +"extension_cf4ff515cbf947218d468c96f9dc9021_appRole": "external" +} +``` +For more information, see [Update user - Microsoft Graph v1.0](/graph/api/user-update). ++## Step 4: Assign the invited user to a group ++>[!NOTE] +>If user assignment is not required to access the application, you may skip this step. ++If user assignment is required in Azure AD for application access and/or role assignment, the user must be assigned to the application, or else the user is unable to gain access regardless of successful authentication. To achieve this, you should make another API call to add the invited external user to a specific group. The group can be assigned to the application and mapped to a specific application role. ++For example: ++Permissions: Assign the Group updater role or a custom role to the enterprise application and scope the role assignment to only the group(s) this application should be updating. Or assign the `Group.ReadWrite.All` permission in Microsoft Graph API. ++``` +POST https://graph.microsoft.com/v1.0/groups/<insert group id>/members/$ref +Content-type: application/json ++{ +"@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/<insert user id>" +} +``` +For more information, see [Add members - Microsoft Graph v1.0](/graph/api/group-post-members). + +Alternatively, you can use Azure AD dynamic groups, which can automatically assign users to a group based on the user's attributes. However, if end-user access is time-sensitive this wouldn't be the recommended approach as dynamic groups can take up to 24 hours to populate. ++If you prefer to use dynamic groups, you don't need to add the users to a group explicitly with another API call. Create a dynamic group that automatically adds the user as a member of the group based on available attributes such as userType, email, or a custom attribute. For more information, see [Create or edit a dynamic group and get status](../enterprise-users/groups-create-rule.md). + +## Step 5: Provision the invited user to the application ++Once the invited external user has been provisioned to Azure AD, the Microsoft Graph API returns a response with the necessary user information such as object ID and email. The LOB application can then provision the user to its own directory/database. Depending on the type of application and internal directory type the application uses, the actual implementation of this provisioning varies.
++With the external user provisioned in both Azure AD and the application, the LOB application can now notify the end user who initiated the invitation that the process has been successful. The invited user can get SSO with their own identity without the inviting organization needing to onboard and issue extra credentials. Azure AD can enforce authorization policies such as Conditional Access, Azure AD Multi-Factor Authentication, and risk-based Identity Protection. ++## Other considerations ++- Ensure proper error handling is done within the LOB application. The application should validate that each API call is successful. If unsuccessful, extra attempts and/or presenting error messages to the end user would be appropriate. ++- If you need the LOB application to update external users once they've been invited, consider granting a custom role that allows the application to only update users and assign the scope to a dynamic administrative unit. For example, you can create a dynamic administrative unit to contain all users where usertype = guest. Once the external user is onboarded to Azure AD, it takes some time for them to be added to the administrative unit. So, the LOB application needs to attempt to update the user after some time and it may take more than one attempt if there are delays. Despite these delays, this is the best approach available to enable the LOB application to update external users without granting it permission to update any user in the directory. |
active-directory | 2 Secure Access Current State | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/2-secure-access-current-state.md | + + Title: Discover the current state of external collaboration in your organization +description: Discover the current state of an organization's collaboration with audit logs, reporting, allowlist, blocklist, and more. +++++++ Last updated : 02/23/2023+++++++# Discover the current state of external collaboration in your organization ++Before you learn about the current state of your external collaboration, determine a security posture. Consider centralized vs. delegated control, also governance, regulatory, and compliance targets. ++Learn more: [Determine your security posture for external access with Azure Active Directory](1-secure-access-posture.md) ++Users in your organization likely collaborate with users from other organizations. Collaboration occurs with productivity applications like Microsoft 365, by email, or sharing resources with external users. These scenarios include users: ++* Initiating external collaboration +* Collaborating with external users and organizations +* Granting access to external users ++## Before you begin ++This article is number 2 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Determine who initiates external collaboration ++Generally, users seeking external collaboration know the applications to use, and when access ends. Therefore, determine users with delegated permissions to invite external users, create access packages, and complete access reviews. ++To find collaborating users: ++* Microsoft 365 [Audit log activities](/microsoft-365/compliance/audit-log-activities?view=o365-worldwide&preserve-view=true) - search for events and discover activities audited in Microsoft 365 +* [Auditing and reporting a B2B collaboration user](../external-identities/auditing-and-reporting.md) - verify guest user access, and see records of system and user activities ++## Enumerate guest users and organizations ++External users might be Azure AD B2B users with partner-managed credentials, or external users with locally provisioned credentials. Typically, these users are the Guest UserType. To learn about inviting guests users and sharing resources, see [B2B collaboration overview](../external-identities/what-is-b2b.md). ++You can enumerate guest users with: ++* [Microsoft Graph API](/graph/api/user-list?tabs=http) +* [PowerShell](/graph/api/user-list?tabs=http) +* [Azure portal](../enterprise-users/users-bulk-download.md) ++Use the following tools to identify Azure AD B2B collaboration, external Azure AD tenants, and users accessing applications: ++* PowerShell module, [Get MsIdCrossTenantAccessActivity](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity) +* [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) ++### Discover email domains and companyName property ++You can determine external organizations with the domain names of external user email addresses. This discovery might not be possible with consumer identity providers. We recommend you write the companyName attribute to identify external organizations. ++### Use allowlist, blocklist, and entitlement management ++Use the allowlist or blocklist to enable your organization to collaborate with, or block, organizations at the tenant level. 
Control B2B invitations and redemptions regardless of source (such as Microsoft Teams, SharePoint, or the Azure portal). ++See, [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md) ++If you use entitlement management, you can confine access packages to a subset of partners with the **Specific connected organizations** option, under New access packages, in Identity Governance. ++ ![Screenshot of settings and options under Identity Governance, New access package.](media/secure-external-access/2-new-access-package.png) ++## Determine external user access ++With an inventory of external users and organizations, determine the access to grant to the users. You can use the Microsoft Graph API to determine Azure AD group membership or application assignment. ++* [Working with groups in Microsoft Graph](/graph/api/resources/groups-overview?context=graph%2Fcontext&view=graph-rest-1.0&preserve-view=true) +* [Applications API overview](/graph/applications-concept-overview?view=graph-rest-1.0&preserve-view=true) ++### Enumerate application permissions ++Investigate access to your sensitive apps for awareness about external access. See, [Grant or revoke API permissions programmatically](/graph/permissions-grant-via-msgraph?view=graph-rest-1.0&tabs=http&pivots=grant-application-permissions&preserve-view=true). ++### Detect informal sharing ++If your email and network plans are enabled, you can investigate content sharing through email or unauthorized software as a service (SaaS) apps. ++* Identify, prevent, and monitor accidental sharing + * [Learn about data loss prevention](/microsoft-365/compliance/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true ) +* Identify unauthorized apps + * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) (You're here) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) + |
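The entry above points to enumerating guest users with the Microsoft Graph API. As a minimal illustrative sketch only (not part of the quoted article), the request below lists guest accounts with the fields that help identify external organizations; filtering on `userType` is assumed to need advanced query support, so the `ConsistencyLevel` header and `$count` parameter are included.

```
Application Permission: User.Read.All

GET https://graph.microsoft.com/v1.0/users?$filter=userType eq 'Guest'&$select=displayName,mail,companyName&$count=true
ConsistencyLevel: eventual
```

The `companyName` value returned here is the attribute the quoted guidance recommends populating to identify external organizations.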
active-directory | 3 Secure Access Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/3-secure-access-plan.md | + + Title: Create a security plan for external access to resources +description: Plan the security for external access to your organization's resources. +++++++ Last updated : 02/23/2023+++++++# Create a security plan for external access to resources ++Before you create an external-access security plan, review the following two articles, which add context and information for the security plan. ++* [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) +* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++## Before you begin ++This article is number 3 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Security plan documentation ++For your security plan, document the following information: ++* Applications and resources grouped for access +* Sign-in conditions for external users + * Device state, sign-in location, client application requirements, user risk, etc. +* Policies to determine timing for reviews and access removal +* User populations grouped for similar experiences ++To implement the security plan, you can use Microsoft identity and access management policies, or another identity provider (IdP). ++Learn more: [Identity and access management overview](/compliance/assurance/assurance-identity-and-access-management) ++## Use groups for access ++See the following links to articles about resource grouping strategies: ++* Microsoft Teams groups files, conversation threads, and other resources + * Formulate an external access strategy for Teams + * See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) +* Use entitlement management access packages to create and delegate package management of applications, groups, teams, SharePoint sites, etc. + * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) +* Apply Conditional Access policies to up to 250 applications, with the same access requirements + * [What is Conditional Access?](../conditional-access/overview.md) +* Define access for external user application groups + * [Overview: Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md) ++Document the grouped applications. Considerations include: ++* **Risk profile** - assess the risk if a bad actor gains access to an application + * Identify application as High, Medium, or Low risk. We recommend you don't group High-risk with Low-risk. + * Document applications that can't be shared with external users +* **Compliance frameworks** - determine compliance frameworks for apps + * Identify access and review requirements +* **Applications for roles or departments** - assess applications grouped for role, or department, access +* **Collaboration applications** - identify collaboration applications external users can access, such as Teams or SharePoint + * For productivity applications, external users might have licenses, or you might provide access ++Document the following information for application and resource group access by external users. 
++* Descriptive group name, for example High_Risk_External_Access_Finance +* Applications and resources in the group +* Application and resource owners and their contact information +* The IT team controls access, or control is delegated to a business owner +* Prerequisites for access: background check, training, etc. +* Compliance requirements to access resources +* Challenges, for example multi-factor authentication (MFA) for some resources +* Cadence for reviews, by whom, and where results are documented ++> [!TIP] +> Use this type of governance plan for internal access. ++## Document sign-in conditions for external users ++Determine the sign-in requirements for external users who request access. Base requirements on the resource risk profile, and the user's risk assessment during sign-in. Configure sign-in conditions using Conditional Access: a condition and an outcome. For example, you can require MFA. ++Learn more: [What is Conditional Access?](../conditional-access/overview.md) ++**Resource risk-profile sign-in conditions** ++Consider the following risk-based policies to trigger MFA. ++* **Low** - MFA for some application sets +* **Medium** - MFA when other risks are present +* **High** - external users always use MFA ++Learn more: ++* [Tutorial: Enforce multi-factor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md) +* Trust MFA from external tenants + * See, [Configure cross-tenant access settings for B2B collaboration, Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings) ++### User and device sign-in conditions ++Use the following table to help assess policy to address risk. ++| User or sign-in risk| Proposed policy | +| | | +| Device| Require compliant devices | +| Mobile apps| Require approved apps | +| Identity protection is High risk| Require user to change password | +| Network location| To access confidential projects, require sign-in from an IP address range | ++To use device state as policy input, register or join the device to your tenant. To trust the device claims from the home tenant, configure cross-tenant access settings. See, [Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings). ++You can use identity-protection risk policies. However, mitigate issues in the user home tenant. See, [Common Conditional Access policy: Sign-in risk-based multifactor authentication](../conditional-access/howto-conditional-access-policy-risk.md). ++For network locations, you can restrict access to IP addresses ranges that you own. Use this method if external partners access applications while at your location. See, [Conditional Access: Block access by location](../conditional-access/howto-conditional-access-policy-location.md) ++## Document access review policies ++Document policies that dictate when to review resource access, and remove account access for external users. 
Inputs might include: ++* Compliance frameworks requirements +* Internal business policies and processes +* User behavior ++Generally, organizations customize policy, however consider the following parameters: ++* **Entitlement management access reviews**: + * [Change lifecycle settings for an access package in entitlement management](../governance/entitlement-management-access-package-lifecycle-policy.md) + * [Create an access review of an access package in entitlement management](../governance/entitlement-management-access-reviews-create.md) + * [Add a connected organization in entitlement management](../governance/entitlement-management-organization.md): group users from a partner and schedule reviews +* **Microsoft 365 groups** + * [Microsoft 365 group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy?view=o365-worldwide&preserve-view=true) +* **Options**: + * If external users don't use access packages or Microsoft 365 groups, determine when accounts become inactive or deleted + * Remove sign-in for accounts that don't sign in for 90 days + * Regularly assess access for external users ++## Access control methods ++Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD Premium P2 licenses. Learn more in the following entitlement management section. ++> [!NOTE] +> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD Premium P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management. ++Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance). ++## Govern access with Azure AD Premium P2 and Microsoft 365 or Office 365 E5 ++Azure AD Premium P2, included in Microsoft 365 E5, has additional security and governance capabilities. ++### Provision, sign-in, review access, and deprovision access ++Entries in bold are recommended actions. ++| Feature| Provision external users| Enforce sign-in requirements| Review access| Deprovision access | +| - | - | - | - | - | +| Azure AD B2B collaboration| Invite via email, one-time password (OTP), self-service|N/A| **Periodic partner review**| Remove account<br>Restrict sign-in | +| Entitlement management| **Add user by assignment or self-service access**|N/A| Access reviews|**Expiration of, or removal from, access package**| +| Office 365 groups|N/A|N/A| Review group memberships| Group expiration or deletion<br> Removal from group | +| Azure AD security groups|N/A| **Conditional Access policies**: Add external users to security groups as needed|N/A| N/A| ++### Resource access + +Entries in bold are recommended actions. 
++|Feature | App and resource access| SharePoint and OneDrive access| Teams access| Email and document security | +| - |-|-|-|-| +| Entitlement management| **Add user by assignment or self-service access**| **Access packages**| **Access packages**| N/A| +| Office 365 Group|N/A | Access to site(s) and group content| Access to teams and group content|N/A| +| Sensitivity labels|N/A| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access** | +| Azure AD security groups| **Conditional Access policies for access not included in access packages**|N/A|N/A|N/A| ++### Entitlement management  ++Use entitlement management to provision and deprovision access to groups and teams, applications, and SharePoint sites. Define the connected organizations granted access, self-service requests, and approval workflows. To ensure access ends correctly, define expiration policies and access reviews for packages. ++Learn more: [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) ++## Manage access with Azure AD P1, Microsoft 365, Office 365 E3 ++### Provision, sign-in, review access, and deprovision access ++Items in bold are recommended actions. ++|Feature | Provision external users| Enforce sign-in requirements| Review access| Deprovision access | +| - |-|-|-|-| +| Azure AD B2B collaboration| **Invite by email, OTP, self-service**| Direct B2B federation| **Periodic partner review**| Remove account<br>Restrict sign-in | +| Microsoft 365 or Office 365 groups|N/A|N/A|N/A|Group expiration or deletion<br>Removal from group | +| Security groups|N/A| **Add external users to security groups (org, team, project, etc.)**|N/A| N/A| +| Conditional Access policies|N/A| **Sign-in Conditional Access policies for external users**|N/A|N/A| ++### Resource access ++|Feature | App and resource access| SharePoint and OneDrive access| Teams access| Email and document security | +| - |-|-|-|-| +| Microsoft 365 or Office 365 groups|N/A| **Access to group site(s) and associated content**|**Access to Microsoft 365 group teams and associated content**|N/A| +| Sensitivity labels|N/A| Manually classify and restrict access| Manually classify and restrict access| Manually classify to restrict and encrypt | +| Conditional Access policies| Conditional Access policies for access control|N/A|N/A|N/A| +| Other methods|N/A| Restrict SharePoint site access with security groups<br>Disallow direct sharing| **Restrict external invitations from a team**|N/A| ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) (You're here) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. 
[Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
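The security-plan entry above recommends Conditional Access that always requires MFA for high-risk external access. The following is a hedged sketch only, not part of the quoted article: it creates such a policy through the Microsoft Graph API, with an arbitrary display name and in report-only state so it can be reviewed before enforcement; adjust the targeted applications to match your own grouping.

```
Delegated Permission: Policy.ReadWrite.ConditionalAccess

POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
  "displayName": "Require MFA for guests and external users (sketch)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["GuestsOrExternalUsers"] },
    "applications": { "includeApplications": ["All"] },
    "clientAppTypes": ["all"]
  },
  "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
}
```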
active-directory | 4 Secure Access Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/4-secure-access-groups.md | + + Title: Secure external access with groups in Azure Active Directory and Microsoft 365 +description: Azure Active Directory and Microsoft 365 Groups can be used to increase security when external users access your resources. +++++++ Last updated : 02/09/2023+++++++# Secure external access with groups in Azure Active Directory and Microsoft 365 ++Groups are part of an access control strategy. You can use Azure Active Directory (Azure AD) security groups and Microsoft 365 Groups as the basis for securing access to resources. Use groups for the following access-control mechanisms: ++* Conditional Access policies + * [What is Conditional Access?](../conditional-access/overview.md) +* Entitlement management access packages + * [What is entitlement management?](../governance/entitlement-management-overview.md) +* Access to Microsoft 365 resources, Microsoft Teams, and SharePoint sites ++Groups have the following roles: ++* **Group owners** ΓÇô manage group settings and its membership +* **Members** ΓÇô inherit permissions and access assigned to the group +* **Guests** ΓÇô are members outside your organization ++## Before you begin ++This article is number 4 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Group strategy ++To develop a group strategy to secure external access to your resources, consider the security posture that you want. ++Learn more: [Determine your security posture for external access](1-secure-access-posture.md) ++### Group creation ++Determine who is granted permissions to create groups: Administrators, employees, and/or external users. Consider the following scenarios: ++* Tenant members can create Azure AD security groups +* Internal and external users can join groups in your tenant +* Users can create Microsoft 365 Groups +* [Manage who can create Microsoft 365 Groups](/microsoft-365/solutions/manage-creation-of-groups?view=o365-worldwide&preserve-view=true) + * Use Windows PowerShell to configure this setting +* [Restrict your Azure AD app to a set of users in an Azure AD tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) +* [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md) +* [Troubleshoot and resolve groups issues](../enterprise-users/groups-troubleshooting.md) ++### Invitations to groups ++As part of the group strategy, consider who can invite people, or add them, to groups. Group members can add other members, or group owners can add members. Decide who can be invited. By default, external users can be added to groups. ++### Assign users to groups ++Users are assigned to groups manually, based on user attributes in their user object, or users are assigned based on other criteria. Users are assigned to groups dynamically based on their attributes. For example, you can assign users to groups based on: ++* Job title or department +* Partner organization to which they belong + * Manually, or through connected organizations +* Member or guest user type +* Participation in a project + * Manually +* Location ++Dynamic groups have users or devices, but not both. To assign users to the dynamic group, add queries based on user attributes. 
The following screenshot has queries that add users to the group if they are finance department members. ++ ![Screenshot of options and entries under Dynamic membership rules.](media/secure-external-access/4-dynamic-membership-rules.png) ++Learn more: [Create or update a dynamic group in Azure AD](../enterprise-users/groups-create-rule.md) ++### Use groups for one function ++When using groups, it's important they have a single function. If a group is used to grant access to resources, don't use it for another purpose. We recommend a security-group naming convention that makes the purpose clear: ++* Secure_access_finance_apps +* Team_membership_finance_team +* Location_finance_building ++### Group types ++You can create Azure AD security groups and Microsoft 365 Groups in the Azure portal or the Microsoft 365 Admin portal. Use either group type for securing external access. ++| Considerations |Manual and dynamic Azure AD security groups| Microsoft 365 Groups | +| - | - | - | +| The group contains| Users<br>Groups<br>Service principals<br>Devices| Users only | +| Where the group is created| Azure portal<br>Microsoft 365 portal (if mail-enabled)<br>PowerShell<br>Microsoft Graph<br>End user portal| Microsoft 365 portal<br>Azure portal<br>PowerShell<br>Microsoft Graph<br>In Microsoft 365 applications | +| Who creates, by default| Administrators <br>Users| Administrators<br>Users | +| Who is added, by default| Internal users (tenant members) and guest users | Tenant members and guests from an organization | +| Access is granted to| Resources to which it's assigned.| Group-related resources:<br>(Group mailbox, site, team, chats, and other Microsoft 365 resources)<br>Other resources to which group is added | +| Can be used with| Conditional Access<br>entitlement management<br>group licensing| Conditional Access<br>entitlement management<br>sensitivity labels | ++> [!NOTE] +> Use Microsoft 365 Groups to create and manage a set of Microsoft 365 resources, such as a Team and its associated sites and content. ++## Azure AD security groups ++Azure AD security groups can have users or devices. Use these groups to manage access to: ++* Azure resources + * Microsoft 365 apps + * Custom apps + * Software as a Service (SaaS) apps such as Dropbox or ServiceNow +* Azure data and subscriptions +* Azure services ++Use Azure AD security groups to assign: ++* Licenses for services + * Microsoft 365 + * Dynamics 365 + * Enterprise mobility and security + * See, [What is group-based licensing in Azure Active Directory?](../fundamentals/licensing-whatis-azure-portal.md) +* Elevated permissions + * See, [Use Azure AD groups to manage role assignments](../roles/groups-concept.md) ++Learn more: ++* [Manage Azure AD groups and group membership](../fundamentals/how-to-manage-groups.md) +* [Azure AD version 2 cmdlets for group management](../enterprise-users/groups-settings-v2-cmdlets.md). ++> [!NOTE] +> Use security groups to assign up to 1,500 applications. ++ ![Screenshot of entries and options under New Group.](media/secure-external-access/4-create-security-group.png) ++### Mail-enabled security group ++To create a mail-enabled security group, go to the [Microsoft 365 admin center](https://admin.microsoft.com/). Enable a security group for mail during creation. You can't enable it later. You can't create the group in the Azure portal. ++### Hybrid organizations and Azure AD security groups ++Hybrid organizations have on-premises infrastructure and an Azure AD tenant.
Hybrid organizations that use Active Directory can create security groups on-premises and sync them to the cloud. Therefore, only users in the on-premises environment can be added to the security groups. ++> [!IMPORTANT] +> Protect your on-premises infrastructure from compromise. See, [Protecting Microsoft 365 from on-premises attacks](./protect-m365-from-on-premises-attacks.md). ++## Microsoft 365 Groups ++Microsoft 365 Groups is the membership service for access across Microsoft 365. They can be created from the Azure portal, or the Microsoft 365 admin center. When you create a Microsoft 365 Group, you grant access to a group of resources for collaboration. ++Learn more: ++* [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide&preserve-view=true) +* [Create a group in the Microsoft 365 admin center](/microsoft-365/admin/create-groups/create-groups?view=o365-worldwide&preserve-view=true) +* [Azure portal](https://portal.azure.com/) +* [Microsoft 365 admin center](https://admin.microsoft.com/) ++### Microsoft 365 Groups roles ++* **Group owners** + * Add or remove members + * Delete conversations from the shared inbox + * Change group settings + * Rename the group + * Update the description or picture +* **Members** + * Access everything in the group + * Can't change group settings + * Can invite guests to join the group + * [Manage guest access in Microsoft 365 groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups) +* **Guests** + * Are members from outside your organization + * Have some limits to functionality in Teams ++### Microsoft 365 Group settings ++Select email alias, privacy, and whether to enable the group for teams. ++ ![Screenshot of options and entries under Edit settings.](media/secure-external-access/4-edit-group-settings.png) ++After setup, add members, and configure settings for email usage, etc. ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) (You're here) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
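The entry above describes dynamic membership rules (for example, adding finance department members) and a naming convention such as Secure_access_finance_apps. As an illustrative sketch only, not part of the quoted article, the request below creates a dynamic Azure AD security group with that kind of rule; the display name, mail nickname, and rule expression are assumptions you would adapt to your own attributes.

```
Application Permission: Group.ReadWrite.All

POST https://graph.microsoft.com/v1.0/groups
Content-type: application/json

{
  "displayName": "Secure_access_finance_apps",
  "mailEnabled": false,
  "mailNickname": "secureaccessfinanceapps",
  "securityEnabled": true,
  "groupTypes": ["DynamicMembership"],
  "membershipRule": "(user.department -eq \"Finance\") and (user.userType -eq \"Member\")",
  "membershipRuleProcessingState": "On"
}
```

Restricting the rule to `user.userType -eq "Member"` keeps guests out of this particular group; drop that clause only if external users are meant to be included.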
active-directory | 5 Secure Access B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/5-secure-access-b2b.md | + + Title: Transition to governed collaboration with Azure Active Directory B2B collaboration +description: Move to governed collaboration with Azure Ad B2B collaboration by using controls, tools, and settings. +++++++ Last updated : 02/22/2023+++++++# Transition to governed collaboration with Azure Active Directory B2B collaboration ++Understanding collaboration helps secure external access to your resources. Use the information in this article to move external collaboration into Azure Active Directory B2B (Azure AD B2B) collaboration. ++* See, [B2B collaboration overview](../external-identities/what-is-b2b.md) +* Learn about: [External Identities in Azure AD](../external-identities/external-identities-overview.md) ++## Before you begin ++This article is number 5 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Control collaboration ++You can limit the organizations your users collaborate with (inbound and outbound), and who in your organization can invite guests. Most organizations permit business units to decide collaboration, and delegate approval and oversight. For example, organizations in government, education, and finance often don't permit open collaboration. You can use Azure AD features to control collaboration. ++To control access your tenant, deploy one or more of the following solutions: ++- **External collaboration settings** – restrict the email domains that invitations go to +- **Cross tenant access settings** – control application access by guests by user, group, or tenant (inbound). Control external Azure AD tenant and application access for users (outbound). +- **Connected organizations** – determine what organizations can request access packages in Entitlement Management ++### Determine collaboration partners ++Document the organizations you collaborate with, and organization users' domains, if needed. Domain-based restrictions might be impractical. One collaboration partner can have multiple domains, and a partner can add domains. For example, a partner with multiple business units, with separate domains, can add more domains as they configure synchronization. ++If your users use Azure AD B2B, you can discover the external Azure AD tenants they're collaborating with, with the sign-in logs, PowerShell, or a workbook. Learn more: ++* [Get MsIdCrossTenantAccessActivity](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity) +* [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) ++You can enable future collaboration with: ++- **External organizations** - most inclusive +- **External organizations, but not denied organizations** +- **Specific external organizations** - most restrictive ++> [!NOTE] +> If your collaboration settings are highly restrictive, your users might go outside the collaboration framework. We recommend you enable a broad collaboration that your security requirements allow. ++Limits to one domain can prevent authorized collaboration with organizations that have other, unrelated domains. For example, the initial point of contact with Contoso might be a US-based employee with email that has a `.com` domain. However if you allow only the `.com` domain, you can omit Canadian employees who have the `.ca` domain. 
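If you can't deploy the MSIdentityTools cmdlet or the workbook linked above, a rough, unofficial alternative is to summarize recent guest sign-ins by home tenant. The sketch below assumes the Microsoft Graph PowerShell SDK, the `AuditLog.Read.All` permission, and that sign-in records expose `HomeTenantId` (available on recent API versions; if the value is empty in your tenant, the beta reports module may be needed).

```powershell
# Sketch: list the external tenants whose users signed in to your tenant this week.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$since   = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")
$signIns = Get-MgAuditLogSignIn -All -Filter "createdDateTime ge $since"

# Keep sign-ins whose home tenant differs from this tenant, then count per tenant
$signIns |
    Where-Object { $_.HomeTenantId -and $_.HomeTenantId -ne (Get-MgContext).TenantId } |
    Group-Object -Property HomeTenantId |
    Sort-Object Count -Descending |
    Select-Object @{ n = 'ExternalTenantId'; e = { $_.Name } }, Count
```

The output is only a starting point for a partner inventory; the workbook adds outbound activity and application detail.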
++You can allow specific collaboration partners for a subset of users. For example, a university might restrict student accounts from accessing external tenants, but can allow faculty to collaborate with external organizations. ++### Allowlist and blocklist with external collaboration settings ++You can use an allowlist or blocklist for organizations. You can use an allowlist, or a blocklist, not both. ++* **Allowlist** - limit collaboration to a list of domains. Other domains are on the blocklist. +* **Blocklist** - allow collaboration with domains not on the blocklist ++Learn more: [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md) ++> [!IMPORTANT] +> Allowlists and blocklists don't apply to users in your directory. By default, they don't apply to OneDrive for Business and SharePoint allowlist or blocklists; these lists are separate. However, you can enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration). ++Some organizations have a blocklist of bad-actor domains from a managed security provider. For example, if the organization does business with Contoso and uses a `.com` domain, an unrelated organization can use the `.org` domain, and attempt a phishing attack. ++### Cross tenant access settings ++You can control inbound and outbound access using cross tenant access settings. In addition, you can trust multi-factor authentication (MFA), a compliant device, and hybrid Azure Active Directory joined device (HAAJD) claims from external Azure AD tenants. When you configure an organizational policy, it applies to the Azure AD tenant and applies to users in that tenant, regardless of domain suffix. ++You can enable collaboration across Microsoft clouds, such as Microsoft Azure operated by 21Vianet (Azure China) or Azure Government. Determine if your collaboration partners reside in a different Microsoft cloud. ++Learn more: ++* [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations) +* [Azure Government developer guide](/azure/azure-government/documentation-government-developer-guide) +* [Configure Microsoft cloud settings for B2B collaboration (Preview)](../external-identities/cross-cloud-settings.md). ++You can allow inbound access to specific tenants (allowlist), and set the default policy to block access. Then, create organizational policies that allow access by user, group, or application. ++You can block access to tenants (blocklist). Set the default policy to **Allow** and then create organizational policies that block access to some tenants. ++> [!NOTE] +> Cross tenant access settings, inbound access does not prevent users from sending invitations, nor prevent them from being redeemed. However, it does control application access and whether a token is issued to the guest user. If the guest can redeem an invitation, policy blocks application access. ++To control external organizations users access, configure outbound access policies similarly to inbound access: allowlist and blocklist. Configure default and organization-specific policies. ++Learn more: [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) ++> [!NOTE] +> Cross tenant access settings apply to Azure AD tenants. To control access for partners not using Azure AD, use external collaboration settings. ++### Entitlement management and connected organizations ++Use entitlement management to ensure automatic guest-lifecycle governance. 
Create access packages and publish them to external users or to connected organizations, which support Azure AD tenants and other domains. When you create an access package, restrict access to connected organizations. ++Learn more: [What is entitlement management?](../governance/entitlement-management-overview.md) ++## Control external user access ++To begin collaboration, invite or enable a partner to access resources. Users gain access by: ++* [Azure AD B2B collaboration invitation redemption](../external-identities/redemption-experience.md) +* [Self-service sign-up](../external-identities/self-service-sign-up-overview.md) +* [Requesting access to an access package in entitlement management](../governance/entitlement-management-request-access.md) ++When you enable Azure AD B2B, you can invite guest users with links and email invitations. Self-service sign-up, and publishing access packages to the My Access portal, require more configuration. ++> [!NOTE] +> Self-service sign-up enforces no allowlist or blocklist in external collaboration settings. Instead, use cross tenant access settings. You can integrate allowlists and blocklists with self-service sign-up using custom API connectors. See, [Add an API connector to a user flow](../external-identities/self-service-sign-up-add-api-connector.md). ++### Guest user invitations ++Determine who can invite guest users to access resources. ++* Most restrictive: Allow only administrators and users with the Guest Inviter role + * See, [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md) +* If security requirements permit, allow all Member UserType to invite guests +* Determine if Guest UserType can invite guests + * Guest is the default Azure AD B2B user account ++ ![Screenshot of guest invitation settings.](media/secure-external-access/5-guest-invite-settings.png) ++### External user information ++Use Azure AD entitlement management to configure questions that external users answer. The questions appear to approvers to help them make a decision. You can configure sets of questions for each access package policy, so approvers have relevant information for access they approve. For example, ask vendors for their vendor contract number. ++Learn more: [Change approval and requestor information settings for an access package in entitlement management](../governance/entitlement-management-access-package-approval-policy.md) ++If you use a self-service portal, use API connectors to collect user attributes during sign-up. Use the attributes to assign access. You can create custom attributes in the Azure portal and use them in your self-service sign-up user flows. Read and write these attributes by using the Microsoft Graph API. ++Learn more: ++* [Use API connectors to customize and extend self-service sign-up](../external-identities/api-connectors-overview.md) +* [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md) ++### Troubleshoot invitation redemption to Azure AD users ++Invited guest users from a collaboration partner can have trouble redeeming an invitation. See the following list for mitigations. ++* User domain isn't on an allowlist +* The partner’s home tenant restrictions prevent external collaboration +* The user isn't in the partner Azure AD tenant. For example, users at contoso.com are in Active Directory. 
+ * They can redeem invitations with the email one-time password (OTP) + * See, [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md) ++## External user access ++Generally, there are resources you can share with external users, and some you can't. You can control what external users access. ++Learn more: [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md) ++By default, guest users see information and attributes about tenant members and other partners, including group memberships. Consider limiting external user access to this information. ++ ![Screenshot of guest user access options on External collaboration settings.](media/secure-external-access/5-external-collaboration-settings.png) ++We recommend the following guest-user restrictions: ++* Limit guest access to browsing groups and other properties in the directory + * Use external collaboration settings to restrict guests from reading groups they aren't members of +* Block access to employee-only apps + * Create a Conditional Access policy to block guest access to Azure AD-integrated applications intended only for non-guest users +* Block access to the Azure portal + * You can make needed exceptions + * Create a Conditional Access policy that includes all guest and external users, and configure the policy to block access ++Learn more: [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md) ++## Remove users who don't need access ++Establish a process to review and remove users who don't need access. Include external users in your tenant as guests, and users with member accounts. ++Learn more: [Use Azure AD Identity Governance to review and remove external users who no longer have resource access](../governance/access-reviews-external-users.md) ++Some organizations add external users as members (vendors, partners, and contractors). Assign an attribute, or username: ++* **Vendors** - v-alias@contoso.com +* **Partners** - p-alias@contoso.com +* **Contractors** - c-alias@contoso.com ++Evaluate external users with member accounts to determine access. You might have guest users not invited through entitlement management or Azure AD B2B. ++To find these users: ++* [Use Azure AD Identity Governance to review and remove external users who no longer have resource access](../governance/access-reviews-external-users.md) +* Use a sample PowerShell script on [access-reviews-samples/ExternalIdentityUse/](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse) ++## Transition current external users to Azure AD B2B ++If you don't use Azure AD B2B, you likely have non-employee users in your tenant. We recommend you transition these accounts to Azure AD B2B external user accounts and then change their UserType to Guest. Use Azure AD and Microsoft 365 to handle external users. ++Include or exclude: ++* Guest users in Conditional Access policies +* Guest users in access packages and access reviews +* External access to Microsoft Teams, SharePoint, and other resources ++You can transition these internal users while maintaining current access, user principal name (UPN), and group memberships. ++Learn more: [Invite external users to B2B collaboration](../external-identities/invite-internal-users.md) ++## Decommission collaboration methods ++To complete the transition to governed collaboration, decommission unwanted collaboration methods. 
Decommissioning is based on the level of control to exert on collaboration, and the security posture. See, [Determine your security posture for external access](1-secure-access-posture.md). ++### Microsoft Teams invitation ++By default, Teams allows external access. The organization can communicate with external domains. To restrict or allow domains for Teams, use the [Teams admin center](https://admin.teams.microsoft.com/company-wide-settings/external-communications). ++### Sharing through SharePoint and OneDrive ++Sharing through SharePoint and OneDrive adds users not in the entitlement management process. ++* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) +* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office) ++### Emailed documents and sensitivity labels ++Users send documents to external users by email. You can use sensitivity labels to restrict and encrypt access to documents. ++See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true). ++### Unsanctioned collaboration tools ++Some users likely use Google Docs, DropBox, Slack, or Zoom. You can block use of these tools from a corporate network, at the firewall level, and with mobile application management for organization-managed devices. However, this action blocks sanctioned instances and doesn't block access from unmanaged devices. Block tools you don’t want, and create policies for no unsanctioned usage. ++For more information on governing applications, see: ++* [Governing connected apps](/defender-cloud-apps/governance-actions) +* [Govern discovered apps](/defender-cloud-apps/governance-discovery) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) (You're here) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
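When a one-off invitation is still appropriate, it can be sent programmatically instead of from the portal. A minimal sketch, assuming the Microsoft Graph PowerShell SDK and the `User.Invite.All` permission; the email address, display name, and redirect URL are placeholders.

```powershell
# Sketch: invite one external user as an Azure AD B2B guest.
Connect-MgGraph -Scopes "User.Invite.All"

$invitation = New-MgInvitation `
    -InvitedUserEmailAddress "partner.user@partner.example.com" `
    -InvitedUserDisplayName  "Partner User" `
    -InviteRedirectUrl       "https://myapps.microsoft.com" `
    -SendInvitationMessage:$true

# The guest object created by the invitation; UserType is Guest by default
$invitation.InvitedUser.Id
```

Pair any scripted invitations with the cross-tenant access settings and allowlist or blocklist decisions described earlier, because the invitation itself doesn't determine application access.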
active-directory | 6 Secure Access Entitlement Managment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/6-secure-access-entitlement-managment.md | + + Title: Manage external access with Azure Active Directory entitlement management +description: How to use Azure AD Entitlement Management as a part of your overall external access security plan. +++++++ Last updated : 02/23/2023+++++++# Manage external access with Azure Active Directory entitlement management ++Use the entitlement management feature to manage the identity and access lifecycle. You can automate access request workflows, access assignments, reviews, and expiration. Delegated non-admins use entitlement management to create access packages that external users, from other organizations, can request access to. One and multi-stage approval workflows are configurable to evaluate requests, and provision users for time-limited access with recurring reviews. Use entitlement management for policy-based provisioning and deprovisioning of external accounts. ++Learn more: ++* [What is entitlement management?](../governance/entitlement-management-overview.md) +* [What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them) +* [What is provisioning?](../governance/what-is-provisioning.md) ++## Before you begin ++This article is number 6 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Enable entitlement management ++The following key concepts are important to understand for entitlement management. ++### Access packages ++An access package is the foundation of entitlement management: groupings of policy-governed resources for users to collaborate on a project or do other tasks. For example, an access package might include: ++* Access to SharePoint sites +* Enterprise applications, including your custom in-house and SaaS apps, like Salesforce +* Microsoft Teams +* Microsoft 365 Groups ++### Catalogs ++Access packages reside in catalogs. When you want to group related resources and access packages and delegate their management, you create a catalog. First, you add resources to a catalog, and then you can add resources to access packages. For example, you can create a finance catalog, and delegate its management to a member of the finance team. That person can add resources, create access packages, and manage access approval. ++Learn more: ++* [Create and manage a catalog of resources in entitlement management](../governance/entitlement-management-catalog-create.md) +* [Delegation and roles in entitlement management](../governance/entitlement-management-delegate.md) +* [Add resources to a catalog](../governance/entitlement-management-catalog-create.md#add-resources-to-a-catalog) ++The following diagram shows a typical governance lifecycle of an external user gaining access to an access package, with an expiration. ++ ![A diagram of the external user governance cycle.](media/secure-external-access/6-governance-lifecycle.png) ++### Self-service external access ++You can make access packages available, through the Azure AD My Access portal, to enable external users to request access. Policies determine who can request an access package. See, [Request access to an access package in entitlement management](../governance/entitlement-management-request-access.md). 
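Catalogs and access packages can also be created programmatically, which helps when you onboard many partner projects with the same pattern. The following sketch calls the documented entitlement management endpoints through `Invoke-MgGraphRequest`; the names are placeholders, the `EntitlementManagement.ReadWrite.All` permission is assumed, and the payload shapes should be confirmed against the Microsoft Graph reference.

```powershell
# Sketch: create a catalog, then an access package inside it.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# 1. Create a catalog (name is a placeholder); external visibility lets
#    connected-organization users see its packages.
$catalogBody = @{ displayName = "Finance partner catalog"; isExternallyVisible = $true }
$catalog = Invoke-MgGraphRequest -Method POST `
    -Uri "v1.0/identityGovernance/entitlementManagement/catalogs" `
    -Body ($catalogBody | ConvertTo-Json) -ContentType "application/json"

# 2. Create an access package in that catalog
$packageBody = @{
    displayName = "Finance project access"
    description = "Time-limited access for external finance partners"
    catalog     = @{ id = $catalog.id }
}
$package = Invoke-MgGraphRequest -Method POST `
    -Uri "v1.0/identityGovernance/entitlementManagement/accessPackages" `
    -Body ($packageBody | ConvertTo-Json -Depth 5) -ContentType "application/json"

$package.id
```

Resource roles and the request policy that controls who can request the package are added afterward, either in the portal or with further calls.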
++You specify who is allowed to request the access package: ++* Connected organizations + * See, [Add a connected organization in entitlement management](../governance/entitlement-management-organization.md) +* Configured connected organizations +* Users from organizations +* Member or guest users in your tenant ++### Approvals ++Access packages can include mandatory approval for access. Approvals can be single or multi-stage and are determined by policies. If internal and external users need to access the same package, you can set up access policies for categories of connected organizations, and for internal users. ++> [!IMPORTANT] +> Implement approval processes for external users. ++### Expiration ++Access packages can include an expiration date or a number of days you set for access. When the access package expires, and access ends, the B2B guest user object representing the user can be deleted or blocked from signing in. We recommend you enforce expiration on access packages for external users. Not all access packages have expirations. ++> [!IMPORTANT] +> For packages without expiration, perform regular access reviews. ++### Access reviews ++Access packages can require periodic access reviews, which require the package owner or a designee to attest to the continued need for usersΓÇÖ access. See, [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md). ++Before you set up your review, determine the following criteria: ++* Who + * Criteria for continued access + * Reviewers +* How often + * Built-in options are monthly, quarterly, bi-annually, or annually + * We recommend quarterly, or more frequent, reviews for packages that support external access ++> [!IMPORTANT] +> Access package reviews examine access granted through entitlement management. Set up other processes to review access to external users, outside entitlement management. ++Learn more: [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md). ++## Using entitlement management automation ++* [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-overview?view=graph-rest-1.0&preserve-view=true ) +* [accessPackage resource type](/graph/api/resources/accesspackage?view=graph-rest-1.0&preserve-view=true ) +* [Azure AD access reviews](/graph/api/resources/accessreviewsv2-overview?view=graph-rest-1.0&preserve-view=true ) +* [connectedOrganization resource type](/graph/api/resources/connectedorganization?view=graph-rest-1.0&preserve-view=true ) +* [entitlementManagementSettings resource type](/graph/api/resources/entitlementmanagementsettings?view=graph-rest-1.0&preserve-view=true ) ++## External access governance recommendations ++### Best practices ++We recommend the following practices to govern external access with entitlement management. ++* For projects with one or more business partners, create and use access packages to onboard and provide access to resources. + * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) +* If you have B2B users in your directory, you can assign them to access packages. 
+* You can assign access in the Azure portal or with Microsoft Graph + * [View, add, and remove assignments for an access package in entitlement management](../governance/entitlement-management-access-package-assignments.md) + * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) ++### Identity Governance - Settings ++Use **Identity Governance - Settings** to remove users from your directory when their access packages expire. The following settings apply to users onboarded with entitlement management. ++ ![Screenshot of settings and entries for Manage the lifecycle of external users.](media/secure-external-access/6-manage-external-lifecycle.png) ++### Delegate catalog and package management ++You can delegate catalog and package management to business owners, who know best who should have access. See, [Delegation and roles in entitlement management](../governance/entitlement-management-delegate.md) ++ ![Screenshot of options and entries under Roles and administrators.](media/secure-external-access/6-catalog-management.png) ++### Enforce access package expiration ++You can enforce access expiration for external users. See, [Change lifecycle settings for an access package in entitlement management](../governance/entitlement-management-access-package-lifecycle-policy.md). ++ ![Screenshot of options and entries for Expiration.](media/secure-external-access/6-access-package-expiration.png) ++* For the end date of a project-based access package, use **On date** to set the date. + * Otherwise, we recommend an expiration of no longer than 365 days, unless it's a multi-year project +* Allow users to extend access + * Require approval to grant the extension ++### Enforce guest-access package reviews ++You can enforce reviews of guest-access packages to avoid inappropriate guest access. See, [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md). ++ ![Screenshot of options and entries under New access package.](media/secure-external-access/6-new-access-package.png) ++* Enforce quarterly reviews +* For compliance-related projects, assign specific reviewers rather than allowing external users to self-review. + * You can use access package managers as reviewers +* For less sensitive projects, self-review reduces the burden of removing access for users who have left the organization. ++Learn more: [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) (You're here) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. 
[Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) +++ + |
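To support the expiration and review recommendations above, the current assignments and their end dates can be pulled for reporting. This is a sketch only: it assumes the `EntitlementManagement.Read.All` permission, reads one page of results, and the expanded property paths should be checked against the Graph reference for your API version.

```powershell
# Sketch: list access package assignments, who holds them, and when they expire.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$response = Invoke-MgGraphRequest -Method GET `
    -Uri 'v1.0/identityGovernance/entitlementManagement/assignments?$expand=accessPackage,target'

# First page only; follow @odata.nextLink for a full report
$response.value |
    Select-Object @{ n = 'AccessPackage'; e = { $_.accessPackage.displayName } },
                  @{ n = 'AssignedTo';    e = { $_.target.displayName } },
                  @{ n = 'State';         e = { $_.state } },
                  @{ n = 'Expires';       e = { $_.schedule.expiration.endDateTime } } |
    Sort-Object Expires
```

Assignments with no end date are the ones to cover with the regular access reviews recommended above.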
active-directory | 7 Secure Access Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/7-secure-access-conditional-access.md | + + Title: Manage external access to resources with Conditional Access +description: Learn to use Conditional Access policies to secure external access to resources. +++++++ Last updated : 02/23/2023+++++++# Manage external access to resources with Conditional Access policies ++Conditional Access interprets signals, enforces policies, and determines if a user is granted access to resources. In this article, learn about applying Conditional Access policies to external users. The article assumes you might not have access to entitlement management, a feature you can use with Conditional Access. ++Learn more: ++* [What is Conditional Access?](../conditional-access/overview.md) +* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md) +* [What is entitlement management?](../governance/entitlement-management-overview.md) ++The following diagram illustrates signals to Conditional Access that trigger access processes. ++ ![Diagram of Conditional Access signal input and resulting access processes.](media/secure-external-access//7-conditional-access-signals.png) ++## Before you begin ++This article is number 7 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Align a security plan with Conditional Access policies ++In the third article, in the set of 10 articles, there's guidance on creating a security plan. Use that plan to help create Conditional Access policies for external access. Part of the security plan includes: ++* Grouped applications and resources for simplified access +* Sign-in requirements for external users ++> [!IMPORTANT] +> Create internal and external user test accounts to test policies before applying them. ++See article three, [Create a security plan for external access to resources](3-secure-access-plan.md) ++## Conditional Access policies for external access ++The following sections are best practices for governing external access with Conditional Access policies. ++### Entitlement management or groups ++If you canΓÇÖt use connected organizations in entitlement management, create an Azure AD security group, or Microsoft 365 Group for partner organizations. Assign users from that partner to the group. You can use the groups in Conditional Access policies. ++Learn more: ++* [What is entitlement management?](../governance/entitlement-management-overview.md) +* [Manage Azure Active Directory groups and group membership](../fundamentals/how-to-manage-groups.md) +* [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide&preserve-view=true) ++### Conditional Access policy creation ++Create as few Conditional Access policies as possible. For applications that have the same access requirements, add them to the same policy. ++Conditional Access policies apply to a maximum of 250 applications. If more than 250 applications have the same access requirement, create duplicate policies. For instance, Policy A applies to apps 1-250, Policy B applies to apps 251-500, etc. ++### Naming convention ++Use a naming convention that clarifies policy purpose. 
External access examples are: ++* ExternalAccess_actiontaken_AppGroup +* ExternalAccess_Block_FinanceApps ++## Block external users from resources ++You can block external users from accessing resources with Conditional Access policies. ++1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator. +2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +3. Select **New policy**. +4. Enter a policy name. +5. Under **Assignments**, select **Users or workload identities**. +6. Under **Include**, select **All guests and external users**. +7. Under **Exclude**, select **Users and groups**. +8. Select emergency access accounts. +9. Select **Done**. +10. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. +11. Under **Exclude**, select applications you want to exclude. +12. Under **Access controls** > **Grant**, select **Block access**. +13. Select **Select**. +14. Set **Enable policy** to **Report-only**. +15. Select **Create**. ++> [!NOTE] +> You can confirm settings in **report-only** mode. See, Configure a Conditional Access policy in report-only mode, in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). ++Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md) ++### Allow external access to specific external users ++There are scenarios in which it's necessary to allow access for a small, specific group. ++Before you begin, we recommend you create a security group that contains the external users who access resources. See, [Quickstart: Create a group with members and view all groups and members in Azure AD](../fundamentals/groups-view-azure-portal.md). ++1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator. +2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +3. Select **New policy**. +4. Enter a policy name. +5. Under **Assignments**, select **Users or workload identities**. +6. Under **Include**, select **All guests and external users**. +7. Under **Exclude**, select **Users and groups**. +8. Select emergency access accounts. +9. Select the external users security group. +10. Select **Done**. +11. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. +12. Under **Exclude**, select applications you want to exclude. +13. Under **Access controls** > **Grant**, select **Block access**. +14. Select **Select**. +15. Select **Create**. ++> [!NOTE] +> You can confirm settings in **report-only** mode. See, Configure a Conditional Access policy in report-only mode, in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). ++Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md) ++### Service provider access ++Conditional Access policies for external users might interfere with service provider access, for example, granular delegated admin privileges. ++Learn more: [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction) ++## Conditional Access templates ++Conditional Access templates are a convenient method to deploy new policies aligned with Microsoft recommendations. 
These templates provide protection aligned with commonly used policies across various customer types and locations. ++Learn more: [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) (You're here) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
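The portal procedure above maps to a single Conditional Access policy object, so the same policy can be created in report-only mode with the Microsoft Graph PowerShell SDK. In this sketch the display name follows the ExternalAccess_ convention from earlier, the excluded group ID is a placeholder for your emergency access accounts, and the payload should be validated against the Conditional Access API reference before you enforce it.

```powershell
# Sketch: block guests and external users from all cloud apps, in report-only mode.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "ExternalAccess_Block_AllApps"
    state       = "enabledForReportingButNotEnforced"     # report-only
    conditions  = @{
        users = @{
            includeUsers  = @("GuestsOrExternalUsers")
            excludeGroups = @("<emergency-access-accounts-group-id>")   # placeholder
        }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Review the report-only sign-in results before switching `state` to `enabled`.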
active-directory | 8 Secure Access Sensitivity Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/8-secure-access-sensitivity-labels.md | + + Title: Control external access to resources in Azure Active Directory with sensitivity labels +description: Use sensitivity labels as a part of your overall security plan for external access +++++++ Last updated : 02/23/2023+++++++# Control external access to resources in Azure Active Directory with sensitivity labels ++Use sensitivity labels to help control access to your content in Office 365 applications, and in containers like Microsoft Teams, Microsoft 365 Groups, and SharePoint sites. They protect content without hindering user collaboration. Use sensitivity labels to send organization-wide content across devices, apps, and services, while protecting data. Sensitivity labels help organizations meet compliance and security policies. + +See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true) ++## Before you begin ++This article is number 8 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## Assign classification and enforce protection settings ++You can classify content without adding any protection settings. Content classification assignment stays with the content while itΓÇÖs used and shared. The classification generates usage reports with sensitive-content activity data. ++Enforce protection settings such as encryption, watermarks, and access restrictions. For example, users apply a Confidential label to a document or email. The label can encrypt the content and add a Confidential watermark. In addition, you can apply a sensitivity label to a container like a SharePoint site, and help manage external users access. ++Learn more: ++* [Restrict access to content by using sensitivity labels to apply encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide&preserve-view=true) +* [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 Groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) ++Sensitivity labels on containers can restrict access to the container, but content in the container doesn't inherit the label. For example, a user takes content from a protected site, downloads it, and then shares it without restrictions, unless the content had a sensitivity label. ++ >[!NOTE] +>To apply sensitivity labels users sign into their Microsoft work or school account. ++## Permissions to create and manage sensitivity levels ++Team members who need to create sensitivity labels require permissions to: ++* Microsoft 365 Defender portal, +* Microsoft Purview compliance portal, or +* [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center?view=o365-worldwide&preserve-view=true) ++By default, tenant Global Administrators have access to admin centers and can provide access, without granting tenant Admin permissions. For this delegated limited admin access, add users to the following role groups: ++* Compliance Data Administrator, +* Compliance Administrator, or +* Security Administrator ++## Sensitivity label strategy ++As you plan the governance of external access to your content, consider content, containers, email, and more. 
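The role groups above also allow label management from Security & Compliance PowerShell, which some teams prefer for repeatable deployments. A minimal sketch, assuming the ExchangeOnlineManagement module and a compliance admin role; the label and policy names are placeholders, and protection settings such as encryption or content marking are configured with additional parameters documented in the cmdlet reference.

```powershell
# Sketch: create a Confidential label and publish it with a label policy.
# Requires the ExchangeOnlineManagement module and a compliance admin role.
Connect-IPPSSession

# Create the label (names and tooltip are placeholders)
New-Label -Name "Confidential-External" `
    -DisplayName "Confidential" `
    -Tooltip "Content that must not be shared outside approved partners"

# Publish the label so users and groups can apply it
New-LabelPolicy -Name "Confidential-External-Policy" -Labels "Confidential-External"
```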
++### High, Medium, or Low Business Impact ++To define High, Medium, or Low Business Impact (HBI, MBI, LBI) for data, sites, and groups, consider the effect on your organization if the wrong content types are shared. ++* Credit card, passport, national/regional ID numbers + * [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide&preserve-view=true) +* Content created by corporate officers: compliance, finance, executive, etc. +* Strategic or financial data in libraries or sites. ++Consider the content categories that external users can't have access to, such as containers and encrypted content. You can use sensitivity labels, enforce encryption, or use container access restrictions. ++### Email and content ++Sensitivity labels can be applied automatically or manually to content. ++See, [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide&preserve-view=true) ++#### Sensitivity labels on email and content ++A sensitivity label in a document or email is customizable, clear text, and persistent. ++* **Customizable** - create labels for your organization and determine the resulting actions +* **Clear text** - is incorporated in metadata and readable by applications and services +* **Persistency** - ensures the label and associated protections stay with the content, and help enforce policies ++> [!NOTE] +> Each content item can have one sensitivity label applied. ++### Containers ++Determine the access criteria if Microsoft 365 Groups, Teams, or SharePoint sites are restricted with sensitivity labels. You can label content in containers or use automatic labeling for files in SharePoint, OneDrive, etc. ++Learn more: [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide&preserve-view=true) ++#### Sensitivity labels on containers ++You can apply sensitivity labels to containers such as Microsoft 365 Groups, Microsoft Teams, and SharePoint sites. Sensitivity labels on a supported container apply the classification and protection settings to the connected site or group. Sensitivity labels on these containers can control: ++* **Privacy** - select the users who can see the site +* **External user access** - determine if group owners can add guests to a group +* **Access from unmanaged devices** - decide if and how unmanaged devices access content ++ ![Screenshot of options and entries under Site and group settings.](media/secure-external-access/8-edit-label.png) ++Sensitivity labels applied to a container, such as a SharePoint site, aren't applied to content in the container; they control access to content in the container. Labels can be applied automatically to the content in the container. For users to manually apply labels to content, enable sensitivity labels for Office files in SharePoint and OneDrive. ++Learn more: ++* [Enable sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide&preserve-view=true). 
+* [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 Groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) +* [Assign sensitivity labels to Microsoft 365 groups in Azure AD](../enterprise-users/groups-assign-sensitivity-labels.md) ++### Implement sensitivity labels ++After you determine use of sensitivity labels, see the following documentation for implementation. ++* [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide&preserve-view=true) +* [Create and publish sensitivity labels](/microsoft-365/compliance/create-sensitivity-labels?view=o365-worldwide&preserve-view=true) +* [Restrict access to content by using sensitivity labels to apply encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide&preserve-view=true) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) (You're here) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
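As a programmatic counterpart to the container guidance above, an existing, published sensitivity label can be stamped onto a Microsoft 365 group by updating the group's `assignedLabels` property. A sketch assuming delegated `Group.ReadWrite.All` permissions; the group ID and label GUID are placeholders.

```powershell
# Sketch: assign a published sensitivity label to a Microsoft 365 group.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

$groupId = "<microsoft-365-group-id>"     # placeholder
$labelId = "<sensitivity-label-guid>"     # placeholder, from your published labels

$body = @{ assignedLabels = @( @{ labelId = $labelId } ) } | ConvertTo-Json -Depth 3

Invoke-MgGraphRequest -Method PATCH -Uri "v1.0/groups/$groupId" `
    -Body $body -ContentType "application/json"
```

The label's container settings (privacy, external user access, unmanaged-device access) then apply to the connected team or site.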
active-directory | 9 Secure Access Teams Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/9-secure-access-teams-sharepoint.md | + + Title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory +description: Secure access to Microsoft 365 services as a part of your external access security plan +++++++ Last updated : 02/28/2023+++++++# Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory ++Use this article to determine and configure your organization's external collaboration using Microsoft Teams, OneDrive for Business, and SharePoint. A common challenge is balancing security and ease of collaboration for end users and external users. If an approved collaboration method is perceived as restrictive and onerous, end users evade the approved method. End users might email unsecured content, or set up external processes and applications, such as a personal DropBox or OneDrive. ++## Before you begin ++This article is number 9 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series. ++## External Identities settings and Azure Active Directory ++Sharing in Microsoft 365 is partially governed by the **External Identities, External collaboration settings** in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides sharing settings configured in Microsoft 365. An exception is if Azure AD B2B integration isn't enabled. You can configure SharePoint and OneDrive to support ad-hoc sharing via one-time password (OTP). The following screenshot shows the External Identities, External collaboration settings dialog. +++Learn more: ++* [Azure portal](https://portal.azure.com/) +* [External Identities in Azure AD](../external-identities/external-identities-overview.md) ++### Guest user access ++Guest users are invited to have access to resources. ++1. Sign in to the **Azure portal** +1. Browse to **Azure Active Directory** > **External Identities** > **External collaboration settings**. +1. Find the **Guest user access** options. +1. To prevent guest-user access to other guest-user details, and to prevent enumeration of group membership, select **Guest users have limited access to properties and memberships of directory objects**. ++### Guest invite settings ++Guest invite settings determine who invites guests and how guests are invited. The settings are enabled if the B2B integration is enabled. It's recommended that administrators and users, in the Guest Inviter role, can invite. This setting allows setup of controlled collaboration processes. For example: ++* Team owner submits a ticket requesting assignment to the Guest Inviter role: + * Responsible for guest invitations + * Agrees to not add users to SharePoint + * Performs regular access reviews + * Revokes access as needed ++* The IT team: + * After training is complete, the IT team grants the Guest Inviter role + * Ensures there are sufficient Azure AD Premium P2 licenses for the Microsoft 365 group owners who will review + * Creates a Microsoft 365 group access review + * Confirms access reviews occur + * Removes users added to SharePoint ++1. Select the banner for **Email one-time passcodes for guests**. +2. For **Enable guest self-service sign up via user flows**, select **Yes**. 
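The External collaboration settings above are also exposed in Microsoft Graph on the tenant's authorization policy, which makes them easy to audit across tenants. This sketch reads the policy and then limits invitations to admins and Guest Inviters; the property names and values follow the documented `authorizationPolicy` resource, but verify them for your API version, and remember the change affects every invitation flow.

```powershell
# Sketch: review and tighten who can invite guests, tenant-wide.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# Current settings, including allowInvitesFrom and guestUserRoleId
Invoke-MgGraphRequest -Method GET -Uri "v1.0/policies/authorizationPolicy"

# Limit invitations to admins and users holding the Guest Inviter role
$body = @{ allowInvitesFrom = "adminsAndGuestInviters" } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH -Uri "v1.0/policies/authorizationPolicy" `
    -Body $body -ContentType "application/json"
```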
++### Collaboration restrictions ++For the Collaboration restrictions option, the organization's business requirements dictate the choice of invitation. ++* **Allow invitations to be sent to any domain (most inclusive)** - any user can be invited +* **Deny invitations to the specified domains** - any user outside those domains can be invited +* **Allow invitations only to the specified domains (most restrictive)** - any user outside those domains can't be invited ++## External users and guest users in Teams ++Teams differentiates between external users (outside your organization) and guest users (guest accounts). You can manage collaboration setting in the [Microsoft Teams admin center](https://admin.teams.microsoft.com/company-wide-settings/external-communications) under Org-wide settings. Authorized account credentials are required to sign in to the Teams Admin portal. ++* **External Access** - Teams allows external access by default. The organization can communicate with all external domains + * Use External Access setting to restrict or allow domains +* **Guest Access** - manage guest access in Teams ++Learn more: [Use guest access and external access to collaborate with people outside your organization](/microsoftteams/communicate-with-users-from-other-organizations). ++The External Identities collaboration feature in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings. ++Learn more: ++* [Manage external meetings and chat in Microsoft Teams](/microsoftteams/manage-external-access) +* [Step 1. Determine your cloud identity model](/microsoft-365/enterprise/about-microsoft-365-identity) +* [Identity models and authentication for Microsoft Teams](/microsoftteams/identify-models-authentication) +* [Sensitivity labels for Microsoft Teams](/microsoftteams/sensitivity-labels) ++## Govern access in SharePoint and OneDrive ++SharePoint administrators can find organization-wide settings in the SharePoint admin center. It's recommended that your organization-wide settings are the minimum security levels. Increase security on some sites, as needed. For example, for a high-risk project, restrict users to certain domains, and disable members from inviting guests. ++Learn more: +* [SharePoint admin center](https://microsoft-admin.sharepoint.com) - access permissions are required +* [Get started with the SharePoint admin center](/sharepoint/get-started-new-admin-center) +* [External sharing overview](/sharepoint/external-sharing-overview) ++### Integrating SharePoint and OneDrive with Azure AD B2B ++As a part of your strategy to govern external collaboration, it's recommended you enable SharePoint and OneDrive integration with Azure AD B2B. Azure AD B2B has guest-user authentication and management. With SharePoint and OneDrive integration, use one-time passcodes for external sharing of files, folders, list items, document libraries, and sites. ++Learn more: +* [Email one-time passcode authentication](../external-identities/one-time-passcode.md) +* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration) +* [B2B collaboration overview](../external-identities/what-is-b2b.md) ++If you enable Azure AD B2B integration, then SharePoint and OneDrive sharing is subject to the Azure AD organizational relationships settings, such as **Members can invite** and **Guests can invite**. 
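Enabling the integration is a small change in the SharePoint Online Management Shell. A minimal sketch, assuming the Microsoft.Online.SharePoint.PowerShell module and SharePoint Administrator rights; the admin center URL is a placeholder for your tenant.

```powershell
# Sketch: turn on SharePoint and OneDrive integration with Azure AD B2B.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"   # placeholder URL

# Route external sharing through Azure AD B2B guests and one-time passcodes
Set-SPOTenant -EnableAzureADB2BIntegration $true

# Confirm the setting and the tenant-wide sharing capability
Get-SPOTenant | Select-Object EnableAzureADB2BIntegration, SharingCapability
```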
++### Sharing policies in SharePoint and OneDrive ++In the Azure portal, you can use the External Sharing settings for SharePoint and OneDrive to help configure sharing policies. OneDrive restrictions can't be more permissive than SharePoint settings. ++Learn more: [External sharing overview](/sharepoint/external-sharing-overview) ++ ![Screenshot of external sharing settings for SharePoint and OneDrive.](media/secure-external-access/9-sharepoint-settings.png) ++#### External sharing settings recommendations ++Use the guidance in this section when configuring external sharing. ++* **Anyone** - Not recommended. If enabled, regardless of integration status, no Azure policies are applied for this link type. + * Don't enable this functionality for governed collaboration + * Use it for restrictions on individual sites +* **New and existing guests** - Recommended, if integration is enabled + * Azure AD B2B integration enabled: new and current guests have an Azure AD B2B guest account you can manage with Azure AD policies + * Azure AD B2B integration not enabled: new guests don't have an Azure AD B2B account, and can't be managed from Azure AD + * Guests have an Azure AD B2B account, depending on how the guest was created +* **Existing guests** - Recommended, if you don't have integration enabled + * With this option enabled, users can share with other users in your directory +* **Only people in your organization** - Not recommended with external user collaboration + * Regardless of integration status, users can share with other users in your organization +* **Limit external sharing by domain** - By default, SharePoint allows external access. Sharing is allowed with external domains. + * Use this option to restrict or allow domains for SharePoint +* **Allow only users in specific security groups to share externally** - Use this setting to restrict who shares content in SharePoint and OneDrive. The setting in Azure AD applies to all applications. Use the restriction to direct users to training about secure sharing. Completion is the signal to add them to a sharing security group. If this setting is selected, and users can't become an approved sharer, they might find unapproved ways to share. +* **Allow guests to share items they donΓÇÖt own** - Not recommended. The guidance is to disable this feature. +* **People who use a verification code must reauthenticate after this many days (default is 30)** - Recommended ++### Access controls ++Access controls setting affect all users in your organization. Because you might not be able to control whether external users have compliant devices, the controls won't be addressed in this article. ++* **Idle session sign-out** - Recommended + * Use this option to warn and sign out users on unmanaged devices, after a period of inactivity + * You can configure the period of inactivity and the warning +* **Network location** - Set this control to allow access from IP addresses your organization owns. + * For external collaboration, set this control if your external partners access resources when in your network, or with your virtual private network (VPN). ++### File and folder links ++In the SharePoint admin center, you can set how file and folder links are shared. You can configure the setting for each site. ++ ![Screenshot of File and folder links options.](media/secure-external-access/9-file-folder-links.png) ++With Azure AD B2B integration enabled, sharing files and folders with users outside the organization results in the creation of a B2B user. ++1. 
For **Choose the type of link that's selected by default when users share files and folders in SharePoint and OneDrive**, select **Only people in your organization**. +2. For **Choose the permission that's selected by default for sharing links**, select **Edit**. ++You can customize this setting for a per-site default. ++### Anyone links ++Enabling Anyone links isn't recommended. If you enable it, set an expiration, and restrict users to view permissions. If you select View only permissions for files or folders, users can't change Anyone links to include edit privileges. ++Learn more: ++* [External sharing overview](/sharepoint/external-sharing-overview) +* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration) ++## Next steps ++Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order. ++1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) ++2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) ++3. [Create a security plan for external access to resources](3-secure-access-plan.md) ++4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) ++5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) ++6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) ++7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) ++8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) ++9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) (You're here) ++10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) |
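The default link type and permission described above can also be standardized from the SharePoint Online Management Shell, which is useful when you manage more than one tenant. A sketch under the assumption that organization-internal, view-only links are your baseline; the expiration value only matters if Anyone links stay enabled, and the admin URL is a placeholder.

```powershell
# Sketch: default new sharing links to "Only people in your organization", view-only.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"   # placeholder URL

Set-SPOTenant -DefaultSharingLinkType Internal `
              -DefaultLinkPermission View

# If Anyone links remain enabled, force them to expire (30 days is an example)
Set-SPOTenant -RequireAnonymousLinksExpireInDays 30
```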
active-directory | Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture.md | + + Title: Architecture overview +description: Learn what an Azure Active Directory tenant is and how to manage Azure using Azure Active Directory. ++++++++ Last updated : 08/17/2022+++++++# What is the Azure Active Directory architecture? ++Azure Active Directory (Azure AD) enables you to securely manage access to Azure services and resources for your users. Included with Azure AD is a full suite of identity management capabilities. For information about Azure AD features, see [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md) ++With Azure AD, you can create and manage users and groups, and enable permissions to allow and deny access to enterprise resources. For information about identity management, see [The fundamentals of Azure identity management](../fundamentals/active-directory-whatis.md). ++## Azure AD architecture ++Azure AD's geographically distributed architecture combines extensive monitoring, automated rerouting, failover, and recovery capabilities, which deliver company-wide availability and performance to customers. ++The following architecture elements are covered in this article: ++* Service architecture design +* Scalability +* Continuous availability +* Datacenters ++### Service architecture design ++The most common way to build an accessible and usable, data-rich system is through independent building blocks or scale units. For the Azure AD data tier, scale units are called *partitions*. ++The data tier has several front-end services that provide read-write capability. The diagram below shows how the components of a single-directory partition are delivered throughout geographically distributed datacenters. ++ ![Single-directory partition diagram](./media/architecture/active-directory-architecture.png) ++The components of Azure AD architecture include a primary replica and secondary replicas. ++#### Primary replica ++The *primary replica* receives all *writes* for the partition it belongs to. Any write operation is immediately replicated to a secondary replica in a different datacenter before returning success to the caller, thus ensuring geo-redundant durability of writes. ++#### Secondary replicas ++All directory *reads* are serviced from *secondary replicas*, which are at datacenters that are physically located across different geographies. There are many secondary replicas, as data is replicated asynchronously. Directory reads, such as authentication requests, are serviced from datacenters that are close to customers. The secondary replicas are responsible for read scalability. ++### Scalability ++Scalability is the ability of a service to expand to meet increasing performance demands. Write scalability is achieved by partitioning the data. Read scalability is achieved by replicating data from one partition to multiple secondary replicas distributed throughout the world. ++Requests from directory applications are routed to the closest datacenter. Writes are transparently redirected to the primary replica to provide read-write consistency. Secondary replicas significantly extend the scale of partitions because the directories are typically serving reads most of the time. ++Directory applications connect to the nearest datacenters. This connection improves performance, and therefore scaling out is possible. 
Since a directory partition can have many secondary replicas, secondary replicas can be placed closer to the directory clients. Only internal directory service components that are write-intensive target the active primary replica directly. ++### Continuous availability ++Availability (or uptime) defines the ability of a system to perform uninterrupted. The key to Azure AD's high-availability is that the services can quickly shift traffic across multiple geographically distributed datacenters. Each datacenter is independent, which enables de-correlated failure modes. Through this high availability design, Azure AD requires no downtime for maintenance activities. ++Azure AD's partition design is simplified compared to the enterprise AD design, using a single-master design that includes a carefully orchestrated and deterministic primary replica failover process. ++#### Fault tolerance ++A system is more available if it is tolerant to hardware, network, and software failures. For each partition on the directory, a highly available master replica exists: The primary replica. Only writes to the partition are performed at this replica. This replica is being continuously and closely monitored, and writes can be immediately shifted to another replica (which becomes the new primary) if a failure is detected. During failover, there could be a loss of write availability, typically lasting 1-2 minutes. Read availability isn't affected during this time. ++Read operations (which outnumber writes by many orders of magnitude) only go to secondary replicas. Since secondary replicas are idempotent, loss of any one replica in a given partition is easily compensated by directing the reads to another replica, usually in the same datacenter. ++#### Data durability ++A write is durably committed to at least two datacenters prior to it being acknowledged. This happens by first committing the write on the primary, and then immediately replicating the write to at least one other datacenter. This write action ensures that a potential catastrophic loss of the datacenter hosting the primary doesn't result in data loss. ++Azure AD maintains a zero [Recovery Time Objective (RTO)](https://en.wikipedia.org/wiki/Recovery_time_objective) to not lose data on failovers. This includes: ++* Token issuance and directory reads +* Allowing only about 5 minutes RTO for directory writes ++### Datacenters ++Azure AD's replicas are stored in datacenters located throughout the world. For more information, see [Azure global infrastructure](https://azure.microsoft.com/global-infrastructure/). ++Azure AD operates across datacenters with the following characteristics: ++* Authentication, Graph, and other AD services reside behind the Gateway service. The Gateway manages load balancing of these services. It will fail over automatically if any unhealthy servers are detected using transactional health probes. Based on these health probes, the Gateway dynamically routes traffic to healthy datacenters. +* For *reads*, the directory has secondary replicas and corresponding front-end services in an active-active configuration operating in multiple datacenters. If a datacenter fails, traffic is automatically routed to a different datacenter. +* For *writes*, the directory will fail over the primary replica across datacenters via planned (new primary is synchronized to old primary) or emergency failover procedures. Data durability is achieved by replicating any commit to at least two datacenters. 
++#### Data consistency ++The directory model is one of eventual consistency. One typical problem with distributed asynchronously replicating systems is that the data returned from a "particular" replica may not be up-to-date. ++Azure AD provides read-write consistency for applications targeting a secondary replica by routing their writes to the primary replica, and synchronously pulling the writes back to the secondary replica. ++Application writes using the Microsoft Graph API of Azure AD are abstracted from maintaining affinity to a directory replica for read-write consistency. The Microsoft Graph API service maintains a logical session, which has affinity to a secondary replica used for reads; affinity is captured in a "replica token" that the service caches using a distributed cache in the secondary replica datacenter. This token is then used for subsequent operations in the same logical session. To continue using the same logical session, subsequent requests must be routed to the same Azure AD datacenter. It isn't possible to continue a logical session if the directory client requests are being routed to multiple Azure AD datacenters; if this happens then the client has multiple logical sessions that have independent read-write consistencies. ++ >[!NOTE] + >Writes are immediately replicated to the secondary replica to which the logical session's reads were issued. ++#### Service-level backup ++Azure AD implements daily backup of directory data and can use these backups to restore data if there is any service-wide issue. + +The directory also implements soft deletes instead of hard deletes for selected object types. The tenant administrator can undo any accidental deletions of these objects within 30 days. For more information, see the [API to restore deleted objects](/graph/api/directory-deleteditems-restore). ++#### Metrics and monitors ++Running a high availability service requires world-class metrics and monitoring capabilities. Azure AD continually analyzes and reports key service health metrics and success criteria for each of its services. There is also continuous development and tuning of metrics and monitoring and alerting for each scenario, within each Azure AD service and across all services. ++If any Azure AD service isn't working as expected, action is immediately taken to restore functionality as quickly as possible. The most important metric Azure AD tracks is how quickly live site issues can be detected and mitigated for customers. We invest heavily in monitoring and alerts to minimize time to detect (TTD Target: <5 minutes) and operational readiness to minimize time to mitigate (TTM Target: <30 minutes). ++#### Secure operations ++Azure AD secures operations by using operational controls such as multi-factor authentication (MFA) for any operation, and by auditing all operations. In addition, a just-in-time elevation system grants necessary temporary access for operational tasks on demand, on an ongoing basis. For more information, see [The Trusted Cloud](https://azure.microsoft.com/support/trust-center). ++## Next steps ++[Azure Active Directory developer's guide](../develop/index.yml) |
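Relating to the soft-delete behavior described in the service-level backup section above, here is a minimal sketch of listing and restoring deleted objects through Microsoft Graph, assuming the Microsoft Graph PowerShell SDK and sufficient directory permissions; the object ID is a placeholder.

```powershell
# Sketch only: soft-deleted objects can be restored within the 30-day window.
Connect-MgGraph -Scopes "Directory.ReadWrite.All"

# List recently deleted users still inside the soft-delete window.
$deleted = Invoke-MgGraphRequest -Method GET -OutputType PSObject `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
$deleted.value | Select-Object id, displayName

# Restore a specific object by ID (placeholder GUID).
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/00000000-0000-0000-0000-000000000000/restore"
```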
active-directory | Auth Header Based | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-header-based.md | + + Title: Header-based authentication with Azure Active Directory +description: Architectural guidance on achieving header-based authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# Header-based authentication with Azure Active Directory ++Legacy applications commonly use header-based authentication. In this scenario, a user (or message originator) authenticates to an intermediary identity solution. The intermediary solution authenticates the user and propagates the required Hypertext Transfer Protocol (HTTP) headers to the destination web service. Azure Active Directory (Azure AD) supports this pattern via its Application Proxy service, and integrations with other network controller solutions. ++In our solution, Application Proxy provides remote access to the application, authenticates the user, and passes headers required by the application. ++## Use when ++Remote users need secure single sign-on (SSO) to on-premises applications that require header-based authentication. ++![Architectural image header-based authentication](./media/authentication-patterns/header-based-auth.png) ++## Components of system ++* **User**: Accesses legacy applications served by Application Proxy. ++* **Web browser**: The component that the user interacts with to access the external URL of the application. ++* **Azure AD**: Authenticates the user. ++* **Application Proxy service**: Acts as reverse proxy to send requests from the user to the on-premises application. It resides in Azure AD and can also enforce any conditional access policies. ++* **Application Proxy connector**: Installed on-premises on Windows servers to provide connectivity to the applications. It only uses outbound connections. Returns the response to Azure AD. ++* **Legacy applications**: Applications that receive user requests from Application Proxy. The legacy application receives the required HTTP headers to set up a session and return a response. ++## Implement header-based authentication with Azure AD ++* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md) ++* [Header-based authentication for single sign-on with Application Proxy and PingAccess](../app-proxy/application-proxy-configure-single-sign-on-with-headers.md) ++* [Secure legacy apps with app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) |
active-directory | Auth Kcd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-kcd.md | + + Title: Kerberos constrained delegation with Azure Active Directory +description: Architectural guidance on achieving Kerberos constrained delegation with Azure Active Directory. +++++++ Last updated : 03/01/2023++++++# Windows authentication - Kerberos constrained delegation with Azure Active Directory ++Based on Service Principal Names, Kerberos Constrained Delegation (KCD) provides constrained delegation between resources. It requires domain administrators to create the delegations and is limited to a single domain. You can use resource-based KCD to provide Kerberos authentication for a web application that has users in multiple domains within an Active Directory forest. ++Azure Active Directory Application Proxy can provide single sign-on (SSO) and remote access to KCD-based applications that require a Kerberos ticket for access and Kerberos Constrained Delegation (KCD). ++To enable SSO to your on-premises KCD applications that use integrated Windows authentication (IWA), give Application Proxy connectors permission to impersonate users in Active Directory. The Application Proxy connector uses this permission to send and receive tokens on the users' behalf. ++## When to use KCD ++Use KCD when there's a need to provide remote access, protect with pre-authentication, and provide SSO to on-premises IWA applications. ++![Diagram of architecture](./media/authentication-patterns/kcd-auth.png) ++## Components of system ++* **User**: Accesses legacy application that Application Proxy serves. +* **Web browser**: The component that the user interacts with to access the external URL of the application. +* **Azure AD**: Authenticates the user. +* **Application Proxy service**: Acts as reverse proxy to send requests from the user to the on-premises application. It sits in Azure AD. Application Proxy can enforce conditional access policies. +* **Application Proxy connector**: Installed on on-premises Windows servers to provide connectivity to the application. Returns the response to Azure AD. Performs KCD negotiation with Active Directory, impersonating the user to get a Kerberos token to the application. +* **Active Directory**: Sends the Kerberos token for the application to the Application Proxy connector. +* **Legacy applications**: Applications that receive user requests from Application Proxy. The legacy applications return the response to the Application Proxy connector. ++## Implement Windows authentication (KCD) with Azure AD ++Explore the following resources to learn more about implementing Windows authentication (KCD) with Azure AD. ++* [Kerberos-based single sign-on (SSO) in Azure Active Directory with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md) describes prerequisites and configuration steps. +* The [Tutorial - Add an on-premises app - Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) helps you to prepare your environment for use with Application Proxy. ++## Next steps ++* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. 
Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning. +* [Understand single sign-on with an on-premises app using Application Proxy](../app-proxy/application-proxy-config-sso-how-to.md) describes how SSO allows your users to access an application without authenticating multiple times. SSO occurs in the cloud against Azure AD and allows the service or Connector to impersonate the user to complete authentication challenges from the application. +* [SAML single sign-on for on-premises apps with Azure Active Directory Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md) describes how you can provide remote access to on-premises applications that are secured with SAML authentication through Application Proxy. |
active-directory | Auth Ldap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-ldap.md | + + Title: LDAP authentication with Azure Active Directory +description: Architectural guidance on achieving LDAP authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# LDAP authentication with Azure Active Directory ++Lightweight Directory Access Protocol (LDAP) is an application protocol for working with various directory services. Directory services, such as Active Directory, [store user and account information](https://www.dnsstuff.com/active-directory-service-accounts), and security information like passwords. The service then allows the information to be shared with other devices on the network. Enterprise applications such as email, customer relationship managers (CRMs), and Human Resources (HR) software can use LDAP to authenticate, access, and find information. ++Azure Active Directory (Azure AD) supports this pattern via Azure AD Domain Services (AD DS). It allows organizations that are adopting a cloud-first strategy to modernize their environment by moving off their on-premises LDAP resources to the cloud. The immediate benefits will be: ++* Integrated with Azure AD. Additions of users and groups, or attribute changes to their objects are automatically synchronized from your Azure AD tenant to AD DS. Changes to objects in on-premises Active Directory are synchronized to Azure AD, and then to AD DS. ++* Simplified operations. Reduces the need to manually maintain and patch on-premises infrastructure. ++* Reliable. You get managed, highly available services. ++## Use when ++There is a need for an application or service to use LDAP authentication. ++![Diagram of architecture](./media/authentication-patterns/ldap-auth.png) ++## Components of system ++* **User**: Accesses LDAP-dependent applications via a browser. ++* **Web Browser**: The interface that the user interacts with to access the external URL of the application. ++* **Virtual Network**: A private network in Azure through which the legacy application can consume LDAP services. ++* **Legacy applications**: Applications or server workloads that require LDAP deployed either in a virtual network in Azure, or which have visibility to AD DS instance IPs via networking routes. ++* **Azure AD**: Synchronizes identity information from organization's on-premises directory via Azure AD Connect. ++* **Azure AD Domain Services (AD DS)**: Performs a one-way synchronization from Azure AD to provide access to a central set of users, groups, and credentials. The AD DS instance is assigned to a virtual network. Applications, services, and VMs in Azure that connect to the virtual network assigned to AD DS can use common AD DS features such as LDAP, domain join, group policy, Kerberos, and NTLM authentication. + > [!NOTE] + > In environments where the organization cannot synchronize password hashes, or users sign-in using smart cards, we recommend that you use a resource forest in AD DS. ++* **Azure AD Connect**: A tool for synchronizing on-premises identity information to Microsoft Azure AD. The deployment wizard and guided experiences help you configure prerequisites and components required for the connection, including sync and sign on from Active Directory to Azure AD. 
++* **Active Directory**: Directory service that stores [on-premises identity information such as user and account information](https://www.dnsstuff.com/active-directory-service-accounts), and security information like passwords. ++## Implement LDAP authentication with Azure AD ++* [Create and configure an Azure AD DS instance](../../active-directory-domain-services/tutorial-create-instance.md) ++* [Configure virtual networking for an Azure AD DS instance](../../active-directory-domain-services/tutorial-configure-networking.md) ++* [Configure Secure LDAP for an Azure AD DS managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md) ++* [Create an outbound forest trust to an on-premises domain in Azure AD DS](../../active-directory-domain-services/tutorial-create-forest-trust.md) + |
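As a quick sanity check after working through the secure LDAP tutorial linked above, the following is a minimal sketch that verifies the LDAPS port is reachable from a machine on (or routed to) the virtual network; the DNS name is a placeholder.

```powershell
# Sketch only: secure LDAP for Azure AD DS listens on TCP 636; the host name below is hypothetical.
Test-NetConnection -ComputerName "ldaps.aaddscontoso.com" -Port 636
```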
active-directory | Auth Oauth2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-oauth2.md | + + Title: OAUTH 2.0 authentication with Azure Active Directory +description: Architectural guidance on achieving OAUTH 2.0 authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# OAuth 2.0 authentication with Azure Active Directory ++OAuth 2.0 is the industry protocol for authorization. It allows a user to grant limited access to their protected resources. Designed to work specifically with Hypertext Transfer Protocol (HTTP), OAuth separates the role of the client from the resource owner. The client requests access to the resources controlled by the resource owner and hosted by the resource server. The resource server issues access tokens with the approval of the resource owner. The client uses the access tokens to access the protected resources hosted by the resource server. ++OAuth 2.0 is directly related to OpenID Connect (OIDC). Since OIDC is an authentication and authorization layer built on top of OAuth 2.0, it isn't backwards compatible with OAuth 1.0. Azure Active Directory (Azure AD) supports all OAuth 2.0 flows. ++## Use for: ++Rich client and modern app scenarios and RESTful web API access. ++![Diagram of architecture](./media/authentication-patterns/oauth.png) ++## Components of system ++* **User**: Requests a service from the web application (app). The user is typically the resource owner who owns the data and has the power to allow clients to access the data or resource. ++* **Web browser**: The web browser that the user interacts with is the OAuth client. ++* **Web app**: The web app, or resource server, is where the resource or data resides. It trusts the authorization server to securely authenticate and authorize the OAuth client. ++* **Azure AD**: Azure AD is the authorization server, also known as the Identity Provider (IdP). It securely handles anything to do with the user's information, their access, and the trust relationship. It's responsible for issuing the tokens that grant and revoke access to resources. ++## Implement OAuth 2.0 with Azure AD ++* [Integrating applications with Azure AD](../saas-apps/tutorial-list.md) ++* [OAuth 2.0 and OpenID Connect protocols on the Microsoft Identity Platform](../develop/active-directory-v2-protocols.md) ++* [Application types and OAuth2](../develop/v2-app-types.md) + |
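To make the roles above concrete, here is a minimal sketch of an OAuth 2.0 client credentials request against the Azure AD (Microsoft identity platform) token endpoint; the tenant ID, client ID, and secret are placeholders for an app registration you would create.

```powershell
# Sketch only: the client (here, a script) asks the authorization server (Azure AD) for an
# access token, then presents that token to the resource server (for example, Microsoft Graph).
$tenantId = "00000000-0000-0000-0000-000000000000"              # placeholder tenant ID
$body = @{
    client_id     = "11111111-1111-1111-1111-111111111111"      # placeholder app (client) ID
    client_secret = "<client-secret>"
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}

$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body

$token.access_token   # bearer token sent in the Authorization header to the resource server
```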
active-directory | Auth Oidc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-oidc.md | + + Title: OpenID Connect authentication with Azure Active Directory +description: Architectural guidance on achieving OpenID Connect authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# OpenID Connect authentication with Azure Active Directory ++OpenID Connect (OIDC) is an authentication protocol based on the OAuth2 protocol (which is used for authorization). OIDC uses the standardized message flows from OAuth2 to provide identity services. ++The design goal of OIDC is "making simple things simple and complicated things possible". OIDC lets developers authenticate their users across websites and apps without having to own and manage password files. This provides the app builder with a secure way to verify the identity of the person currently using the browser or native app that is connected to the application. ++The authentication of the user must take place at an identity provider where the user's session or credentials will be checked. To do that, you need a trusted agent. Native apps usually launch the system browser for that purpose. Embedded views are considered not trusted since there's nothing to prevent the app from snooping on the user password. ++In addition to authentication, the user can be asked for consent. Consent is the user's explicit permission to allow an application to access protected resources. Consent is different from authentication because consent only needs to be provided once for a resource. Consent remains valid until the user or admin manually revokes the grant. ++## Use when ++There is a need for user consent and for web sign in. ++![architectural diagram](./media/authentication-patterns/oidc-auth.png) ++## Components of system ++* **User**: Requests a service from the application. ++* **Trusted agent**: The component that the user interacts with. This trusted agent is usually a web browser. ++* **Application**: The application, or Resource Server, is where the resource or data resides. It trusts the identity provider to securely authenticate and authorize the trusted agent. ++* **Azure AD**: The OIDC provider, also known as the identity provider, securely manages anything to do with the user's information, their access, and the trust relationships between parties in a flow. It authenticates the identity of the user, grants and revokes access to resources, and issues tokens. ++## Implement OIDC with Azure AD ++* [Integrating applications with Azure AD](../saas-apps/tutorial-list.md) ++* [OAuth 2.0 and OpenID Connect protocols on the Microsoft Identity Platform](../develop/active-directory-v2-protocols.md) ++* [Microsoft identity platform and OpenID Connect protocol](../develop/v2-protocols-oidc.md) ++* [Web sign-in with OpenID Connect in Azure Active Directory B2C](../../active-directory-b2c/openid-connect.md) ++* [Secure your application by using OpenID Connect and Azure AD](/training/modules/secure-app-with-oidc-and-azure-ad/) |
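To illustrate how an OIDC sign-in request differs from a plain OAuth 2.0 authorization request, here is a minimal sketch that builds the authorization request URL the trusted agent (browser) is redirected to; the tenant ID, client ID, and redirect URI are placeholders.

```powershell
# Sketch only: the 'openid' scope is what makes this an OIDC authentication request;
# response_type=code selects the authorization code flow.
$tenantId    = "00000000-0000-0000-0000-000000000000"
$clientId    = "11111111-1111-1111-1111-111111111111"
$redirectUri = [uri]::EscapeDataString("https://localhost/signin-oidc")

$authorizeUrl = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/authorize" +
    "?client_id=$clientId&response_type=code&redirect_uri=$redirectUri" +
    "&scope=openid%20profile&state=12345&nonce=678910"

$authorizeUrl   # the browser is sent here; Azure AD returns an authorization code to the redirect URI
```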
active-directory | Auth Password Based Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-password-based-sso.md | + + Title: Password-based authentication with Azure Active Directory +description: Architectural guidance on achieving password-based authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# Password-based authentication with Azure Active Directory ++Password-based Single Sign-On (SSO) uses the existing authentication process for the application. When you enable password-based SSO, Azure Active Directory (Azure AD) collects, encrypts, and securely stores user credentials in the directory. Azure AD supplies the username and password to the application when the user attempts to sign in. ++Choose password-based SSO when an application authenticates with a username and password instead of access tokens and headers. Password-based SSO supports any cloud-based application that has an HTML-based sign-in page. ++## Use when ++You need to protect with pre-authentication and provide SSO through password vaulting to web apps. ++![architectural diagram](./media/authentication-patterns/password-based-sso-auth.png) +++## Components of system ++* **User**: Accesses a form-based application from either My Apps or by directly visiting the site. ++* **Web browser**: The component that the user interacts with to access the external URL of the application. The user accesses the form-based application via the MyApps extension. ++* **MyApps extension**: Identifies the configured password-based SSO application and supplies the credentials to the sign-in form. The MyApps extension is installed on the web browser. ++* **Azure AD**: Authenticates the user. ++## Implement password-based SSO with Azure AD ++* [What is password based SSO](../manage-apps/what-is-single-sign-on.md) ++* [Configure password based SSO for cloud applications](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md) ++* [Configure password-based SSO for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md) |
active-directory | Auth Passwordless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-passwordless.md | + + Title: Passwordless authentication with Azure Active Directory +description: Microsoft Azure Active Directory (Azure AD) enables integration with passwordless authentication protocols that include certificate-based authentication, passwordless security key sign-in, Windows Hello for Business, and passwordless sign-in with Microsoft Authenticator. +++++++ Last updated : 03/01/2023++++# Passwordless authentication with Azure Active Directory ++Microsoft Azure Active Directory (Azure AD) enables integration with the following passwordless authentication protocols. ++- [Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md): Azure AD certificate-based authentication (CBA) enables customers to allow or require users to authenticate directly with X.509 certificates against their Azure AD for applications and browser sign-in. This feature enables customers to adopt phishing resistant authentication and authenticate with an X.509 certificate against their Public Key Infrastructure (PKI). +- [Enable passwordless security key sign-in](../authentication/howto-authentication-passwordless-security-key.md): For enterprises that use passwords and have a shared PC environment, security keys provide a seamless way for workers to authenticate without entering a username or password. Security keys provide improved productivity for workers, and have better security. This article explains how to sign in to web-based applications with your Azure AD account using a FIDO2 security key. +- [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-overview): Windows Hello for Business replaces passwords with strong two-factor authentication on devices. This authentication consists of a type of user credential that is tied to a device and uses a biometric or PIN. +- [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md): Microsoft Authenticator can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. Windows Hello for Business uses a similar technology. Microsoft Authenticator can be used on any device platform, including mobile. Microsoft Authenticator can be used with any app or website that integrates with Microsoft Authentication Libraries. |
active-directory | Auth Prov Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-prov-overview.md | + + Title: Azure Active Directory synchronization protocol overview +description: Architectural guidance on integrating Azure AD with legacy synchronization protocols ++++++++ Last updated : 2/8/2023+++++++# Azure Active Directory integrations with synchronization protocols ++Microsoft Azure Active Directory (Azure AD) enables integration with many synchronization protocols. The synchronization integrations enable you to sync user and group data to Azure AD, and then use Azure AD management capabilities. Some sync patterns also enable automated provisioning. ++## Synchronization patterns ++The following table presents Azure AD integration with synchronization patterns and their capabilities. Select the name of a pattern to see: ++* A detailed description ++* When to use it ++* Architectural diagram ++* Explanation of system components ++* Links for how to implement the integration ++++| Synchronization pattern| Directory synchronization| User provisioning | +| - | - | - | +| [Directory synchronization](sync-directory.md)| ![check mark](./media/authentication-patterns/check.png)| | +| [LDAP Synchronization](sync-ldap.md)| ![check mark](./media/authentication-patterns/check.png)| | +| [SCIM synchronization](sync-scim.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | |
active-directory | Auth Radius | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-radius.md | + + Title: RADIUS authentication with Azure Active Directory +description: Architectural guidance on achieving RADIUS authentication with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# RADIUS authentication with Azure Active Directory ++Remote Authentication Dial-In User Service (RADIUS) is a network protocol that secures a network by enabling centralized authentication and authorization of dial-in users. Many applications still rely on the RADIUS protocol to authenticate users. ++Microsoft Windows Server has a role called the Network Policy Server (NPS), which can act as a RADIUS server and support RADIUS authentication. ++Azure Active Directory (Azure AD) enables Multi-factor authentication with RADIUS-based systems. If a customer wants to apply Azure AD Multi-Factor Authentication to any of the previously mentioned RADIUS workloads, they can install the Azure AD Multi-Factor Authentication NPS extension on their Windows NPS server. ++The Windows NPS server authenticates a user’s credentials against Active Directory, and then sends the Multi-Factor Authentication request to Azure. The user then receives a challenge on their mobile authenticator. Once successful, the client application is allowed to connect to the service. ++## Use when:  ++You need to add Multi-Factor Authentication to applications like +* a Virtual Private Network (VPN) +* WiFi access +* Remote Desktop Gateway (RDG) +* Virtual Desktop Infrastructure (VDI) +* Any others that depend on the RADIUS protocol to authenticate users into the service. ++> [!NOTE] +> Rather than relying on RADIUS and the Azure AD Multi-Factor Authentication NPS extension to apply Azure AD Multi-Factor Authentication to VPN workloads, we recommend that you upgrade your VPN’s to SAML and directly federate your VPN with Azure AD. This gives your VPN the full breadth of Azure AD protection, including Conditional Access, Multi-Factor Authentication, device compliance, and Identity Protection. ++![architectural diagram](./media/authentication-patterns/radius-auth.png) +++## Components of the system  ++* **Client application (VPN client)**: Sends authentication request to the RADIUS client. ++* **RADIUS client**: Converts requests from client application and sends them to RADIUS server that has the NPS extension installed. ++* **RADIUS server**: Connects with Active Directory to perform the primary authentication for the RADIUS request. Upon success, passes the request to Azure AD Multi-Factor Authentication NPS extension. ++* **NPS extension**: Triggers a request to Azure AD Multi-Factor Authentication for a secondary authentication. If successful, NPS extension completes the authentication request by providing the RADIUS server with security tokens that include Multi-Factor Authentication claim, issued by Azure’s Security Token Service. ++* **Azure AD Multi-Factor Authentication**: Communicates with Azure AD to retrieve the user’s details and performs a secondary authentication using a verification method configured by the user. 
++## Implement RADIUS with Azure AD ++* [Provide Azure AD Multi-Factor Authentication capabilities using NPS](../authentication/howto-mfa-nps-extension.md) ++* [Configure the Azure AD Multi-Factor Authentication NPS extension](../authentication/howto-mfa-nps-extension-advanced.md) ++* [VPN with Azure AD Multi-Factor Authentication using the NPS extension](../authentication/howto-mfa-nps-extension-vpn.md) |
active-directory | Auth Remote Desktop Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-remote-desktop-gateway.md | + + Title: Remote Desktop Gateway Services with Azure Active Directory +description: Architectural guidance on achieving Remote Desktop Gateway Services with Azure Active Directory. +++++++ Last updated : 03/01/2023++++++# Remote Desktop Gateway Services ++A standard Remote Desktop Services (RDS) deployment includes various [Remote Desktop role services](/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture) running on Windows Server. The RDS deployment with Azure Active Directory (Azure AD) Application Proxy has a permanent outbound connection from the server that is running the connector service. Other deployments leave open inbound connections through a load balancer. ++This authentication pattern allows you to offer more types of applications by publishing on-premises applications through Remote Desktop Services. It reduces the attack surface of your deployment by using Azure AD Application Proxy. ++## When to use Remote Desktop Gateway Services ++Use Remote Desktop Gateway Services when you need to provide remote access and protect your Remote Desktop Services deployment with pre-authentication. ++![architectural diagram](./media/authentication-patterns/rdp-auth.png) ++## System components ++* **User**: Accesses RDS served by Application Proxy. +* **Web browser**: The component that the user interacts with to access the external URL of the application. +* **Azure AD**: Authenticates the user. +* **Application Proxy service**: Acts as reverse proxy to forward requests from the user to RDS. Application Proxy can also enforce any Conditional Access policies. +* **Remote Desktop Services**: Acts as a platform for individual virtualized applications, providing secure mobile and remote desktop access. It provides end users with the ability to run their applications and desktops from the cloud. ++## Implement Remote Desktop Gateway services with Azure AD ++Explore the following resources to learn more about implementing Remote Desktop Gateway services with Azure AD. ++* [Publish Remote Desktop with Azure Active Directory Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md) describes how Remote Desktop Service and Azure AD Application Proxy work together to improve productivity of workers who are away from the corporate network. +* The [Tutorial - Add an on-premises app - Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) helps you to prepare your environment for use with Application Proxy. ++## Next steps ++* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning. +* [Remote Desktop Services architecture](/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture) describes configurations for deploying Remote Desktop Services to host Windows apps and desktops for end-users. |
active-directory | Auth Saml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-saml.md | + + Title: SAML authentication with Azure Active Directory +description: Architectural guidance on achieving SAML authentication with Azure Active Directory ++++++++ Last updated : 01/10/2023+++++++# SAML authentication with Azure Active Directory ++Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between an identity provider and a service provider. SAML is an XML-based markup language for security assertions, which are statements that service providers use to make access-control decisions. ++The SAML specification defines three roles: ++* The principal, generally a user +* The identity provider (IdP) +* The service provider (SP) +++## Use when ++There's a need to provide a single sign-on (SSO) experience for an enterprise SAML application. ++While one of most important use cases that SAML addresses is SSO, especially by extending SSO across security domains, there are other use cases (called profiles) as well. ++![architectural diagram for SAML](./media/authentication-patterns/saml-auth.png) ++## Components of system ++* **User**: Requests a service from the application. ++* **Web browser**: The component that the user interacts with. ++* **Web app**: Enterprise application that supports SAML and uses Azure AD as IdP. ++* **Token**: A SAML assertion (also known as SAML tokens) that carries sets of claims made by the IdP about the principal (user). It contains authentication information, attributes, and authorization decision statements. ++* **Azure AD**: Enterprise cloud IdP that provides SSO and Multi-factor authentication for SAML apps. It synchronizes, maintains, and manages identity information for users while providing authentication services to relying applications. ++## Implement SAML authentication with Azure AD ++* [Tutorials for integrating SaaS applications using Azure Active Directory](../saas-apps/tutorial-list.md) ++* [Configuring SAML based single sign-on for non-gallery applications](../manage-apps/add-application-portal.md) ++* [How Azure AD uses the SAML protocol](../develop/active-directory-saml-protocol-reference.md) |
active-directory | Auth Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-ssh.md | + + Title: SSH authentication with Azure Active Directory +description: Get architectural guidance on achieving SSH integration with Azure Active Directory. ++++++++ Last updated : 01/10/2023++++++# SSH authentication with Azure Active Directory ++Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. It's commonly used in systems like Unix and Linux. SSH replaces the Telnet protocol, which doesn't provide encryption in an unsecured network. ++Azure Active Directory (Azure AD) provides a virtual machine (VM) extension for Linux-based systems that run on Azure. It also provides a client extension that integrates with the [Azure CLI](/cli/azure/) and the OpenSSH client. ++You can use SSH authentication with Active Directory when you're: ++* Working with Linux-based VMs that require remote command-line sign-in. ++* Running remote commands in Linux-based systems. ++* Securely transferring files in an unsecured network. ++## Components of the system  ++The following diagram shows the process of SSH authentication with Azure AD: ++![Diagram of Azure AD with the SSH protocol.](./media/authentication-patterns/ssh-auth.png) ++The system includes the following components: ++* **User**: The user starts the Azure CLI and the SSH client to set up a connection with the Linux VMs. The user also provides credentials for authentication. ++* **Azure CLI**: The user interacts with the Azure CLI to start a session with Azure AD, request short-lived OpenSSH user certificates from Azure AD, and start the SSH session. ++* **Web browser**: The user opens a browser to authenticate the Azure CLI session. The browser communicates with the identity provider (Azure AD) to securely authenticate and authorize the user. ++* **OpenSSH client**: The Azure CLI (or the user) uses the OpenSSH client to start a connection to the Linux VM. ++* **Azure AD**: Azure AD authenticates the identity of the user and issues short-lived OpenSSH user certificates to the Azure CLI client. ++* **Linux VM**: The Linux VM accepts the OpenSSH user certificate and provides a successful connection. ++## Next steps ++* To implement SSH with Azure AD, see [Log in to a Linux VM by using Azure AD credentials](../devices/howto-vm-sign-in-azure-ad-linux.md). |
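For reference, a hedged sketch of the flow described above using the Azure CLI and its SSH extension from a PowerShell prompt; the resource group and VM names are placeholders, and the exact parameter names can vary between CLI and extension versions.

```powershell
# Sketch only: sign in with Azure AD, add the SSH extension once, then connect to the VM.
az login                        # opens a browser for Azure AD authentication
az extension add --name ssh     # one-time install of the SSH extension (assumes it isn't installed yet)
az ssh vm --resource-group "myResourceGroup" --name "myLinuxVM"
```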
active-directory | Auth Sync Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-sync-overview.md | + + Title: Azure Active Directory authentication and synchronization protocol overview +description: Architectural guidance on integrating Azure AD with legacy authentication protocols and sync patterns ++++++++ Last updated : 2/8/2023+++++++# Azure Active Directory integrations with authentication protocols ++Microsoft Azure Active Directory (Azure AD) enables integration with many authentication protocols. The authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. ++## Legacy authentication protocols ++The following table presents authentication Azure AD integration with legacy authentication protocols and their capabilities. Select the name of an authentication protocol to see ++* A detailed description ++* When to use it ++* Architectural diagram ++* Explanation of system components ++* Links for how to implement the integration ++ ++| Authentication protocol| Authentication| Authorization| Multi-factor Authentication| Conditional Access | +| - |- | - | - | - | +| [Header-based authentication](auth-header-based.md)|![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [LDAP authentication](auth-ldap.md)| ![check mark](./media/authentication-patterns/check.png)| | | | +| [OAuth 2.0 authentication](auth-oauth2.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [OIDC authentication](auth-oidc.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [Password based SSO authentication](auth-password-based-sso.md )| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [RADIUS authentication]( auth-radius.md)| ![check mark](./media/authentication-patterns/check.png)| | ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [Remote Desktop Gateway services](auth-remote-desktop-gateway.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [Secure Shell (SSH)](auth-ssh.md) | ![check mark](./media/authentication-patterns/check.png)| | ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [SAML authentication](auth-saml.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +| [Windows 
Authentication - Kerberos Constrained Delegation](auth-kcd.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) | +++++ |
active-directory | Automate Provisioning To Applications Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/automate-provisioning-to-applications-introduction.md | + + Title: Automate identity provisioning to applications introduction +description: Learn to design solutions to automatically provision identities in hybrid environments to provide application access. +++++++ Last updated : 09/23/2022+++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Introduction ++The article helps architects, Microsoft partners, and IT professionals with information addressing identity [provisioning](https://www.gartner.com/en/information-technology/glossary/user-provisioning) needs in their organizations, or the organizations they're working with. The content focuses on automating user provisioning for access to applications across all systems in your organization. ++Employees in an organization rely on many applications to perform their work. These applications often require IT admins or application owners to provision accounts before an employee can start accessing them. Organizations also need to manage the lifecycle of these accounts and keep them up to date with the latest information and remove accounts when users don't require them anymore. ++The Azure AD provisioning service automates your identity lifecycle and keeps identities in sync across trusted source systems (like HR systems) and applications that users need access to. It enables you to bring users into Azure AD and provision them into the various applications that they require. The provisioning capabilities are foundational building blocks that enable rich governance and lifecycle workflows. For [hybrid](../hybrid/whatis-hybrid-identity.md) scenarios, the Azure AD agent model connects to on-premises or IaaS systems, and includes components such as the Azure AD provisioning agent, Microsoft Identity Manager (MIM), and Azure AD Connect. ++Thousands of organizations are running Azure AD cloud-hosted services, with its hybrid components delivered on-premises, for provisioning scenarios. Microsoft invests in cloud-hosted and on-premises functionality, including MIM and Azure AD Connect sync, to help organizations provision users in their connected systems and applications. This article focuses on how organizations can use Azure AD to address their provisioning needs and make clear which technology is most right for each scenario. ++![Typical deployment of MIM](media/automate-user-provisioning-to-applications-introduction/typical-mim-deployment.png) ++ Use the following table to find content specific to your scenario. For example, if you want employee and contractor identities management from an HR system to Active Directory Domain Services (AD DS) or Azure Active Directory (Azure AD), follow the link to *Connect identities with your system of record*. 
++| What | From | To | Read | +| - | - | - | - | +| Employees and contractors| HR systems| AD and Azure AD| [Connect identities with your system of record](automate-provisioning-to-applications-solutions.md) | +| Existing AD users and groups| AD DS| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) | +| Users, groups| Azure AD| SaaS and on-prem apps| [Automate provisioning to non-Microsoft applications](../governance/entitlement-management-organization.md) | +| Access rights| Azure AD Identity Governance| SaaS and on-prem apps| [Entitlement management](../governance/entitlement-management-overview.md) | +| Existing users and groups| AD, SaaS and on-prem apps| Identity governance (so I can review them)| [Azure AD Access reviews](../governance/access-reviews-overview.md) | +| Non-employee users (with approval)| Other cloud directories| SaaS and on-prem apps| [Connected organizations](../governance/entitlement-management-organization.md) | +| Users, groups| Azure AD| Managed AD domain| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | ++## Example topologies ++Organizations vary greatly in the applications and infrastructure that they rely on to run their business. Some organizations have all their infrastructure in the cloud, relying solely on SaaS applications, while others have invested deeply in on-premises infrastructure over several years. The three topologies below depict how Microsoft can meet the needs of a cloud only customer, hybrid customer with basic provisioning requirements, and a hybrid customer with advanced provisioning requirements. ++### Cloud only ++In this example, the organization has a cloud HR system such as Workday or SuccessFactors, uses Microsoft 365 for collaboration, and SaaS apps such as ServiceNow and Zoom. ++![Cloud only deployment](media/automate-user-provisioning-to-applications-introduction/cloud-only-identity-management.png) ++1. The Azure AD provisioning service imports users from the cloud HR system and creates an account in Azure AD, based on business rules that the organization defines. ++1. The user sets up suitable authentication methods, such as the authenticator app, Fast Identity Online 2 (FIDO2)/Windows Hello for Business (WHfB) keys via [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) and then signs into Teams. This Temporary Access Pass was automatically generated for the user through Azure AD Life Cycle Workflows. ++1. The Azure AD provisioning service creates accounts in the various applications that the user needs, such as ServiceNow and Zoom. The user can request the devices they need and start chatting with their teams. ++### Hybrid-basic ++In this example, the organization has a mix of cloud and on-premises infrastructure. In addition to the systems mentioned above, the organization relies on SaaS applications and on-premises applications that are both AD integrated and non-AD integrated. ++![Hybrid deployment model](media/automate-user-provisioning-to-applications-introduction/hybrid-basic.png) ++1. The Azure AD provisioning service imports the user from Workday and creates an account in AD DS, enabling the user to access AD-integrated applications. ++2. Azure AD Connect Cloud Sync provisions the user into Azure AD, which enables the user to access SharePoint Online and their OneDrive files. ++3. 
The Azure AD provisioning service detects a new account was created in Azure AD. It then creates accounts in the SaaS and on-premises applications the user needs access to. ++### Hybrid-advanced ++In this example, the organization has users spread across multiple on-premises HR systems and cloud HR. They have large groups and device synchronization requirements. ++![Advanced hybrid deployment model](media/automate-user-provisioning-to-applications-introduction/hybrid-advanced.png) ++1. MIM imports user information from each HR system. MIM determines which users are needed for those employees in different directories. MIM provisions those identities in AD DS. ++2. Azure AD Connect Sync then synchronizes those users and groups to Azure AD and provides users access to their resources. ++## Next steps ++* [Solutions to automate user provisioning to applications](automate-provisioning-to-applications-solutions.md) |
active-directory | Automate Provisioning To Applications Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/automate-provisioning-to-applications-solutions.md | + + Title: Solutions to automate identity provisioning to applications +description: Learn to design solutions to automatically provision identities based on various scenarios. +++++++ Last updated : 09/29/2022+++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Solutions ++This article presents solutions that enable you to: ++* Connect identities with your system of record +* Synchronize identities between Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD) +* Automate provisioning of users into non-Microsoft applications ++## Connect identities with your system of record ++In most designs, the human resources (HR) system is the source-of-authority for newly created digital identities. The HR system is often the starting point for many provisioning processes. For example, if a new user joins a company, they have a record in the HR system. That user likely needs an account to access Microsoft 365 services such as Teams and SharePoint, or non-Microsoft applications. ++### Synchronizing identities with cloud HR ++The Azure AD provisioning service enables organizations to [bring identities from popular HR systems](../app-provisioning/what-is-hr-driven-provisioning.md) (examples: [Workday](../saas-apps/workday-inbound-tutorial.md) and [SuccessFactors](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)), into Azure AD directly, or into AD DS. This provisioning capability enables new hires to access the resources they need from the first day of work. ++### On-premises HR + joining multiple data sources ++To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud. ++MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS. ++![Systems of record model](media/automate-user-provisioning-to-applications-solutions/system-of-record.png) ++## Synchronize identities between Active Directory Domain Services (AD DS) and Azure AD ++As customers move applications to the cloud, and integrate with Azure AD, users often need accounts in Azure AD, and AD to access the applications for their work. Here are five common scenarios in which objects need to be synchronized between AD and Azure AD. ++The scenarios are divided by the direction of synchronization needed, and are listed, one through five. Use the table following the scenarios to determine what technical solution provides the synchronization. ++Use the numbered sections in the next two section to cross reference the following table. 
++**Synchronize identities from AD DS into Azure AD** ++1. For users in AD that need access to Office 365 or other applications that are connected to Azure AD, Azure AD Connect cloud sync is the first solution to explore. It provides a lightweight solution to create users in Azure AD, manage password resets, and synchronize groups. Configuration and management are primarily done in the cloud, minimizing your on-premises footprint. It provides high availability and automatic failover, ensuring password resets and synchronization continue, even if there's an issue with on-premises servers. ++1. For complex, large-scale AD to Azure AD sync needs, such as synchronizing groups with more than 50,000 members and device sync, customers can use Azure AD Connect sync to meet their needs. ++**Synchronize identities from Azure AD into AD DS** ++As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources. ++3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](../external-identities/hybrid-cloud-to-on-premises.md). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises. ++1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md). ++1. When users need access to cloud apps that still rely on legacy access protocols (for example, LDAP and Kerberos/NTLM), [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) synchronizes identities between Azure AD and a managed AD domain. ++|No.| What | From | To | Technology | +| - | - | - | - | - | +| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md) | +| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](../hybrid/whatis-azure-ad-connect.md) | +| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) | +| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)| +| 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | ++The table depicts common scenarios and the recommended technology. ++## Automate provisioning users into non-Microsoft applications ++After identities are in Azure AD through HR provisioning or Azure AD Connect cloud sync / Azure AD Connect sync, the employee can use the identity to access Teams, SharePoint, and Microsoft 365 applications. However, employees still need access to many non-Microsoft applications to perform their work. 
++![Automation decision matrix](media/automate-user-provisioning-to-applications-solutions/automate-provisioning-decision-matrix.png) ++### Automate provisioning to apps and clouds that support the SCIM standard ++Azure AD supports the System for Cross-Domain Identity Management ([SCIM 2.0](https://aka.ms/scimoverview)) standard and integrates with hundreds of popular SaaS applications such as [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md) and [Atlassian](../saas-apps/atlassian-cloud-provisioning-tutorial.md), and with other clouds such as [Amazon Web Services (AWS)](../saas-apps/aws-single-sign-on-provisioning-tutorial.md) and [Google Cloud](../saas-apps/g-suite-provisioning-tutorial.md). Application developers can use the System for Cross-Domain Identity Management (SCIM) user management API to automate provisioning of users and groups between Azure AD and your application. ++![SCIM standard](media/automate-user-provisioning-to-applications-solutions/automate-provisioning-scim-standard.png) ++In addition to the pre-integrated gallery applications, Azure AD supports provisioning to SCIM-enabled line-of-business applications, whether hosted [on-premises](../app-provisioning/on-premises-scim-provisioning.md) or in the cloud. The Azure AD provisioning service creates users and groups in these applications, and manages updates, such as when a user is promoted or leaves the company. ++[Learn more about provisioning to SCIM-enabled applications](../app-provisioning/use-scim-to-provision-users-and-groups.md) ++### Automate provisioning to SQL and LDAP based applications ++ Many applications don't support the SCIM standard, and customers have historically used connectors developed for MIM to connect to them. The Azure AD provisioning service supports reusing connectors developed for MIM and provisioning users into applications that rely on an LDAP user store or a SQL database. ++[Learn more about on-premises application provisioning](../app-provisioning/user-provisioning.md) ++### Use integrations developed by partners ++Many applications don't yet support SCIM and don't rely on SQL or LDAP databases. Microsoft partners have developed SCIM gateways that allow you to synchronize users between Azure AD and various systems such as mainframes, HR systems, and legacy databases. In the following image, the SCIM gateways are built and managed by partners. ++![Agent with SCIM gateway](media/automate-user-provisioning-to-applications-solutions/provisioning-agent-with-scim-gateway.png) ++[Learn more about partner-driven integrations](../app-provisioning/partner-driven-integrations.md) ++### Manage local app passwords ++Many applications have a local authentication store and a UI that only checks the user's supplied credentials against that store. As a result, these applications can't support multi-factor authentication (MFA) through Azure AD and pose a security risk. Microsoft recommends enabling single sign-on and MFA for all your applications. Based on our studies, your account is more than 99.9% less likely to be compromised if you [use MFA](https://aka.ms/securitysteps). However, in cases where the application can't externalize authentication, customers can use MIM to sync password changes to these applications. 
++![Manage local app passwords](media/automate-user-provisioning-to-applications-solutions/manage-local-app-passwords.png) ++[Learn more about the MIM password change notification service](/microsoft-identity-manager/infrastructure/mim2016-password-management) ++### Define and provision access for a user based on organizational data ++MIM enables you to import organizational data such as job codes and locations. That information can then be used to automatically set up access rights for that user. ++![Provision access from org data](media/automate-user-provisioning-to-applications-solutions/provision-access-based-on-org-data.png) ++### Automate common business workflows ++After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to automate appropriate actions at key moments in a user's lifecycle, such as joiner, mover, and leaver. These custom workflows can be triggered by Azure AD LCW automatically, or on demand to enable or disable accounts, generate Temporary Access Passes, update Teams and/or group membership, send automated emails, and trigger a Logic App. This can help organizations ensure: ++* **Joiner**: When a user joins the organization, they're ready to go on day one. They have the correct access to the information and applications they need. They have the hardware necessary to do their job. ++* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence, or retirement), their access is revoked in a timely manner. ++[Learn more about Azure AD Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md) ++> [!Note] +> For scenarios not covered by LCW, customers can use the extensibility of [Azure Logic Apps](../..//logic-apps/logic-apps-overview.md). ++### Reconcile changes made directly in the target system ++Organizations often need a complete audit trail of which users have access to applications that contain data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides reconciliation capabilities to detect changes made directly in a target system and roll them back. In addition to detecting changes in target applications, MIM can import identities from third-party applications to Azure AD. These applications often augment the set of user records that originated in the HR system. ++### Next steps ++1. Automate provisioning with any of your applications that are in the [Azure AD app gallery](../saas-apps/tutorial-list.md), support [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md), [SQL](../app-provisioning/on-premises-sql-connector-configure.md), or [LDAP](../app-provisioning/on-premises-ldap-connector-configure.md). +2. Evaluate [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md) for synchronization between AD DS and Azure AD. +3. Use the [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for complex provisioning scenarios. |
active-directory | B2c Deployment Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/b2c-deployment-plans.md | + + Title: Azure Active Directory B2C deployment plans +description: Azure Active Directory B2C deployment guide for planning, implementation, and monitoring ++++ Last updated : 01/17/2023++++++# Azure Active Directory B2C deployment plans ++Azure Active Directory B2C (Azure AD B2C) is an identity and access management solution that can ease integration with your infrastructure. Use the following guidance to help understand requirements and compliance throughout an Azure AD B2C deployment. ++## Plan an Azure AD B2C deployment ++### Requirements ++- Assess the primary reason to turn off systems + - See, [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md) +- For a new application, plan the design of the Customer Identity Access Management (CIAM) system + - See, [Planning and design](../../active-directory-b2c/best-practices.md#planning-and-design) +- Identify customer locations and create a tenant in the corresponding datacenter + - See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md) +- Confirm your application types and supported technologies: + - [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) + - [Develop with open source languages, frameworks, databases, and tools in Azure](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9). + - For back-end services, use the [client credentials](../develop/msal-authentication-flows.md#client-credentials) flow +- To migrate from an identity provider (IdP): + - [Seamless migration](../../active-directory-b2c/user-migration.md#seamless-migration) + - Go to [azure-ad-b2c-user-migration](https://github.com/azure-ad-b2c/user-migration) +- Select protocols + - If you use Kerberos, Microsoft Windows NT LAN Manager (NTLM), and Web Services Federation (WS-Fed), see the video, [Azure Active Directory: Application and identity migration to Azure AD B2C](https://www.bing.com/videos/search?q=application+migration+in+azure+ad+b2c&docid=608034225244808069&mid=E21B87D02347A8260128E21B87D02347A8260128&view=detail&FORM=VIRE) ++After migration, your applications can support modern identity protocols such as OAuth 2.0 and OpenID Connect (OIDC). ++### Stakeholders ++Technology project success depends on managing expectations, outcomes, and responsibilities. ++- Identify the application architect, technical program manager, and owner +- Create a distribution list (DL) to communicate with the Microsoft account or engineering teams + - Ask questions, get answers, and receive notifications +- Identify a partner or resource outside your organization to support you ++Learn more: [Include the right stakeholders](deployment-plans.md) ++### Communications ++Communicate proactively and regularly with your users about pending and current changes. Inform them about how the experience changes, when it changes, and provide a contact for support. 
++### Timelines ++Help set realistic expectations and make contingency plans to meet key milestones: ++- Pilot date +- Launch date +- Dates that affect delivery +- Dependencies ++## Implement an Azure AD B2C deployment ++* **Deploy applications and user identities** - Deploy client applications and migrate user identities +* **Client application onboarding and deliverables** - Onboard the client application and test the solution +* **Security** - Enhance the identity solution security +* **Compliance** - Address regulatory requirements +* **User experience** - Enable a user-friendly service ++### Deploy authentication and authorization ++* Before your applications interact with Azure AD B2C, register them in a tenant you manage + * See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md) +* For authorization, use the Identity Experience Framework (IEF) sample user journeys + * See, [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements) +* Use policy-based control for cloud-native environments + * Go to openpolicyagent.org to learn about [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) ++Learn more with the Microsoft Identity PDF, [Gaining expertise with Azure AD B2C](https://aka.ms/learnaadb2c), a course for developers. ++### Checklist for personas, permissions, delegation, and calls ++* Identify the personas that need access to your application +* Define how you manage system permissions and entitlements today, and in the future +* Confirm you have a permission store, and whether there are permissions to add to the directory +* Define how you manage delegated administration + * For example, management of your customers' customers +* Verify whether your application calls an API Manager (APIM) + * There might be a need to call from the IdP before the application is issued a token ++### Deploy applications and user identities ++Azure AD B2C projects start with one or more client applications. 
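
To make the client-application starting point concrete, the following minimal sketch shows a desktop (public client) application signing a user in through an Azure AD B2C user flow with the MSAL Python library. This is not taken from this guide: the tenant name, user-flow name, client ID, and API scope are placeholders, so substitute the values from your own B2C tenant and app registration.

```python
# Minimal sketch (placeholders throughout): a public client signing a user in
# through an Azure AD B2C user flow with the MSAL Python library.
# pip install msal
import msal

TENANT = "contoso"                                   # placeholder B2C tenant name
USER_FLOW = "B2C_1_signupsignin"                     # placeholder sign-up/sign-in user flow
CLIENT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder public client app registration
SCOPES = ["https://contoso.onmicrosoft.com/api/demo.read"]  # placeholder API scope

# B2C authorities follow the <tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<user flow> pattern.
authority = f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/{USER_FLOW}"
app = msal.PublicClientApplication(CLIENT_ID, authority=authority)

# Try the token cache first; fall back to an interactive sign-in
# (requires http://localhost registered as a redirect URI on the app).
accounts = app.get_accounts()
result = app.acquire_token_silent(SCOPES, account=accounts[0]) if accounts else None
if not result:
    result = app.acquire_token_interactive(scopes=SCOPES)

if "access_token" in result:
    print("Signed in; token expires in", result.get("expires_in"), "seconds")
else:
    print("Sign-in failed:", result.get("error"), result.get("error_description"))
```

The same pattern applies to the other client types in the checklists that follow; only the authority, scopes, and redirect handling change.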
++* [The new App registrations experience for Azure Active Directory B2C](../../active-directory-b2c/app-registrations-training-guide.md) + * Refer to [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md) for implementation +* Set up your user journey based on custom user flows + * [Comparing user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies) + * [Add an identity provider to your Azure Active Directory B2C tenant](../../active-directory-b2c/add-identity-provider.md) + * [Migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md) + * [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios ++### Application deployment checklist ++* Applications included in the CIAM deployment +* Applications in use + * For example, web applications, APIs, single-page apps (SPAs), or native mobile applications +* Authentication in use: + * For example, forms-based authentication, federation with SAML, or federation with OIDC + * If OIDC, confirm the response type: code or id_token +* Determine where front-end and back-end applications are hosted: on-premises, cloud, or hybrid-cloud +* Confirm the platforms or languages in use: + * For example, ASP.NET, Java, and Node.js + * See, [Quickstart: Set up sign in for an ASP.NET application using Azure AD B2C](../../active-directory-b2c/quickstart-web-app-dotnet.md) +* Verify where user attributes are stored + * For example, Lightweight Directory Access Protocol (LDAP) or databases ++### User identity deployment checklist ++* Confirm the number of users accessing applications +* Determine the IdP types needed: + * For example, Facebook, local account, and Active Directory Federation Services (AD FS) + * See, [Active Directory Federation Services](/windows-server/identity/active-directory-federation-services) +* Outline the claim schema required from your application, Azure AD B2C, and IdPs if applicable + * See, [ClaimsSchema](../../active-directory-b2c/claimsschema.md) +* Determine the information to collect during sign-in and sign-up + * [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow) ++### Client application onboarding and deliverables ++Use the following checklist for onboarding an application. ++|Area|Description| +||| +|Application target user group | Select among end customers, business customers, or a digital service. </br>Determine a need for employee sign-in.| +|Application business value| Understand the business need and/or goal to determine the best Azure AD B2C solution and integration with other client applications.| +|Your identity groups| Cluster identities into groups with requirements, such as business-to-consumer (B2C), business-to-business (B2B), business-to-employee (B2E), and business-to-machine (B2M) for IoT device sign-in and service accounts.| +|Identity provider (IdP)| See, [Select an identity provider](../../active-directory-b2c/add-identity-provider.md#select-an-identity-provider). For example, for a customer-to-customer (C2C) mobile app, use an easy sign-in process. </br>B2C with digital services has compliance requirements. </br>Consider email sign-in. | +|Regulatory constraints | Determine a need for remote profiles or privacy policies. | +|Sign-in and sign-up flow | Confirm email verification during sign-up. 
</br>For check-out processes, see [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). </br>See the video, [Azure AD: Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). | +|Application and authentication protocol| Implement client applications such as Web application, single-page application (SPA), or native. </br>Authentication protocols for client application and Azure AD B2C: OAuth, OIDC, and SAML. </br>See the video, [Azure AD: Protecting Web APIs with Azure AD](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9).| +| User migration | Confirm if you'll [migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md): Just-in-time (JIT) migration and bulk import/export. </br>See the video, [Azure Active Directory: Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2).| ++Use the following checklist for delivery. ++|Area| Description| +||| +|Protocol information| Gather the base path, policies, and metadata URL of both variants. </br>Specify attributes such as sample sign-in, client application ID, secrets, and redirects.| +|Application samples | See, [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md).| +|Penetration testing | Inform your operations team about pen tests, then test user flows including the OAuth implementation. </br>See, [Penetration testing](../../security/fundamentals/pen-testing.md) and [Penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement). +| Unit testing | Unit test and generate tokens. </br>See, [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md). </br>If you reach the Azure AD B2C token limit, see [Azure AD B2C: File Support Requests](../../active-directory-b2c/find-help-open-support-ticket.md). </br>Reuse tokens to reduce investigation on your infrastructure. </br>[Set up a resource owner password credentials flow in Azure Active Directory B2C](../../active-directory-b2c/add-ropc-policy.md?pivots=b2c-user-flow&tabs=app-reg-ga).| +| Load testing | Learn about [Azure AD B2C service limits and restrictions](../../active-directory-b2c/service-limits.md). </br>Calculate the expected authentications and user sign-ins per month. </br>Assess high load traffic durations and business reasons: holiday, migration, and event. </br>Determine expected peak rates for sign-up, traffic, and geographic distribution, for example per second. ++### Security ++Use the following checklist to enhance application security. ++* Authentication method, such as multi-factor authentication (MFA): + * MFA is recommended for users that trigger high-value transactions or other risk events. For example, banking, finance, and check-out processes. 
+ * See, [What authentication and verification methods are available in Azure AD?](../authentication/concept-authentication-methods.md) +* Confirm use of anti-bot mechanisms +* Assess the risk of fraudulent account creation or sign-in attempts + * See, [Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C](../../active-directory-b2c/partner-dynamics-365-fraud-protection.md) +* Confirm the Conditional Access postures needed as part of sign-in or sign-up ++#### Conditional Access and identity protection ++* The modern security perimeter now extends beyond an organization's network. The perimeter includes user and device identity. + * See, [What is Conditional Access?](../conditional-access/overview.md) +* Enhance the security of Azure AD B2C with Azure AD identity protection + * See, [Identity Protection and Conditional Access in Azure AD B2C](../../active-directory-b2c/conditional-access-identity-protection-overview.md) ++### Compliance ++To help comply with regulatory requirements and enhance back-end system security, you can use virtual networks (VNets), IP restrictions, a web application firewall (WAF), and so on. Consider the following requirements: ++* Your regulatory compliance requirements + * For example, Payment Card Industry Data Security Standard (PCI-DSS) + * Go to pcisecuritystandards.org to learn more about the [PCI Security Standards Council](https://www.pcisecuritystandards.org/) +* Data storage in a separate database store + * Determine whether this information can't be written into the directory ++### User experience ++Use the following checklist to help define user experience requirements. ++* Identify integrations to extend CIAM capabilities and build seamless end-user experiences + * [Azure Active Directory B2C ISV partners](../../active-directory-b2c/partner-gallery.md) +* Use screenshots and user stories to show the application end-user experience + * For example, screenshots of sign-in, sign-up, sign-up/sign-in (SUSI), profile edit, and password reset +* Look for hints passed through by using queryString parameters in your CIAM solution +* For high user-experience customization, consider using a front-end developer +* In Azure AD B2C, you can customize HTML and CSS + * See, [Guidelines for using JavaScript](../../active-directory-b2c/javascript-and-page-layout.md?pivots=b2c-custom-policy#guidelines-for-using-javascript) +* Implement an embedded experience by using iframe support: + * See, [Embedded sign-up or sign-in experience](../../active-directory-b2c/embedded-login.md?pivots=b2c-custom-policy) + * For a single-page application, use a second sign-in HTML page that loads into the `<iframe>` element ++## Monitoring, auditing, and logging ++Use the following checklist for monitoring, auditing, and logging. 
++* Monitoring + * [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md) + * See the video [Azure Active Directory: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) +* Auditing and logging + * [Accessing Azure AD B2C audit logs](../../active-directory-b2c/view-audit-logs.md) ++## Resources ++- [Register a Microsoft Graph application](../../active-directory-b2c/microsoft-graph-get-started.md) +- [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md) +- [Deploy custom policies with Azure Pipelines](../../active-directory-b2c/deploy-custom-policies-devops.md) +- [Manage Azure AD B2C custom policies with Azure PowerShell](../../active-directory-b2c/manage-custom-policies-powershell.md) ++## Next steps ++[Recommendations and best practices for Azure Active Directory B2C](../../active-directory-b2c/best-practices.md) |
active-directory | Backup Authentication System Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/backup-authentication-system-apps.md | + + Title: Application requirements for the backup authentication system +description: How to configure your application to allow for backup authentication system support. +++++ Last updated : 06/02/2023+++++++++# Application requirements for the backup authentication system ++The Azure AD backup authentication system provides resilience to applications that use supported protocols and flows. For more information about the backup authentication system, see the article [Azure AD's backup authentication system](backup-authentication-system.md). ++## Application requirements for protection ++Applications must communicate with a supported hostname for the given Azure environment and use protocols currently supported by the backup authentication system. Use of authentication libraries, such as the [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md), ensures that you're using authentication protocols supported by the backup authentication system. ++### Hostnames supported by the backup authentication system + +| Azure environment | Supported hostname | +| | | +| Azure Commercial | login.microsoftonline.com | +| Azure Government | login.microsoftonline.us | ++### Authentication protocols supported by the backup authentication system ++#### OAuth 2.0 and OpenID Connect (OIDC) ++##### Common guidance ++All applications using the OAuth 2.0 and/or OIDC protocols should adhere to the following practices to ensure resilience: ++- Your application uses MSAL or strictly adheres to the OpenID Connect & OAuth2 specifications. Microsoft recommends using MSAL libraries appropriate to your platform and use case. Using these libraries ensures that the APIs and call patterns you use are supported by the backup authentication system. +- Your application uses a fixed set of scopes instead of [dynamic consent](../develop/scopes-oidc.md) when acquiring access tokens. +- Your application doesn't use the [Resource Owner Password Credentials Grant](../develop/v2-oauth-ropc.md). **This grant type won't be supported** by the backup authentication system for any client type. Microsoft strongly recommends switching to alternative grant flows for better security and resilience. +- Your application doesn't rely upon the [UserInfo endpoint](../develop/userinfo.md). Switching to an ID token instead reduces latency by eliminating up to two network requests, and uses existing support for ID token resilience within the backup authentication system. ++##### Native applications ++Native applications are public client applications that run directly on desktop or mobile devices and not in a web browser. They're registered as public clients in their application registration on the Microsoft Entra or Azure portal. ++Native applications are protected by the backup authentication system when all the following are true: ++1. Your application persists the token cache for at least three days. Applications should use the device's token cache location or the [token cache serialization API](../develop/msal-net-token-cache-serialization.md) to persist the token cache even when the user closes the application. +1. Your application makes use of the MSAL [AcquireTokenSilent API](../develop/msal-net-acquire-token-silently.md) to retrieve tokens using cached refresh tokens. 
The [AcquireTokenInteractive API](../develop/scenario-desktop-acquire-token-interactive.md) may fail to acquire a token from the backup authentication system if user interaction is required. ++The backup authentication system doesn't currently support the [device authorization grant](../develop/v2-oauth2-device-code.md). ++##### Single-page web applications ++Single-page web applications (SPAs) have limited support in the backup authentication system. SPAs that use the [implicit grant flow](../develop/v2-oauth2-implicit-grant-flow.md) and request only OpenID Connect ID tokens are protected. Only apps that either use MSAL.js 1.x or implement the implicit grant flow directly can use this protection, as MSAL.js 2.x doesn't support the implicit flow. ++The backup authentication system doesn't currently support the [authorization code flow with Proof Key for Code Exchange](../develop/v2-oauth2-auth-code-flow.md). ++##### Web applications & services ++The backup authentication system doesn't currently support web applications and services that are configured as confidential clients. Protection for the [authorization code grant flow](../develop/v2-oauth2-auth-code-flow.md) and subsequent token acquisition using refresh tokens and client secrets or [certificate credentials](../develop/active-directory-certificate-credentials.md) isn't currently supported. The OAuth 2.0 [on-behalf-of flow](../develop/v2-oauth2-on-behalf-of-flow.md) isn't currently supported. ++#### SAML 2.0 single sign-on (SSO) ++The backup authentication system partially supports the SAML 2.0 SSO protocol. SAML 2.0 Identity Provider (IdP) initiated flows are protected by the backup authentication system. Applications that use the [Service Provider (SP) Initiated flow](../develop/single-sign-on-saml-protocol.md) aren't currently protected by the backup authentication system. ++### Workload identity authentication protocols supported by the backup authentication system ++#### OAuth 2.0 ++##### Managed identity ++Applications that use managed identities to acquire Azure Active Directory access tokens are protected. Microsoft recommends the use of user-assigned managed identities in most scenarios; however, this protection applies to both [user and system-assigned managed identities](../managed-identities-azure-resources/overview.md). ++##### Service principal ++The backup authentication system doesn't currently support service principal-based workload identity authentication using the [client credentials grant flow](../develop/v2-oauth2-client-creds-grant-flow.md). Microsoft recommends using the version of MSAL appropriate to your platform so your application is protected by the backup authentication system when the protection becomes available. ++## Next steps ++- [Azure AD's backup authentication system](backup-authentication-system.md) +- [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) +- [Introduction to the backup authentication system](https://azure.microsoft.com/blog/advancing-service-resilience-in-azure-active-directory-with-its-backup-authentication-service/) +- [Resilience Defaults for Conditional Access](../conditional-access/resilience-defaults.md) |
active-directory | Backup Authentication System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/backup-authentication-system.md | + + Title: Azure AD's backup authentication system +description: Increasing the resilience of the authentication plane with the backup authentication system. +++++ Last updated : 06/02/2023+++++++++# Azure AD's backup authentication system ++Users and organizations around the world depend on the high availability of Azure Active Directory (Azure AD) authentication of users and services 24 hours a day, seven days a week. We promise 99.99% service level availability for authentication, and we continuously seek to improve it by enhancing the resilience of our authentication service. To further improve resilience during outages, we implemented a backup system in 2021. ++The Azure AD backup authentication system is made up of multiple backup services that work together to increase authentication resilience if there's an outage. This system transparently and automatically handles authentications for supported applications and services if the primary Azure AD service is unavailable or degraded. It adds an extra layer of resilience on top of the multiple levels of existing redundancy. This resilience is described in the blog post [Advancing service resilience in Azure Active Directory with its backup authentication service](https://azure.microsoft.com/blog/advancing-service-resilience-in-azure-active-directory-with-its-backup-authentication-service/). This system syncs authentication metadata when the system is healthy and uses that to enable users to continue to access applications during outages of the primary service while still enforcing policy controls. ++During an outage of the primary service, users are able to continue working with their applications, as long as they accessed them in the last three days from the same device, and no blocking policies exist that would curtail their access. ++In addition to Microsoft applications, we support: ++- Native email clients on iOS and Android. +- SaaS applications available in the app gallery, like ADP, Atlassian, AWS, GoToMeeting, Kronos, Marketo, SAP, Trello, Workday, and more. +- Selected line of business applications, based on their authentication patterns. ++Service-to-service authentication that relies on Azure AD managed identities, or that is built on Azure services like virtual machines, cloud storage, Azure AI services, and App Services, receives increased resilience from the backup authentication system. ++Microsoft is continuously expanding the number of supported scenarios. ++## Which non-Microsoft workloads are supported? ++The backup authentication system automatically provides incremental resilience to tens of thousands of supported non-Microsoft applications based on their authentication patterns. See the appendix for a list of the most [common non-Microsoft applications and their coverage status](#appendix). For an in-depth explanation of which authentication patterns are supported, see the article [Understanding Application Support for the backup authentication system](backup-authentication-system-apps.md). ++- Native applications using the OAuth 2.0 protocol to access resource applications, such as popular non-Microsoft e-mail and IM clients like Apple Mail, Aqua Mail, Gmail, Samsung Email, and Spark. +- Line of business web applications configured to authenticate with OpenID Connect using only ID tokens. 
+- Web applications authenticating with the SAML protocol, when configured for IDP-Initiated Single Sign On (SSO), like ADP, Atlassian Cloud, AWS, GoToMeeting, Kronos, Marketo, Palo Alto Networks, SAP Cloud Identity, Trello, Workday, and Zscaler. ++### Non-Microsoft application types that aren't protected ++The following auth patterns aren't currently supported: ++- Web applications that authenticate using OpenID Connect and request access tokens +- Web applications that use the SAML protocol for authentication, when configured as SP-Initiated SSO ++## What makes a user supportable by the backup authentication system? ++During an outage, a user can authenticate using the backup authentication system if the following conditions are met: ++1. The user has successfully authenticated using the same app and device in the last three days. +1. The user isn't required to authenticate interactively. +1. The user is accessing a resource as a member of their home tenant, rather than exercising a B2B or B2C scenario. +1. The user isn't subject to Conditional Access policies that limit the backup authentication system, like disabling [resilience defaults](../conditional-access/resilience-defaults.md). +1. The user hasn't been subject to a revocation event, such as a credential change since their last successful authentication. ++### How do interactive authentication and user activity affect resilience? ++The backup authentication system relies on metadata from a prior authentication to reauthenticate the user during an outage. For this reason, a user must have authenticated in the last three days using the same app on the same device for the backup service to be effective. Users who are inactive or haven't yet authenticated to a given app can't use the backup authentication system for that application. ++### How do Conditional Access policies affect resilience? ++Certain policies can't be evaluated in real time by the backup authentication system and must rely on prior evaluations of these policies. Under outage conditions, the service uses a prior evaluation by default to maximize resilience. For example, access that is conditioned on a user having a particular role (like Application Administrator) continues during an outage based on the role the user had during that latest authentication. If the outage-only use of a previous evaluation needs to be restricted, tenant administrators can choose a strict evaluation of all Conditional Access policies, even under outage conditions, by disabling resilience defaults. This decision should be taken with care because disabling [resilience defaults](../conditional-access/resilience-defaults.md) for a given policy prevents those users from using backup authentication. Resilience defaults must be re-enabled before an outage occurs for the backup system to provide resilience. ++Certain other types of policies don't support use of the backup authentication system. Use of the following policies reduces resilience; a hedged sketch for spotting these settings with Microsoft Graph follows the list: ++- Use of the [sign-in frequency control](../conditional-access/concept-conditional-access-session.md#sign-in-frequency) as part of a Conditional Access policy. +- Use of the [authentication methods policy](../conditional-access/concept-conditional-access-grant.md#require-authentication-strength). +- Use of [classic Conditional Access policies](../conditional-access/policy-migration.md). 
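
The following minimal sketch lists Conditional Access policies through Microsoft Graph and flags the session controls that can limit backup authentication coverage. It assumes you already hold a Graph access token with the Policy.Read.All permission; the token value is a placeholder, pagination is omitted, and the property names are taken from the Graph `conditionalAccessSessionControls` resource.

```python
# Minimal sketch (assumptions noted above): flag Conditional Access policies
# whose session controls can reduce backup authentication system coverage.
import requests

ACCESS_TOKEN = "<Graph-access-token-with-Policy.Read.All>"  # placeholder
url = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Pagination (@odata.nextLink) is omitted for brevity.
policies = requests.get(url, headers=headers).json().get("value", [])
for policy in policies:
    session = policy.get("sessionControls") or {}
    flags = []
    if (session.get("signInFrequency") or {}).get("isEnabled"):
        flags.append("sign-in frequency control")
    if session.get("disableResilienceDefaults"):
        flags.append("resilience defaults disabled")
    if flags:
        print(f"{policy.get('displayName')}: {', '.join(flags)}")
```

Authentication strength requirements and classic policies aren't visible through this endpoint in the same way, so treat the output as a starting point for review rather than a complete audit.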
++## Workload identity resilience in the backup authentication system ++In addition to user authentication, the backup authentication system provides resilience for [managed identities](../managed-identities-azure-resources/overview.md) and other key Azure infrastructure by offering a regionally isolated authentication service that is redundantly layered with the primary authentication service. This system enables infrastructure authentication within an Azure region to be resilient to issues that may occur in another region or within the larger Azure Active Directory service. This system complements Azure's cross-region architecture. Building your own applications using managed identities (MI) and following Azure's [best practices for resilience and availability]() ensures your applications are highly resilient. In addition to MI, this regionally resilient backup system protects key Azure infrastructure and services that keep the cloud functional. ++### Summary of infrastructure authentication support ++- Your services built on the Azure infrastructure using managed identities are protected by the backup authentication system. +- Azure services authenticating with each other are protected by the backup authentication system. +- Your services built on or off Azure when the identities are registered as service principals and not "managed identities" **aren't protected** by the backup authentication system. ++## Cloud environments that support the backup authentication system ++The backup authentication system is supported in all cloud environments except Azure China 21Vianet. The types of identities supported vary by cloud, as described in the following table. ++| Azure environment | Identities protected | +| | | +| Azure Commercial | Users, managed identities | +| Azure Government | Users, managed identities | +| Azure Government Secret | managed identities | +| Azure Government Top Secret | managed identities | +| Azure China | Not available | ++## Appendix ++### Popular non-Microsoft native client apps and app gallery applications ++| App Name | Protected | Why Not protected? 
| +| | | | +| ABBYY FlexiCapture 12 | No | SAML SP-initiated | +| Adobe Experience Manager | No | SAML SP-initiated | +| Adobe Identity Management (OIDC) | No | OIDC with Access Token | +| ADP | Yes | Protected | +| Apple Business Manager | No | SAML SP-initiated | +| Apple Internet Accounts | Yes | Protected | +| Apple School Manager | No | OIDC with Access Token | +| Aqua Mail | Yes | Protected | +| Atlassian Cloud | Yes \* | Protected | +| Blackboard Learn | No | SAML SP-initiated | +| Box | No | SAML SP-initiated | +| Brightspace by Desire2Learn | No | SAML SP-initiated | +| Canvas | No | SAML SP-initiated | +| Ceridian Dayforce HCM | No | SAML SP-initiated | +| Cisco AnyConnect | No | SAML SP-initiated | +| Cisco Webex | No | SAML SP-initiated | +| Citrix ADC SAML Connector for Azure AD | No | SAML SP-initiated | +| Clever | No | SAML SP-initiated | +| Cloud Drive Mapper | Yes | Protected | +| Cornerstone Single Sign-on | No | SAML SP-initiated | +| Docusign | No | SAML SP-initiated | +| Druva | No | SAML SP-initiated | +| F5 BIG-IP ARM Azure AD integration | No | SAML SP-initiated | +| FortiGate SSL VPN | No | SAML SP-initiated | +| Freshworks | No | SAML SP-initiated | +| Gmail | Yes | Protected | +| Google Cloud / G Suite Connector by Microsoft | No | SAML SP-initiated | +| HubSpot Sales | No | SAML SP-initiated | +| Kronos | Yes \* | Protected | +| Madrasati App | No | SAML SP-initiated | +| OpenAthens | No | SAML SP-initiated | +| Oracle Fusion ERP | No | SAML SP-initiated | +| Palo Alto Networks - GlobalProtect | No | SAML SP-initiated | +| Polycom - Skype for Business Certified Phone | Yes | Protected | +| Salesforce | No | SAML SP-initiated | +| Samsung Email | Yes | Protected | +| SAP Cloud Platform Identity Authentication | No | SAML SP-initiated | +| SAP Concur | Yes \* | Protected | +| SAP Concur Travel and Expense | Yes \* | Protected | +| SAP Fiori | No | SAML SP-initiated | +| SAP NetWeaver | No | SAML SP-initiated | +| SAP SuccessFactors | No | SAML SP-initiated | +| Service Now | No | SAML SP-initiated | +| Slack | No | SAML SP-initiated | +| Smartsheet | No | SAML SP-initiated | +| Spark | Yes | Protected | +| UKG pro | Yes \* | Protected | +| VMware Boxer | Yes | Protected | +| walkMe | No | SAML SP-initiated | +| Workday | No | SAML SP-initiated | +| Workplace from Facebook | No | SAML SP-initiated | +| Zoom | No | SAML SP-initiated | +| Zscaler | Yes \* | Protected | +| Zscaler Private Access (ZPA) | No | SAML SP-initiated | +| Zscaler ZSCloud | No | SAML SP-initiated | ++> [!NOTE] +> \* Apps configured to authenticate with the SAML protocol are protected when using IDP-Initiated authentication. 
Service Provider (SP) initiated SAML configurations aren't supported ++### Azure resources and their status ++| resource | Azure resource name | Status | +| | | | +| microsoft.apimanagement | API Management service in Azure Government and China regions | Protected | +| microsoft.app | App Service | Protected | +| microsoft.appconfiguration | Azure App Configuration | Protected | +| microsoft.appplatform | Azure App Service | Protected | +| microsoft.authorization | Azure Active Directory | Protected | +| microsoft.automation | Automation Service | Protected | +| microsoft.avs | Azure VMware Solution | Protected | +| microsoft.batch | Azure Batch | Protected | +| microsoft.cache | Azure Cache for Redis | Protected | +| microsoft.cdn | Azure Content Delivery Network (CDN) | Not protected | +| microsoft.chaos | Azure Chaos Engineering | Protected | +| microsoft.cognitiveservices | Azure AI services APIs and Containers | Protected | +| microsoft.communication | Azure Communication Services | Not protected | +| microsoft.compute | Azure Virtual Machines | Protected | +| microsoft.containerinstance | Azure Container Instances | Protected | +| microsoft.containerregistry | Azure Container Registry | Protected | +| microsoft.containerservice | Azure Container Service (deprecated) | Protected | +| microsoft.dashboard | Azure Dashboards | Protected | +| microsoft.databasewatcher | Azure SQL Database Automatic Tuning | Protected | +| microsoft.databox | Azure Data Box | Protected | +| microsoft.databricks | Azure Databricks | Not protected | +| microsoft.datacollaboration | Azure Data Share | Protected | +| microsoft.datadog | Datadog | Protected | +| microsoft.datafactory | Azure Data Factory | Protected | +| microsoft.datalakestore | Azure Data Lake Storage Gen1 and Gen2 | Not protected | +| microsoft.dataprotection | Microsoft Cloud App Security Data Protection API | Protected | +| microsoft.dbformysql | Azure Database for MySQL | Protected | +| microsoft.dbforpostgresql | Azure Database for PostgreSQL | Protected | +| microsoft.delegatednetwork | Delegated Network Management service | Protected | +| microsoft.devcenter | Microsoft Store for Business and Education | Protected | +| microsoft.devices | Azure IoT Hub and IoT Central | Not protected | +| microsoft.deviceupdate | Windows 10 IoT Core Services Device Update | Protected | +| microsoft.devtestlab | Azure DevTest Labs | Protected | +| microsoft.digitaltwins | Azure Digital Twins | Protected | +| microsoft.documentdb | Azure Cosmos DB | Protected | +| microsoft.eventgrid | Azure Event Grid | Protected | +| microsoft.eventhub | Azure Event Hubs | Protected | +| microsoft.healthbot | Health Bot Service | Protected | +| microsoft.healthcareapis | FHIR API for Azure API for FHIR and Microsoft Cloud for Healthcare solutions | Protected | +| microsoft.hybridcontainerservice | Azure Arc enabled Kubernetes | Protected | +| microsoft.hybridnetwork | Azure Virtual WAN | Protected | +| microsoft.insights | Application Insights and Log Analytics | Not protected | +| microsoft.iotcentral | IoT Central | Protected | +| microsoft.kubernetes | Azure Kubernetes Service (AKS) | Protected | +| microsoft.kusto | Azure Data Explorer (Kusto) | Protected | +| microsoft.loadtestservice | Visual Studio Load Testing Service | Protected | +| microsoft.logic | Azure Logic Apps | Protected | +| microsoft.machinelearningservices | Machine Learning Services on Azure | Protected | +| microsoft.managedidentity | Managed identities for Microsoft Resources | Protected | +| 
microsoft.maps | Azure Maps | Protected | +| microsoft.media | Azure Media Services | Protected | +| microsoft.migrate | Azure Migrate | Protected | +| microsoft.mixedreality | Mixed Reality services including Remote Rendering, Spatial Anchors, and Object Anchors | Not protected | +| microsoft.netapp | Azure NetApp Files | Protected | +| microsoft.network | Azure Virtual Network | Protected | +| microsoft.openenergyplatform | Open Energy Platform (OEP) on Azure | Protected | +| microsoft.operationalinsights | Azure Monitor Logs | Protected | +| microsoft.powerplatform | Microsoft Power Platform | Protected | +| microsoft.purview | Azure Purview (formerly Azure Data Catalog) | Protected | +| microsoft.quantum | Microsoft Quantum Development Kit | Protected | +| microsoft.recommendationsservice | Azure AI services Recommendations API | Protected | +| microsoft.recoveryservices | Azure Site Recovery | Protected | +| microsoft.resourceconnector | Azure Resource Connector | Protected | +| microsoft.scom | System Center Operations Manager (SCOM) | Protected | +| microsoft.search | Azure Cognitive Search | Not protected | +| microsoft.security | Azure Security Center | Not protected | +| microsoft.securitydetonation | Microsoft Defender for Endpoint Detonation Service | Protected | +| microsoft.servicebus | Service Bus messaging service and Event Grid Domain Topics | Protected | +| microsoft.servicefabric | Azure Service Fabric | Protected | +| microsoft.signalrservice | Azure SignalR Service | Protected | +| microsoft.solutions | Azure Solutions | Protected | +| microsoft.sql | SQL Server on Virtual Machines and SQL Managed Instance on Azure | Protected | +| microsoft.storage | Azure Storage | Protected | +| microsoft.storagecache | Azure Storage Cache | Protected | +| microsoft.storagesync | Azure File Sync | Protected | +| microsoft.streamanalytics | Azure Stream Analytics | Not protected | +| microsoft.synapse | Synapse Analytics (formerly SQL DW) and Synapse Studio (formerly SQL DW Studio) | Protected | +| microsoft.usagebilling | Azure Usage and Billing Portal | Not protected | +| microsoft.videoindexer | Video Indexer | Protected | +| microsoft.voiceservices | Azure Communication Services - Voice APIs | Not protected | +| microsoft.web | Web Apps | Protected | ++## Next steps ++- [Application requirements for the backup authentication system](backup-authentication-system-apps.md) +- [Introduction to the backup authentication system](https://azure.microsoft.com/blog/advancing-service-resilience-in-azure-active-directory-with-its-backup-authentication-service/) +- [Resilience Defaults for Conditional Access](../conditional-access/resilience-defaults.md) +- [Azure Active Directory SLA performance reporting](../reports-monitoring/reference-azure-ad-sla-performance.md) |
active-directory | Deployment Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/deployment-plans.md | + + Title: Azure Active Directory deployment plans +description: Guidance on Azure Active Directory deployment, such as authentication, devices, hybrid scenarios, governance, and more. +++++++ Last updated : 01/17/2023++++++# Azure Active Directory deployment plans ++Use the following guidance to help deploy Azure Active Directory (Azure AD). Learn about business value, planning considerations, and operational procedures. You can use a browser's print-to-PDF function to create offline documentation. ++## Your stakeholders ++When beginning your deployment plans, include your key stakeholders. Identify and document stakeholders, roles, and responsibilities. Titles and roles can differ from one organization to another; however, the ownership areas are similar. ++|Role |Responsibility | +|-|-| +|Sponsor|An enterprise senior leader with authority to approve and/or assign budget and resources. The sponsor is the connection between managers and the executive team.| +|End user|The people for whom the service is implemented. Users can participate in a pilot program.| +|IT Support Manager|Provides input on the supportability of proposed changes | +|Identity architect or Azure Global Administrator|Defines how the change aligns with identity management infrastructure| +|Application business owner |Owns the affected application(s), which might include access management. Provides input on the user experience.| +|Security owner|Confirms the change plan meets security requirements| +|Compliance manager|Ensures compliance with corporate, industry, or governmental requirements| ++### RACI ++RACI is an acronym derived from four key responsibilities: ++* **Responsible** +* **Accountable** +* **Consulted** +* **Informed** ++Use these terms to clarify and define roles and responsibilities in your project, and for other cross-functional or departmental projects and processes. ++## Authentication ++Use the following list to plan for authentication deployment. 
++* **Azure AD multi-factor authentication (MFA)** - Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process: + * See the video, [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM) + * See, [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md) +* **Conditional Access** - Implement automated access-control decisions for users to access cloud apps, based on conditions: + * See, [What is Conditional Access?](../conditional-access/overview.md) + * See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md) +* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention: + * See, [Passwordless authentication options for Azure AD](../authentication/concept-authentication-passwordless.md) + * See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md) +* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 security keys: + * See, [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md) + * See, [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md) ++## Applications and devices ++Use the following list to help deploy applications and devices. ++* **Single sign-on (SSO)** - Enable user access to apps and resources while signing in once, without being required to enter credentials again: + * See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md) + * See, [Plan an SSO deployment](../manage-apps/plan-sso-deployment.md) +* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance, requesting access to groups or managing access to resources on behalf of others. + * See, [My Apps portal overview](../manage-apps/myapps-overview.md) +* **Devices** - Evaluate device integration methods with Azure AD, choose the implementation plan, and more. + * See, [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md) ++## Hybrid scenarios ++The following list describes features and services for productivity gains in hybrid scenarios. ++* **Active Directory Federation Services (AD FS)** - Migrate user authentication from federation to cloud with pass-through authentication or password hash sync: + * See, [What is federation with Azure AD?](../hybrid/whatis-fed.md) + * See, [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md) +* **Azure AD Application Proxy** - Enable employees to be productive at any place and time, and from any device. Learn about software as a service (SaaS) apps in the cloud and corporate apps on-premises. 
Azure AD Application Proxy enables access without virtual private networks (VPNs) or demilitarized zones (DMZs): + * See, [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md) + * See, [Plan an Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md) +* **Seamless single sign-on (Seamless SSO)** - Use Seamless SSO for user sign-in, on corporate devices connected to a corporate network. Users don't need to enter passwords to sign in to Azure AD, and usually don't need to enter usernames. Authorized users access cloud-based apps without extra on-premises components: + * See, [Azure Active Directory SSO: Quickstart](../hybrid/how-to-connect-sso-quick-start.md) + * See, [Azure Active Directory Seamless SSO: Technical deep dive](../hybrid/how-to-connect-sso-how-it-works.md) ++## Users ++* **User identities** - Learn about automation to create, maintain, and remove user identities in cloud apps, such as Dropbox, Salesforce, ServiceNow, and more. + * See, [Plan an automatic user provisioning deployment in Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md) +* **Identity governance** - Create identity governance and enhance business processes that rely on identity data. With HR products, such as Workday or SuccessFactors, manage employee and contingent-staff identity lifecycle with rules. These rules map Joiner-Mover-Leaver processes, such as New Hire, Terminate, and Transfer, to IT actions such as Create, Enable, and Disable. + * See, [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) +* **Azure AD B2B collaboration** - Improve external-user collaboration with secure access to applications: + * See, [B2B collaboration overview](../external-identities/what-is-b2b.md) + * See, [Plan an Azure Active Directory B2B collaboration deployment](secure-external-access-resources.md) ++## Identity governance and reporting ++Use the following list to learn about identity governance and reporting. Items in the list refer to Microsoft Entra. ++Learn more: [Secure access for a connected world—meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039) ++* **Privileged Identity Management (PIM)** - Manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. Use it for just-in-time access, request approval workflows, and fully integrated access reviews to help prevent malicious activities: + * See, [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md) + * See, [Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md) +* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes. + * See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md) +* **Access reviews** - Understand and manage access to resources: + * See, [What are access reviews?](../governance/access-reviews-overview.md) + * See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md) +* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access. 
+ * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md) ++## Best practices for a pilot ++Use pilots to test with a small group, before making a change for larger groups, or everyone. Ensure each use case in your organization is tested. ++### Pilot: Phase 1 ++In your first phase, target IT, usability, and other users who can test and provide feedback. Use this feedback to gain insights on potential issues for support staff, and to develop communications and instructions you send to all users. ++### Pilot: Phase 2 ++Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s). ++Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md) |
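The dynamic membership approach in Phase 2 can be scripted. The following is a minimal sketch, assuming the Azure AD PowerShell module (the dynamic-membership parameters of `New-AzureADMSGroup` may require the AzureAD Preview module); the group name and the department-based rule are illustrative examples only, not values from this guidance.

```powershell
# Minimal sketch: create a dynamic group to widen a pilot ring.
# Assumes the AzureAD (or AzureADPreview) module is installed and you can create groups.
# "Pilot Phase 2" and the department-based rule are example values only.
Connect-AzureAD

New-AzureADMSGroup -DisplayName "Pilot Phase 2" `
    -Description "Dynamic membership for the phase 2 pilot ring" `
    -MailEnabled $false -MailNickname "pilotphase2" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```

Users who match the rule are added automatically, so widening the pilot becomes a matter of adjusting the rule rather than manually maintaining group membership.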
active-directory | Govern Service Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md | + + Title: Governing Azure Active Directory service accounts +description: Principles and procedures for managing the lifecycle of service accounts in Azure Active Directory. +++++++ Last updated : 02/09/2023+++++++# Governing Azure Active Directory service accounts ++There are three types of service accounts in Azure Active Directory (Azure AD): managed identities, service principals, and user accounts employed as service accounts. When you create service accounts for automated use, they're granted permissions to access resources in Azure and Azure AD. Resources can include Microsoft 365 services, software as a service (SaaS) applications, custom applications, databases, HR systems, and so on. Governing Azure AD service accounts means managing their creation, permissions, and lifecycle to ensure security and continuity. ++Learn more: ++* [Securing managed identities](service-accounts-managed-identities.md) +* [Securing service principals](service-accounts-principal.md) ++> [!NOTE] +> We do not recommend user accounts as service accounts because they are less secure. This includes on-premises service accounts synced to Azure AD, because they aren't converted to service principals. Instead, we recommend managed identities, or service principals, and the use of Conditional Access. ++Learn more: [What is Conditional Access?](../conditional-access/overview.md) ++## Plan your service account ++Before creating a service account, or registering an application, document the service account's key information. Use the information to monitor and govern the account. We recommend collecting the following data and tracking it in your centralized Configuration Management Database (CMDB). ++| Data| Description| Details | +| - | - | - | +| Owner| User or group accountable for managing and monitoring the service account| Grant the owner permissions to monitor the account and implement a way to mitigate issues. Issue mitigation is done by the owner, or by request to an IT team. | +| Purpose| How the account is used| Map the service account to a service, application, or script. Avoid creating multi-use service accounts. | +| Permissions (Scopes)| Anticipated set of permissions| Document the resources it accesses and the permissions for those resources | +| CMDB Link| Link to the accessed resources, and scripts in which the service account is used| Document the resource and script owners to communicate the effects of change | +| Risk assessment| Risk and business effect, if the account is compromised| Use the information to narrow the scope of permissions and determine access to information | +| Period for review| The cadence of service account reviews, by the owner| Review communications and reviews. Document what happens if a review is performed after the scheduled review period. | +| Lifetime| Anticipated maximum account lifetime| Use this measurement to schedule communications to the owner, disable, and then delete the accounts. Set an expiration date for credentials that prevents them from rolling over automatically. | +| Name| Standardized account name| Create a naming convention for service accounts to search, sort, and filter them | +++## Principle of least privilege +Grant the service account only the permissions needed to perform its tasks, and no more. 
If a service account needs high-level permissions, for example a Global Administrator, evaluate why and try to reduce permissions. ++We recommend the following practices for service account privileges. ++### Permissions ++* Don't assign built-in roles to service accounts + * See, [oAuth2PermissionGrant resource type](/graph/api/resources/oauth2permissiongrant) +* If a service principal must be assigned a privileged role, use a custom role scoped to the required permissions + * [Create and assign a custom role in Azure Active Directory](../roles/custom-create.md) +* Don't include service accounts as members of any groups with elevated permissions + * See, [Get-AzureADDirectoryRoleMember](/powershell/module/azuread/get-azureaddirectoryrolemember): + +>Use `Get-AzureADDirectoryRoleMember`, and filter for objectType "Service Principal", or use<br /> +>`Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId }` ++* See, [Introduction to permissions and consent](../develop/v2-permissions-and-consent.md) to limit the functionality a service account can access on a resource +* Service principals and managed identities can use OAuth 2.0 scopes in a delegated context impersonating a signed-on user, or as a service account in the application context. In the application context, no one is signed in. +* Confirm the scopes service accounts request for resources + * If an account requests Files.ReadWrite.All, evaluate if it needs Files.Read.All + * [Microsoft Graph permissions reference](/graph/permissions-reference) +* Ensure you trust the application developer, or API, with the requested access ++### Duration ++* Limit service account credentials (client secret, certificate) to an anticipated usage period +* Schedule periodic reviews of service account usage and purpose + * Ensure reviews occur prior to account expiration ++After you understand the purpose, scope, and permissions, create your service account by using the instructions in the following articles. ++* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet) +* [Create an Azure Active Directory application and service principal that can access resources](../develop/howto-create-service-principal-portal.md) ++Use a managed identity when possible. If you can't use a managed identity, use a service principal. If you can't use a service principal, then use an Azure AD user account. ++## Build a lifecycle process ++A service account lifecycle starts with planning, and ends with permanent deletion. The following sections cover how you monitor, review permissions, determine continued account usage, and ultimately deprovision the account. ++### Monitor service accounts ++Monitor your service accounts to ensure usage patterns are correct, and that the service account is still in use. ++#### Collect and monitor service account sign-ins ++Use one of the following monitoring methods: ++* Azure AD Sign-In Logs in the Azure portal +* Export the Azure AD Sign-In Logs to + * [Azure Storage documentation](../../storage/index.yml) + * [Azure Event Hubs documentation](../../event-hubs/index.yml), or + * [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md) ++The following screenshot shows service principal sign-ins. ++![Screenshot of service principal sign-ins.](./media/govern-service-accounts/service-accounts-govern-1.png) ++#### Sign-in log details ++Look for the following details in sign-in logs. 
++* Service accounts not signed in to the tenant +* Changes in service account sign-in patterns ++We recommend you export Azure AD sign-in logs, and then import them into a security information and event management (SIEM) tool, such as Microsoft Sentinel. Use the SIEM tool to build alerts and dashboards. ++### Review service account permissions ++Regularly review service account permissions and accessed scopes to see if they can be reduced or eliminated. ++* See, [Get-AzureADServicePrincipalOAuth2PermissionGrant](/powershell/module/azuread/get-azureadserviceprincipaloauth2permissiongrant) + * Use the [Script to list all delegated permissions and application permissions in Azure AD](https://gist.github.com/psignoret/41793f8c6211d2df5051d77ca3728c09) to review the scopes granted to service accounts +* See, [Azure AD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) and confirm validity +* Don't set service principal credentials to **Never expire** (a sketch for finding long-lived credentials appears at the end of this article) +* Use certificates or credentials stored in Azure Key Vault, when possible + * [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md) ++The free PowerShell sample collects service principal OAuth2 grants and credential information, records them in a comma-separated values (CSV) file, and provides a Power BI sample dashboard. For more information, see [Azure AD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment). ++### Recertify service account use ++Establish a regular review process to ensure service accounts are regularly reviewed by the owner, the security team, or the IT team. ++The process includes: ++* Determine the service account review cycle, and document it in your CMDB +* Communications to the owner, security team, and IT team before a review +* Determine warning communications, and their timing, if the review is missed +* Instructions if owners fail to review or respond + * Disable, but don't delete, the account until the review is complete +* Instructions to determine dependencies. Notify resource owners of effects ++The review includes the owner and an IT partner, and they certify: ++* Account is necessary +* Permissions to the account are adequate and necessary, or a change is requested +* Access to the account, and its credentials, are controlled +* Account credentials are accurate: credential type and lifetime +* Account risk score hasn't changed since the previous recertification +* Update the expected account lifetime, and the next recertification date ++### Deprovision service accounts ++Deprovision service accounts under the following circumstances: ++* Account script or application is retired +* The function the account's script or application provides is retired, for example, access to a resource. 
+* Service account is replaced by another service account +* Credentials have expired, or the account is nonfunctional, and no issues have been reported ++Deprovisioning includes the following tasks: ++After the associated application or script is deprovisioned: ++* Review [Sign-in logs in Azure AD](../reports-monitoring/concept-sign-ins.md) and resource access by the service account + * If the account is active, determine how it's being used before continuing +* For a managed service identity, disable service account sign-in, but don't remove it from the directory +* Revoke service account role assignments and OAuth2 consent grants +* After a defined period, and warning to owners, delete the service account from the directory ++## Next steps ++* [Securing cloud-based service accounts](secure-service-accounts.md) +* [Securing managed identities](service-accounts-managed-identities.md) +* [Securing service principals](service-accounts-principal.md) |
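As referenced in the permission review section above, the following is a minimal sketch for finding long-lived service principal credentials. It assumes the AzureAD PowerShell module and read access to service principals; the one-year threshold and the CSV output path are illustrative assumptions, not part of the original guidance.

```powershell
# Minimal sketch: flag service principal credentials that remain valid beyond a chosen horizon.
# Assumes the AzureAD module is installed; the one-year horizon and output path are examples.
Connect-AzureAD

$horizon = (Get-Date).AddYears(1)

Get-AzureADServicePrincipal -All $true | ForEach-Object {
    $sp = $_
    # Collect both password (client secret) and certificate (key) credentials.
    $creds = @(Get-AzureADServicePrincipalPasswordCredential -ObjectId $sp.ObjectId) +
             @(Get-AzureADServicePrincipalKeyCredential -ObjectId $sp.ObjectId)
    foreach ($cred in $creds) {
        if ($cred.EndDate -gt $horizon) {
            [pscustomobject]@{
                DisplayName    = $sp.DisplayName
                ObjectId       = $sp.ObjectId
                CredentialType = $cred.GetType().Name
                CredentialEnds = $cred.EndDate
            }
        }
    }
} | Export-Csv -Path .\long-lived-sp-credentials.csv -NoTypeInformation
```

Review any flagged credentials against the anticipated usage period documented in your CMDB, and rotate or shorten them as needed.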
active-directory | Monitor Sign In Health For Resilience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/monitor-sign-in-health-for-resilience.md | + + Title: Monitor application sign-in health for resilience in Azure Active Directory +description: Create queries and notifications to monitor the sign-in health of your applications. +++++++ Last updated : 06/16/2023++++++# Monitoring application sign-in health for resilience ++To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications. You can receive an alert when an impacting incident occurs. This article walks through setting up the App sign-in health workbook to monitor for disruptions to your users' sign-ins. ++You can configure alerts based on the App sign-in health workbook. This workbook enables administrators to monitor authentication requests for applications in their tenants. It provides these key capabilities: ++- Configure the workbook to monitor all or individual apps with near real-time data. +- Configure alerts for authentication pattern changes so that you can investigate and respond. +- Compare trends over a period of time. Week over week is the workbook's default setting. ++> [!NOTE] +> See all available workbooks and the prerequisites for using them in [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). ++During an impacting event, two things may happen: ++- The number of sign-ins for an application may abruptly drop when users can't sign in. +- The number of sign-in failures may increase. ++## Prerequisites ++- An Azure AD tenant. +- A user with the global administrator or security administrator role for the Azure AD tenant. +- A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +- Azure AD logs integrated with Azure Monitor logs. Learn how to [Integrate Azure AD sign-in logs with Azure Monitor streams](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). ++## Configure the App sign-in health workbook ++To access workbooks in the **Azure portal**, select **Azure Active Directory**, and then select **Workbooks**. The following screenshot shows the Workbooks Gallery in the Azure portal. +++Workbooks appear under **Usage**, **Conditional Access**, and **Troubleshoot**. The App sign-in health workbook appears in the **Health** section. After you use a workbook, it may appear in the **Recently modified workbooks** section. ++You can use the App sign-in health workbook to visualize what is happening with your sign-ins. As shown in the following screenshot, the workbook presents two graphs. +++In the preceding screenshot, there are two graphs: ++- **Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you spot a drop in usage that may require investigation. A drop in the successful usage rate can help detect performance and utilization issues that the failure rate can't detect. For example, when users can't reach your application to attempt to sign in, there's a drop in usage but no failures. See the sample query for this data in the next section of this article. +- **Hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate measures only appear when users can attempt to authenticate. 
When users can't gain access to make the attempt, there are no failures. ++## Configure the query and alerts ++You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold. ++Use the following instructions to create email alerts based on the queries reflected in the graphs. The sample scripts send an email notification when: ++- The successful usage drops by 90% from the same hour two days ago, as shown in the preceding hourly usage graph example. +- The failure rate increases by 90% from the same hour two days ago, as shown in the preceding hourly failure rate graph example. ++To configure the underlying query and set alerts, complete the following steps using the sample query as the basis for your configuration. The query structure description appears at the end of this section. Learn how to create, view, and manage log alerts using Azure Monitor in [Manage log alerts](../../azure-monitor/alerts/alerts-log.md). ++1. In the workbook, select **Edit** as shown in the following screenshot. Select the **query icon** in the upper right corner of the graph. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/edit-workbook.png" alt-text="Screenshot showing edit workbook."::: ++2. View the query log as shown in the following screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/query-log.png" alt-text="Screenshot showing the query log."::: ++3. Copy one of the following sample scripts for a new Kusto query. ++ - [Kusto query for increase in failure rate](#kusto-query-for-increase-in-failure-rate) + - [Kusto query for drop in usage](#kusto-query-for-drop-in-usage) ++4. Paste the query in the window. Select **Run**. Look for the **Completed** message and the query results as shown in the following screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/run-query.png" alt-text="Screenshot showing the run query results."::: ++5. Highlight the query. Select **+ New alert rule**. + + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/new-alert-rule.png" alt-text="Screenshot showing the new alert rule screen."::: ++6. Configure alert conditions. As shown in the following example screenshot, in the **Condition** section, under **Measurement**, select **Table rows** for **Measure**. Select **Count** for **Aggregation type**. Select **2 days** for **Aggregation granularity**. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/configure-alerts.png" alt-text="Screenshot showing configure alerts screen."::: + + - **Table rows**. You can use the number of rows returned to work with events such as Windows event logs, Syslog, and application exceptions. + - **Aggregation type**. Data points applied with Count. + - **Aggregation granularity**. This value defines the period that works with **Frequency of evaluation**. ++7. In **Alert Logic**, configure the parameters as shown in the example screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/alert-logic.png" alt-text="Screenshot showing alert logic screen."::: + + - **Threshold value**: 0. This value alerts on any results. + - **Frequency of evaluation**: 1 hour. This value sets the evaluation period to once per hour for the previous hour. ++8. 
In the **Actions** section, configure settings as shown in the example screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/create-alert-rule.png" alt-text="Screenshot showing the Create an alert rule screen."::: + + - Select **Select action group** and add the group for which you want alert notifications. + - Under **Customize actions**, select **Email alerts**. + - Add a **subject line**. ++9. In the **Details** section, configure settings as shown in the example screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/details-section.png" alt-text="Screenshot showing the Details section."::: + + - Add a **Subscription** name and a description. + - Select the **Resource group** to which you want to add the alert. + - Select the default **Severity**. + - Select **Enable upon creation** if you want it to immediately go live. Otherwise, select **Mute actions**. ++10. In the **Review + create** section, configure settings as shown in the example screenshot. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/review-create.png" alt-text="Screenshot showing the Review + create section."::: ++11. Select **Save**. Enter a name for the query. For **Save as**, select **Query**. For **Category**, select **Alert**. Again, select **Save**. ++ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/save-query.png" alt-text="Screenshot showing the save query button."::: ++### Refine your queries and alerts ++To modify your queries and alerts for maximum effectiveness: ++- Always test alerts. +- Modify alert sensitivity and frequency to receive important notifications. Admins can become desensitized to alerts and miss something important if they get too many. +- In administrators' email clients, add the address that alerts are sent from to the allowed senders list. This approach prevents missed notifications due to a spam filter on their email clients. +- [By design](https://github.com/MicrosoftDocs/azure-docs/issues/22637), alert queries in Azure Monitor can only include results from the past 48 hours. ++## Sample scripts ++### Kusto query for increase in failure rate ++In the following query, we detect increasing failure rates. As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at the same time. A 0.5 result indicates a 50% difference in the traffic. 
++```kusto +let today = SigninLogs +| where TimeGenerated > ago(1h) // Query failure rate in the last hour +| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure") +// Optionally filter by a specific application +//| where AppDisplayName == **APP NAME** +| summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h) // hourly failure rate +| project TimeGenerated, failureRate = (failure * 1.0) / ((failure + success) * 1.0) +| sort by TimeGenerated desc +| serialize rowNumber = row_number(); +let yesterday = SigninLogs +| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Query failure rate at the same time yesterday +| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure") +// Optionally filter by a specific application +//| where AppDisplayName == **APP NAME** +| summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h) // hourly failure rate at the same time yesterday +| project TimeGenerated, failureRateYesterday = (failure * 1.0) / ((failure + success) * 1.0) +| sort by TimeGenerated desc +| serialize rowNumber = row_number(); +today +| join (yesterday) on rowNumber // join data from same time today and yesterday +| project TimeGenerated, failureRate, failureRateYesterday +// Set threshold to be the percent difference in failure rate in the last hour as compared to the same time yesterday +// Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected. +| extend day = dayofweek(now()) +| where day != time(6.00:00:00) // exclude Sat +| where day != time(0.00:00:00) // exclude Sun +| where day != time(1.00:00:00) // exclude Mon +| where abs(failureRate - failureRateYesterday) > 0.5 +``` +### Kusto query for drop in usage ++In the following query, we compare traffic in the last hour to yesterday's traffic at the same time. We exclude Saturday, Sunday, and Monday because we expect large variability in the previous day's traffic at the same time. ++As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at the same time. A 0.5 result indicates a 50% difference in the traffic. Adjust these values to fit your business operation model. 
++```kusto +let today = SigninLogs // Query traffic in the last hour
| where TimeGenerated > ago(1h) +| project TimeGenerated, AppDisplayName, UserPrincipalName +// Optionally filter by AppDisplayName to scope query to a single application +//| where AppDisplayName contains "Office 365 Exchange Online" +| summarize users = dcount(UserPrincipalName) by bin(TimeGenerated, 1h) // Count distinct users in the last hour +| sort by TimeGenerated desc +| serialize rn = row_number(); +let yesterday = SigninLogs // Query traffic at the same hour yesterday +| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Count distinct users in the same hour yesterday +| project TimeGenerated, AppDisplayName, UserPrincipalName +// Optionally filter by AppDisplayName to scope query to a single application +//| where AppDisplayName contains "Office 365 Exchange Online" +| summarize usersYesterday = dcount(UserPrincipalName) by bin(TimeGenerated, 1h) +| sort by TimeGenerated desc +| serialize rn = row_number(); +today +| join // Join data from today and yesterday together +( +yesterday +) +on rn +// Calculate the difference in number of users in the last hour compared to the same time yesterday +| project TimeGenerated, users, usersYesterday, difference = abs(users - usersYesterday), max = max_of(users, usersYesterday) +| extend ratio = (difference * 1.0) / max // Ratio is the percent difference in traffic in the last hour as compared to the same time yesterday +// Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected. +| extend day = dayofweek(now()) +| where day != time(6.00:00:00) // exclude Sat +| where day != time(0.00:00:00) // exclude Sun +| where day != time(1.00:00:00) // exclude Mon +| where ratio > 0.7 // Threshold percent difference in sign-in traffic as compared to the same hour yesterday +``` ++## Create processes to manage alerts ++After you set up queries and alerts, create business processes to manage the alerts. ++- Who monitors the workbook and when? +- When alerts occur, who investigates them? +- What are the communication needs? Who creates the communications and who receives them? +- When an outage occurs, what business processes apply? ++## Next steps ++[Learn more about workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) |
active-directory | Multi Tenant Common Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md | + + Title: Common considerations for multi-tenant user management in Azure Active Directory +description: Learn about the common design considerations for user access across Azure Active Directory tenants with guest accounts +++++++ Last updated : 04/19/2023+++++# Common considerations for multi-tenant user management ++This article is the third in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described. ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. +- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) provides guidance, when single tenancy doesn't work for your scenario, for these challenges: automatic user lifecycle management and resource allocation across tenants, and sharing on-premises apps across tenants. ++The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). ++Synchronization requirements are unique to your organization's specific needs. As you design a solution to meet your organization's requirements, the following considerations in this article will help you identify your best options. ++- Cross-tenant synchronization +- Directory object +- Azure AD Conditional Access +- Additional access control +- Office 365 ++## Cross-tenant synchronization ++[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) can address collaboration and access challenges of multi-tenant organizations. The following table shows common synchronization use cases. You can use both cross-tenant synchronization and custom development to satisfy use cases when considerations are relevant to more than one collaboration pattern. 
++| Use case | Cross-tenant sync | Custom development | +| - | - | - | +| User lifecycle management | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| File sharing and app access | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Support sync to/from sovereign clouds | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Control sync from resource tenant | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Sync Group objects | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Sync Manager links | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Attribute level Source of Authority | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | +| Azure AD write-back to AD | | ![Checkmark icon](media/multi-tenant-user-management-scenarios/checkmark.svg) | ++## Directory object considerations ++### Inviting an external user with UPN versus SMTP Address ++Azure AD B2B expects that a user's **UserPrincipalName** (UPN) is the primary SMTP (Email) address for sending invitations. When the user's UPN is the same as their primary SMTP address, B2B works as expected. However, if the UPN is different from the external user's primary SMTP address, it may fail to resolve when a user accepts an invitation, which may be a challenge if you don't know the user's real UPN. You need to discover and use the UPN when sending invitations for B2B. ++The [Microsoft Exchange Online](#microsoft-exchange-online) section of this article explains how to change the default primary SMTP on external users. This technique is useful if you want all email and notifications for an external user to flow to the real primary SMTP address as opposed to the UPN. It may be a requirement if the UPN isn't routable for mail flow. ++### Converting an external user's UserType ++When you use the console to manually create an invitation for an external user account, it creates the user object with a guest user type. Using other techniques to create invitations enables you to set the user type to something other than an external guest account. For example, when using the API, you can configure whether the account is an external member account or an external guest account. ++- Some of the [limits on guest functionality can be removed](../external-identities/user-properties.md#guest-user-permissions). +- You can [convert guest accounts to member user type.](../external-identities/user-properties.md#can-azure-ad-b2b-users-be-added-as-members-instead-of-guests) ++If you convert from an external guest user to an external member user account, there might be issues with how Exchange Online handles B2B accounts. You can't mail-enable accounts that you invited as external member users. To mail-enable an external member account, use the following recommended approach. ++- Invite the cross-org users as external guest user accounts. +- Show the accounts in the GAL. +- Set the UserType to Member. ++When you use this approach, the accounts show up as MailUser objects in Exchange Online and across Office 365. Also, note there's a timing challenge. 
Make sure the user is visible in the GAL by checking that the Azure AD user **ShowInAddressList** property aligns with the Exchange Online PowerShell **HiddenFromAddressListsEnabled** property (the two values are the reverse of each other). The [Microsoft Exchange Online](#microsoft-exchange-online) section of this article provides more information on changing visibility. ++It's possible to convert a member user to a guest user, which is useful for internal users that you want to restrict to guest-level permissions. Internal guest users are users that aren't employees of your organization but whose accounts and credentials you manage. It may allow you to avoid licensing the internal guest user. ++### Issues with using mail contact objects instead of external users or members ++You can represent users from another tenant using a traditional GAL synchronization. If you perform a GAL synchronization rather than using Azure AD B2B collaboration, it creates a mail contact object. ++- A mail contact object and a mail-enabled external member or guest user can't coexist in the same tenant with the same email address at the same time. +- If a mail contact object exists for the same mail address as the invited external user, the external user is created but isn't mail-enabled. +- If the mail-enabled external user exists with the same mail address, an attempt to create a mail contact object throws an exception at creation time. ++> [!NOTE] +> Using mail contacts requires Active Directory Domain Services (AD DS) or Exchange Online PowerShell. Microsoft Graph doesn't provide an API call for managing contacts. ++The following table displays the results of mail contact objects and external user states. ++| Existing state | Provisioning scenario | Effective result | +| - | - | - | +| None | Invite B2B Member | Non-mail-enabled member user. See important note above. | +| None | Invite B2B Guest | Mail-enabled external user. | +| Mail contact object exists | Invite B2B Member | Error. Conflict of Proxy Addresses. | +| Mail contact object exists | Invite B2B Guest | Mail contact and non-mail-enabled external user. See important note above. | +| Mail-enabled external guest user | Create mail contact object | Error | +| Mail-enabled external member user exists | Create mail contact object | Error | ++Microsoft recommends using Azure AD B2B collaboration (instead of traditional GAL synchronization) to create: ++- External users that you enable to show in the GAL. +- External member users that show in the GAL by default but aren't mail-enabled. ++You can choose to use the mail contact object to show users in the GAL. This approach integrates a GAL without providing other permissions because mail contacts aren't security principals. ++Follow this recommended approach to achieve the goal: ++- Invite guest users. +- Unhide them from the GAL. +- Disable them by [blocking them from sign-in](/powershell/module/azuread/set-azureaduser). ++A mail contact object can't convert to a user object. Therefore, properties associated with a mail contact object can't transfer (such as group memberships and other resource access). Using a mail contact object to represent a user comes with the following challenges. ++- **Office 365 Groups.** Office 365 Groups support policies governing the types of users allowed to be members of groups and interact with content associated with groups. For example, a group may not allow guest users to join. These policies can't govern mail contact objects. 
+- **Azure AD Self-service group management (SSGM).** Mail contact objects aren't eligible to be members in groups using the SSGM feature. You may need more tools to manage groups with recipients represented as contacts instead of user objects. +- **Azure AD Identity Governance, Access Reviews.** You can use the access reviews feature to review and attest to membership of an Office 365 group. Access reviews are based on user objects. Members represented by mail contact objects are out of scope for access reviews. +- **Azure AD Identity Governance, Entitlement Management (EM).** When you use EM to enable self-service access requests for external users in the company's EM portal, it creates a user object at the time of request. It doesn't support mail contact objects. ++## Azure AD conditional access considerations ++The state of the user, device, or network in the user's home tenant doesn't convey to the resource tenant. Therefore, an external user might not satisfy conditional access (CA) policies that use the following controls. ++Where allowed, you can override this behavior with [Cross-Tenant Access Settings (CTAS)](../external-identities/cross-tenant-access-overview.md) that honor MFA and device compliance from the home tenant. ++- **Require multi-factor authentication.** Without CTAS configured, an external user must register/respond to MFA in the resource tenant (even if MFA was satisfied in the home tenant), which results in multiple MFA challenges. If they need to reset their MFA proofs, they might not be aware of the multiple MFA proof registrations across tenants. The lack of awareness might require the user to contact an administrator in the home tenant, resource tenant, or both. +- **Require device to be marked as compliant.** Without CTAS configured, device identity isn't registered in the resource tenant, so the external user can't access resources that require this control. +- **Require Hybrid Azure AD Joined device.** Without CTAS configured, device identity isn't registered in the resource tenant (or on-premises Active Directory connected to the resource tenant), so the external user can't access resources that require this control. +- **Require approved client app or Require app protection policy.** Without CTAS configured, external users can't apply the resource tenant Intune Mobile App Management (MAM) policy because it also requires device registration. Resource tenant Conditional Access (CA) policy, using this control, doesn't allow home tenant MAM protection to satisfy the policy. Exclude external users from every MAM-based CA policy. ++Additionally, while you can use the following CA conditions, be aware of the possible ramifications. ++- **Sign-in risk and user risk.** User behavior in their home tenant determines, in part, the sign-in risk and user risk. The home tenant stores the data and risk score. If resource tenant policies block an external user, a resource tenant admin might not be able to enable access. [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md) explains how Identity Protection detects compromised credentials for Azure AD users. +- **Locations.** The named location definitions in the resource tenant determine the scope of the policy. The scope of the policy doesn't evaluate trusted locations managed in the home tenant. If your organization wants to share trusted locations across tenants, define the locations in each tenant where you define the resources and conditional access policies. 
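Before you apply the Conditional Access considerations above in a resource tenant, it can help to inventory the external accounts the policies would evaluate. The following is a minimal sketch, assuming the AzureAD PowerShell module; the selected properties and the output path are illustrative choices, not requirements from this article.

```powershell
# Minimal sketch: list guest (external) users that resource tenant Conditional Access policies would evaluate.
# Assumes the AzureAD module is installed and you can read users; the CSV path is an example.
Connect-AzureAD

Get-AzureADUser -Filter "userType eq 'Guest'" -All $true |
    Select-Object DisplayName, UserPrincipalName, AccountEnabled, CreationType |
    Export-Csv -Path .\guest-user-inventory.csv -NoTypeInformation
```

You can use the resulting list to decide which external accounts to include in, or exclude from, policies that target external users.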
++## Other access control considerations ++The following are considerations for configuring access control. ++- Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources. +- Design CA policies with external users in mind. +- Create policies specifically for external users. +- If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing CA policy, this policy affects external users because they are in scope of **all users**. +- Create dedicated CA policies for external accounts. ++### Require user assignment ++If an application has the **User assignment required?** property set to **No**, external users can access the application. Application admins must understand access control impacts, especially if the application contains sensitive information. [Restrict your Azure AD app to a set of users in an Azure AD tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) explains how registered applications in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who successfully authenticate. ++### Terms and conditions ++[Azure AD terms of use](../conditional-access/terms-of-use.md) provides a simple method that organizations can use to present information to end users. You can use terms of use to require external users to approve terms of use before accessing your resources. ++### Licensing considerations for guest users with Azure AD Premium features ++Azure AD External Identities pricing is based on monthly active users (MAU): the count of unique users with authentication activity within a calendar month. For details, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md). ++## Office 365 considerations ++The following information addresses Office 365 in the context of this paper's scenarios. Detailed information is available in [Microsoft 365 inter-tenant collaboration](/office365/enterprise/office-365-inter-tenant-collaboration), which describes options that include using a central location for files and conversations, sharing calendars, using IM, making audio/video calls for communication, and securing access to resources and applications. ++### Microsoft Exchange Online ++Exchange Online limits certain functionality for external users. You can lessen the limits by creating external member users instead of external guest users. Support for external users has the following limitations. ++- You can assign an Exchange Online license to an external user. However, a token for Exchange Online can't be issued to them, so they can't access the resource. + - External users can't use shared or delegated Exchange Online mailboxes in the resource tenant. + - You can assign an external user to a shared mailbox but they can't access it. +- You need to unhide external users to include them in the GAL. By default, they're hidden. + - Hidden external users are created at invite time. The creation is independent of whether the user has redeemed their invitation. So, if all external users are unhidden, the list includes user objects of external users who haven't redeemed an invitation. Based on your scenario, you may or may not want the objects listed. 
+ - External users may be unhidden using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2). You can execute the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet to set the **HiddenFromAddressListsEnabled** property to a value of **$false**. + +For example: ++```Set-MailUser [ExternalUserUPN] -HiddenFromAddressListsEnabled:$false``` ++Where **ExternalUserUPN** is the calculated **UserPrincipalName.** ++For example: ++```Set-MailUser externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false``` ++- External users may be unhidden using [Azure AD PowerShell](/powershell/module/azuread). You can execute the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) PowerShell cmdlet to set the **ShowInAddressList** property to a value of **$true**. + +For example: ++```Set-AzureADUser -ObjectId [ExternalUserUPN] -ShowInAddressList:$true``` ++Where **ExternalUserUPN** is the calculated **UserPrincipalName.** ++For example: ++```Set-AzureADUser -ObjectId externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -ShowInAddressList:$true``` ++- There's a timing delay, caused by the backend sync that occurs between Azure AD and Exchange Online, when you update attributes and then perform additional automation afterwards. Make sure the user is visible in the GAL by checking that the Azure AD user property **ShowInAddressList** aligns with the Exchange Online PowerShell property **HiddenFromAddressListsEnabled** (the two values are the reverse of each other) before continuing operations. +- You can only set updates to Exchange-specific properties (such as the **PrimarySmtpAddress**, **ExternalEmailAddress**, **EmailAddresses**, and **MailTip**) using [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell-v2). The Exchange Online Admin Center doesn't allow you to modify the attributes using the GUI. ++As shown above, you can use the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet for mail-specific properties. There are user properties that you can modify with the [Set-User](/powershell/module/exchange/set-user) PowerShell cmdlet. You can modify most properties with the Azure AD Graph APIs. ++One of the most useful features of **Set-MailUser** is the ability to manipulate the **EmailAddresses** property. This multi-valued attribute may contain multiple proxy addresses for the external user (such as SMTP, X500, SIP). By default, an external user has the primary SMTP address stamped correlating to the **UserPrincipalName** (UPN). If you want to change the primary SMTP and/or add SMTP addresses, you can set this property. You can't use the Exchange Admin Center; you must use Exchange Online PowerShell. [Add or remove email addresses for a mailbox in Exchange Online](/exchange/recipients-in-exchange-online/manage-user-mailboxes/add-or-remove-email-addresses) shows different ways to modify a multivalued property such as **EmailAddresses.** ++### Microsoft SharePoint Online ++SharePoint Online has its own service-specific permissions depending on whether the user (internal or external) is of type member or guest in the Azure Active Directory tenant. 
[Office 365 external sharing and Azure Active Directory B2B collaboration](../external-identities/o365-external-user.md) describes how you can enable integration with SharePoint and OneDrive to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management. ++After you enable external sharing in SharePoint Online, the ability to search for guest users in the SharePoint Online people picker is **OFF** by default. This setting prohibits guest users from being discoverable when they're hidden from the Exchange Online GAL. You can enable guest users to become visible in two ways (not mutually exclusive): ++- You can enable the ability to search for guest users in these ways: + - Modify the **ShowPeoplePickerSuggestionsForGuestUsers** setting at the tenant and site collection level. + - Set the feature using the [Set-SPOTenant](/powershell/module/sharepoint-online/Set-SPOTenant) and [Set-SPOSite](/powershell/module/sharepoint-online/set-sposite) [SharePoint Online PowerShell](/powershell/sharepoint/sharepoint-online/connect-sharepoint-online) cmdlets (see the sketch at the end of this article). +- Guest users that are visible in the Exchange Online GAL are also visible in the SharePoint Online people picker. The accounts are visible regardless of the setting for **ShowPeoplePickerSuggestionsForGuestUsers**. ++### Microsoft Teams ++Microsoft Teams has features to limit access based on user type. Changes to user type can affect content access and the features available. ++The tenant switching mechanism for Microsoft Teams might require users to manually switch the context of their Teams client when working in Teams outside their home tenant. ++With Teams federation, you can enable Teams users in an entire external domain to find, call, chat, and set up meetings with your users. [Manage external meetings and chat with people and organizations using Microsoft identities](/microsoftteams/manage-external-access) describes how you can allow users in your organization to chat and meet with people outside the organization who are using Microsoft as an identity provider. ++### Licensing considerations for guest users in Teams ++When you use Azure B2B with Office 365 workloads, key considerations include instances in which guest users (internal or external) don't have the same experience as member users. ++- **Microsoft Groups.** [Adding guests to Office 365 Groups](https://support.office.com/article/adding-guests-to-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) describes how guest access in Microsoft 365 Groups lets you and your team collaborate with people from outside your organization by granting them access to group conversations, files, calendar invitations, and the group notebook. +- **Microsoft Teams.** [Team owner, member, and guest capabilities in Teams](https://support.office.com/article/team-owner-member-and-guest-capabilities-in-teams-d03fdf5b-1a6e-48e4-8e07-b13e1350ec7b) describes the guest account experience in Microsoft Teams. You can enable a full fidelity experience in Teams by using external member users. Office 365 recently clarified its licensing policy for multi-tenant organizations. Users licensed in their home tenant may access resources in another tenant within the same legal entity. 
You can grant access using the external members setting with no extra licensing fees. The setting applies for SharePoint, OneDrive for Business, Teams, and Groups. +- **Identity Governance features.** Entitlement Management and access reviews may require other licenses for external users. +- **Other products.** Products such as Dynamics CRM may require licensing in every tenant in which a user is represented. ++## Next steps ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. +- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants. |
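As referenced in the SharePoint Online section above, the following is a minimal sketch for enabling guest user suggestions in the people picker. It assumes the SharePoint Online Management Shell is installed and that you're a SharePoint administrator; the admin center URL and the site collection URL are placeholders.

```powershell
# Minimal sketch: allow the people picker to suggest guest users at the tenant and site collection level.
# Assumes the SharePoint Online Management Shell; replace the URLs with your own values.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Tenant-level setting.
Set-SPOTenant -ShowPeoplePickerSuggestionsForGuestUsers $true

# Optionally, the same setting for a specific site collection.
Set-SPOSite -Identity "https://contoso.sharepoint.com/sites/collaboration" -ShowPeoplePickerSuggestionsForGuestUsers $true
```

Guest users that are already visible in the Exchange Online GAL appear in the people picker regardless of this setting, as noted above.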
active-directory | Multi Tenant Common Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-solutions.md | + + Title: Common solutions for multi-tenant user management in Azure Active Directory +description: Learn about common solutions used to configure user access across Azure Active Directory tenants with guest accounts +++++++ Last updated : 04/19/2023+++++# Common solutions for multi-tenant user management ++This article is the fourth in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described. ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series. +- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. ++The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). ++Microsoft recommends a single tenant wherever possible. If single tenancy doesn't work for your scenario, reference the following solutions that Microsoft customers have successfully implemented for these challenges: ++- Automatic user lifecycle management and resource allocation across tenants +- Sharing on-premises apps across tenants ++## Automatic user lifecycle management and resource allocation across tenants ++A customer acquires a competitor with whom they previously had close business relationships. The organizations want to maintain their corporate identities. ++### Current state ++Currently, the organizations are synchronizing each other's users as mail contact objects so that they show in each other's directories. Each resource tenant has enabled mail contact objects for all users in the other tenant. Across tenants, no access to applications is possible. ++### Goals ++The customer has the following goals. ++- Every user appears in each organization's GAL. + - User account lifecycle changes in the home tenant are automatically reflected in the resource tenant GAL. + - Attribute changes in home tenants (such as department, name, SMTP address) are automatically reflected in the resource tenant GAL and the home GAL. +- Users can access applications and resources in the resource tenant. +- Users can self-serve access requests to resources. ++### Solution architecture ++The organizations use a point-to-point architecture with a synchronization engine such as Microsoft Identity Manager (MIM). The following diagram illustrates an example of point-to-point architecture for this solution. ++ Diagram Title: Point-to-point architecture solution. On the left, a box labeled Company A contains internal users and external users. 
On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, sync engine interactions go from Company A to Company B and from Company B to Company A. ++Each tenant admin performs the following steps to create the user objects. ++1. Ensure that their user database is up to date. +1. [Deploy and configure MIM](/microsoft-identity-manager/microsoft-identity-manager-deploy). + 1. Address existing contact objects. + 1. Create external member user objects for the other tenant's internal member users. + 1. Synchronize user object attributes. +1. Deploy and configure [Entitlement Management](../governance/entitlement-management-overview.md) access packages. + 1. Resources to be shared. + 1. Expiration and access review policies. ++## Sharing on-premises apps across tenants ++A customer with multiple peer organizations needs to share on-premises applications from one of the tenants. ++### Current state ++Peer organizations are synchronizing external users in a mesh topology, enabling resource allocation to cloud applications across tenants. The customer offers following functionality. ++- Share applications in Azure AD. +- Automated user lifecycle management in resource tenant on home tenant (reflecting add, modify, and delete). ++The following diagram illustrates this scenario, where only internal users in Company A access Company A's on-premises apps. ++ Diagram Title: Mesh topology. On the top left, a box labeled Company A contains internal users and external users. On the top right, a box labeled Company B contains internal users and external users. On the bottom left, a box labeled Company C contains internal users and external users. On the bottom right, a box labeled Company D contains internal users and external users. Between Company A and Company B and between Company C and Company D, sync engine interactions go between the companies on the left and the companies on the right. ++### Goals ++Along with the current functionality, they want to offer the following. ++- Provide access to Company A's on-premises resources for the external users. +- Apps with SAML authentication. +- Apps with Integrated Windows Authentication and Kerberos. ++### Solution architecture ++Company A provides SSO to on-premises apps for its own internal users using Azure Application Proxy as illustrated in the following diagram. ++ Diagram Title: Azure Application Proxy architecture solution. On the top left, a box labeled https: //sales.constoso.com contains a globe icon to represent a website. Below it, a group of icons represent the User and are connected by an arrow from the User to the website. On the top right, a cloud shape labeled Azure Active Directory contains an icon labeled Application Proxy Service. An arrow connects the website to the cloud shape. On the bottom right, a box labeled DMZ has the subtitle On-premises. An arrow connects the cloud shape to the DMZ box, splitting in two to point to icons labeled Connector. Below the Connector icon on the left, an arrow points down and splits in two to point to icons labeled App 1 and App 2. Below the Connector icon on the right, an arrow points down to an icon labeled App 3. ++Admins in tenant A perform the following steps to enable their external users to access the same on-premises applications. ++1. [Configure access to SAML apps](../external-identities/hybrid-cloud-to-on-premises.md#access-to-saml-apps). +1. 
[Configure access to other applications](../external-identities/hybrid-cloud-to-on-premises.md#access-to-iwa-and-kcd-apps). +1. Create on-premises users through [MIM](../external-identities/hybrid-cloud-to-on-premises.md#create-b2b-guest-user-objects-through-mim) or [PowerShell](https://www.microsoft.com/download/details.aspx?id=51495). ++The following articles provide additional information about B2B collaboration. ++- [Grant B2B users in Azure AD access to your on-premises resources](../external-identities/hybrid-cloud-to-on-premises.md) describes how you can provide B2B users access to on-premises apps. +- [Azure Active Directory B2B collaboration for hybrid organizations](../external-identities/hybrid-organizations.md) describes how you can give your external partners access to apps and resources in your organization. ++## Next steps ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. +- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. |
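The MIM and PowerShell options referenced in this article ultimately drive the same B2B invitation capability. The following is a minimal, hedged sketch of that underlying call, assuming the AzureAD PowerShell module and a hypothetical `partner-users.csv` export of the partner tenant's users (the file name, columns, and redirect URL are illustrative, not part of the guidance above):

```powershell
# Minimal sketch: invite partner-tenant users as external *member* users.
# Assumes the AzureAD PowerShell module and a hypothetical partner-users.csv
# export with DisplayName and Mail columns; adjust to your own data source.
Connect-AzureAD

$partnerUsers = Import-Csv -Path .\partner-users.csv

foreach ($user in $partnerUsers) {
    New-AzureADMSInvitation `
        -InvitedUserDisplayName  $user.DisplayName `
        -InvitedUserEmailAddress $user.Mail `
        -InviteRedirectUrl       "https://myapps.microsoft.com" `
        -SendInvitationMessage   $false `
        -InvitedUserType         "Member"   # member-level permissions instead of the default Guest
}
```

At scale, a synchronization engine such as MIM would drive this call and keep attributes current; the sketch only shows the invitation step itself.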
active-directory | Multi Tenant User Management Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-introduction.md | + + Title: Configuring multi-tenant user management in Azure Active Directory +description: Learn about the different patterns used to configure user access across Azure Active Directory tenants with guest accounts +++++++ Last updated : 04/19/2023+++++# Multi-tenant user management introduction ++This article is the first in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described. ++- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. +- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants. ++The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). ++Provisioning users into a single Azure Active Directory (Azure AD) tenant provides a unified view of resources and a single set of policies and controls. This approach enables consistent user lifecycle management. ++Microsoft recommends a single tenant when possible. Having multiple tenants can result in unique cross-tenant collaboration and management requirements. When consolidation to a single Azure AD tenant isn't possible, multi-tenant organizations may span two or more Azure AD tenants for reasons that include the following. ++- Mergers +- Acquisitions +- Divestitures +- Collaboration across public, sovereign, and regional clouds +- Political or organizational structures that prohibit consolidation to a single Azure AD tenant ++## Azure AD B2B collaboration ++Azure AD B2B collaboration (B2B) enables you to securely share your company's applications and services with external users. When users can come from any organization, B2B helps you maintain control over access to your IT environment and data. ++You can use B2B collaboration to provide external access for your organization's users to access multiple tenants that you manage. Traditionally, B2B external user access authorizes access for users that your own organization doesn't manage. However, you can also use external user access to manage access across multiple tenants that your organization manages. ++An area of confusion with Azure AD B2B collaboration surrounds the [properties of a B2B guest user](../external-identities/user-properties.md).
The difference between internal versus external user accounts and member versus guest user types contributes to confusion. Initially, all internal users are member users with the **UserType** attribute set to *Member* (member users). An internal user has an account in your Azure AD that is authoritative and authenticates to the tenant where the user resides. A member user is a licensed user with default [member-level permissions](../fundamentals/users-default-permissions.md) in the tenant. Treat member users as employees of your organization. ++You can invite an internal user of one tenant into another tenant as an external user. An external user signs in with an external Azure AD account, social identity, or other external identity provider. External users authenticate outside the tenant to which you invite the external user. At B2B's first release, all external users were of **UserType** *Guest* (guest users). Guest users have [restricted permissions](../fundamentals/users-default-permissions.md) in the tenant. For example, guest users can't enumerate the list of all users or groups in the tenant directory. ++For the **UserType** property on users, B2B supports changing the value from member to guest, and vice versa, which contributes to the confusion. ++You can change an internal user from member user to guest user. For example, you can have an unlicensed internal guest user with guest-level permissions in the tenant, which is useful when you provide a user account and credentials to a person that isn't an employee of your organization. ++You can change an external user from guest user to member user, giving member-level permissions to the external user. Making this change is useful when you manage multiple tenants for your organization and need to give member-level permissions to a user across all tenants. This need may occur regardless of whether the user is internal or external in any given tenant. Member users may require more [licenses](../external-identities/external-identities-pricing.md). ++Most documentation for B2B refers to an external user as a guest user. It conflates the **UserType** property in a way that assumes all guest users are external. When documentation calls out a guest user, it assumes that it's an external guest user. This article specifically and intentionally refers to external versus internal and member user versus guest user. ++## Cross-tenant synchronization ++[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) enables multi-tenant organizations to provide seamless access and collaboration experiences to end users, leveraging existing B2B external collaboration capabilities. The feature doesn't allow cross-tenant synchronization across Microsoft sovereign clouds (such as Microsoft 365 US Government GCC High, DOD or Office 365 in China). See [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md#cross-tenant-synchronization) for help with automated and custom cross-tenant synchronization scenarios. ++Watch Arvind Harinder talk about the cross-tenant sync capability in Azure AD (embedded below). ++> [!VIDEO https://www.youtube.com/embed/7B-PQwNfGBc] ++The following conceptual and how-to articles provide information about Azure AD B2B collaboration and cross-tenant synchronization. ++### Conceptual articles ++- [B2B best practices](../external-identities/b2b-fundamentals.md) features recommendations for providing the smoothest experience for users and administrators.
+- [B2B and Office 365 external sharing](../external-identities/o365-external-user.md) explains the similarities and differences among sharing resources through B2B, Office 365, and SharePoint/OneDrive. +- [Properties on an Azure AD B2B collaboration user](../external-identities/user-properties.md) describes the properties and states of the external user object in Azure AD. The description provides details before and after invitation redemption. +- [B2B user tokens](../external-identities/user-token.md) provides examples of the bearer tokens for B2B for an external user. +- [Conditional access for B2B](../external-identities/authentication-conditional-access.md) describes how conditional access and MFA work for external users. +- [Cross-tenant access settings](../external-identities/cross-tenant-access-overview.md) provides granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). +- [Cross-tenant synchronization overview](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) explains how to automate creating, updating, and deleting Azure AD B2B collaboration users across tenants in an organization. ++### How-to articles ++- [Use PowerShell to bulk invite Azure AD B2B collaboration users](../external-identities/bulk-invite-powershell.md) describes how to use PowerShell to send bulk invitations to external users. +- [Enforce multifactor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md) explains how you can use conditional access and MFA policies to enforce tenant, app, or individual external user authentication levels. +- [Email one-time passcode authentication](../external-identities/one-time-passcode.md) describes how the Email one-time passcode feature authenticates external users when they can't authenticate through other means like Azure AD, a Microsoft account (MSA), or Google Federation. ++## Terminology ++The following terms in Microsoft content refer to multi-tenant collaboration in Azure AD. ++- **Resource tenant:** The Azure AD tenant containing the resources that users want to share with others. +- **Home tenant:** The Azure AD tenant containing users that require access to the resources in the resource tenant. +- **Internal user:** An internal user has an account that is authoritative and authenticates to the tenant where the user resides. +- **External user:** An external user has an external Azure AD account, social identity, or other external identity provider to sign in. The external user authenticates somewhere outside the tenant to which you have invited the external user. +- **Member user:** An internal or external member user is a licensed user with default member-level permissions in the tenant. Treat member users as employees of your organization. +- **Guest user:** An internal or external guest user has restricted permissions in the tenant. Guest users aren't employees of your organization (such as users for partners). Most B2B documentation refers to B2B Guests, which primarily refers to external guest user accounts. +- **User lifecycle management:** The process of provisioning, managing, and deprovisioning user access to resources. +- **Unified GAL:** Each user in each tenant can see users from each organization in their Global Address List (GAL). 
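The member/guest and internal/external distinctions above map to attributes that you can inspect directly in the directory. The following is a minimal sketch, assuming the AzureAD PowerShell module (the filters are illustrative only):

```powershell
# Minimal sketch: inspect UserType values with the AzureAD PowerShell module.
Connect-AzureAD

# Guest users (internal or external) have restricted, guest-level permissions.
Get-AzureADUser -Filter "userType eq 'Guest'" |
    Select-Object DisplayName, UserPrincipalName, UserType

# External users invited through B2B typically carry #EXT# in their UPN.
Get-AzureADUser -All $true |
    Where-Object { $_.UserPrincipalName -like "*#EXT#*" } |
    Select-Object DisplayName, UserPrincipalName, UserType
```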
++## Deciding how to meet your requirements ++Your organization's unique requirements influence your strategy for managing users across tenants. To create an effective strategy, consider the following requirements. ++- Number of tenants +- Type of organization +- Current topologies +- Specific user synchronization needs ++### Common requirements ++Organizations initially focus on requirements that they want in place for immediate collaboration. Sometimes called *Day One* requirements, they focus on enabling end users to smoothly merge without interrupting their ability to generate value. As you define Day One and administrative requirements, consider including the following requirements and needs. ++### Communications requirements ++- **Unified global address list:** Each user can see all other users in the GAL in their home tenant. +- **Free/busy information:** Enable users to discover each other's availability. You can do so with [Organization relationships in Exchange Online](/exchange/sharing/organization-relationships/create-an-organization-relationship). +- **Chat and presence:** Enable users to determine others' presence and initiate instant messaging. Configure through [external access in Microsoft Teams](/microsoftteams/trusted-organizations-external-meetings-chat). +- **Book resources such as meeting rooms:** Enable users to book conference rooms or other resources across the organization. Cross-tenant conference room booking isn't currently available. +- **Single email domain:** Enable all users to send and receive mail from a single email domain (for example, `users@contoso.com`). Sending requires an email address rewrite solution. ++### Access requirements ++- **Document access:** Enable users to share documents from SharePoint, OneDrive, and Teams. +- **Administration:** Allow administrators to manage configuration of subscriptions and services deployed across multiple tenants. +- **Application access:** Allow end users to access applications across the organization. +- **Single Sign On:** Enable users to access resources across the organization without the need to enter more credentials. +### Patterns for account creation ++Microsoft mechanisms for creating and managing the lifecycle of your external user accounts follow three common patterns. You can use these patterns to help define and implement your requirements. Choose the pattern that best aligns with your scenario and then focus on the pattern details. ++| Mechanism | Description | Best when | +| - | - | - | +| [End user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite external users to the tenant, an app, or a resource to users within the resource tenant. You can invite users from the home tenant or they can individually sign up. | Unified Global Address List on Day One not required. | +|[Scripted](multi-tenant-user-management-scenarios.md#scripted-scenario) | Resource tenant administrators deploy a scripted *pull* process to automate discovery and provisioning of external users to support sharing scenarios. | Small number of tenants (such as two). | +|[Automated](multi-tenant-user-management-scenarios.md#automated-scenario)| Resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes. | You need Unified Global Address List across tenants. 
| + +## Next steps ++- [Multi-tenant user management scenarios](multi-tenant-user-management-scenarios.md) describes three scenarios for which you can use multi-tenant user management features: end user-initiated, scripted, and automated. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. +- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants. +- [Multi-tenant synchronization from Active Directory](../hybrid/plan-connect-topologies.md) describes various on-premises and Azure Active Directory (Azure AD) topologies that use Azure AD Connect sync as the key integration solution. |
active-directory | Multi Tenant User Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md | + + Title: Common scenarios for using multi-tenant user management in Azure Active Directory +description: Learn about common scenarios where guest accounts can be used to configure user access across Azure Active Directory tenants +++++++ Last updated : 04/19/2023++++++# Multi-tenant user management scenarios ++This article is the second in a series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. The following articles in the series provide more information as described. ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. +- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants. ++The guidance helps you achieve a consistent state of user lifecycle management. Lifecycle management includes provisioning, managing, and deprovisioning users across tenants using the available Azure tools that include [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). ++This article describes three scenarios for which you can use multi-tenant user management features. ++- End user-initiated +- Scripted +- Automated ++## End user-initiated scenario ++In end user-initiated scenarios, resource tenant administrators delegate certain abilities to users in the tenant. Administrators enable end users to invite external users to the tenant, an app, or a resource. You can invite users from the home tenant or they can individually sign up. ++For example, a global professional services firm collaborates with subcontractors on projects. Subcontractors (external users) require access to the firm's applications and documents. Firm admins can delegate to their end users the ability to invite subcontractors or configure self-service for subcontractor resource access. ++### Provisioning accounts ++Here are the most widely used ways for end users to invite external users to access tenant resources. ++- [**Application-based invitations.**](../external-identities/o365-external-user.md) Microsoft applications (such as Teams and SharePoint) can enable external user invitations. Configure B2B invitation settings in both Azure AD B2B and in the relevant applications. +- [**MyApps.**](../manage-apps/my-apps-deployment-plan.md) Users can invite and assign external users to applications using MyApps. The user account must have [application self-service sign up](../manage-apps/manage-self-service-access.md) approver permissions. Group owners can invite external users to their groups.
+- [**Entitlement management.**](../governance/entitlement-management-overview.md) Enable admins or resource owners to create access packages with resources, allowed external organizations, external user expiration, and access policies. Publish access packages to enable external user self-service sign-up for resource access. +- [**Azure portal.**](../external-identities/add-users-administrator.md) End users with the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can sign in to the Azure portal and invite external users from the **Users** menu in Azure AD. +- [**Programmatic (PowerShell, Graph API).**](../external-identities/customize-invitation-api.md) End users with the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can use PowerShell or Graph API to invite external users. ++### Redeeming invitations ++When you provision accounts to access a resource, email invitations go to the invited user's email address. ++When an invited user receives an invitation, they can follow the link contained in the email to the redemption URL. In doing so, the invited user can approve or deny the invitation and, if necessary, create an external user account. ++Invited users can also try to directly access the resource, referred to as just-in-time (JIT) redemption, if either of the following scenarios is true. ++- The invited user already has an Azure AD or Microsoft account, or +- Admins have enabled [email one-time passcodes](../external-identities/one-time-passcode.md). ++During JIT redemption, the following considerations may apply. ++- If administrators haven't [suppressed consent prompts](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), the user must consent before accessing the resource. +- You can control whether an invitation email is sent when [using PowerShell](/powershell/module/azuread/new-azureadmsinvitation?view=azureadps-2.0&preserve-view=true) to invite users. +- You can allow or block invitations to external users from specific organizations by using an [allowlist or a blocklist](../external-identities/allow-deny-list.md). ++For more information, see [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md). ++### Enabling one-time passcode authentication ++In scenarios where you allow for ad hoc B2B, enable [email one-time passcode authentication](../external-identities/one-time-passcode.md). This feature authenticates external users when you can't authenticate them through other means, such as: ++- Azure AD. +- Microsoft account (MSA). +- Gmail account through Google Federation. +- Account from a SAML/WS-Fed IDP through Direct Federation. ++With one-time passcode authentication, there's no need to create a Microsoft account. When the external user redeems an invitation or accesses a shared resource, they receive a temporary code at their email address. They then enter the code to continue signing in. ++### Managing accounts ++In the end user-initiated scenario, the resource tenant administrator manages external user accounts in the resource tenant (the accounts aren't updated based on values in the home tenant). The only visible attributes received include the email address and display name. ++You can configure more attributes on external user objects to facilitate different scenarios (such as entitlement scenarios). You can include populating the address book with contact details.
For example, consider the following attributes. ++- **HiddenFromAddressListsEnabled** [ShowInAddressList] +- **FirstName** [GivenName] +- **LastName** [SurName] +- **Title** +- **Department** +- **TelephoneNumber** ++You might set these attributes to add external users to the global address list (GAL) and to people search (such as SharePoint People Picker). Other scenarios may require different attributes (such as setting entitlements and permissions for Access Packages, Dynamic Group Membership, and SAML Claims). ++By default, the GAL hides invited external users. Set external user attributes to be unhidden to include them in the unified GAL. The Microsoft Exchange Online section of [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) describes how you can lessen limits by creating external member users instead of external guest users. ++### Deprovisioning accounts ++End user-initiated scenarios decentralize access decisions, which can create the challenge of deciding when to remove an external user and their associated access. [Entitlement management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) let you review and remove existing external users and their resource access. ++When you invite users outside of entitlement management, you must create a separate process to review and manage their access. For example, if you directly invite an external user through SharePoint Online, it isn't in your entitlement management process. ++## Scripted scenario ++In the scripted scenario, resource tenant administrators deploy a scripted pull process to automate discovery and external user provisioning. ++For example, a company acquires a competitor. Each company has a single Azure AD tenant. They want the following Day One scenarios to work without users having to perform any invitation or redemption steps. All users must be able to: ++- Use single sign-on to all provisioned resources. +- Find each other and resources in a unified GAL. +- Determine each other's presence and initiate chat. +- Access applications based on dynamic group membership. ++In this scenario, each organization's tenant is the home tenant for its existing employees while being the resource tenant for the other organization's employees. ++### Provisioning accounts ++With [Delta Query](/graph/delta-query-overview), tenant admins can deploy a scripted pull process to automate discovery and identity provisioning to support resource access. This process checks the home tenant for new users. It uses the B2B Graph APIs to provision new users as external users in the resource tenant as illustrated in the following multi-tenant topology diagram. ++ Diagram Title: Multi-tenant topology diagram. On the left, a box labeled Company A contains internal users and external users. On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, an interaction goes from Company A to Company B with the label, Script to pull A users to B. Another interaction goes from Company B to Company A with the label, Script to pull B users to A. ++- Tenant administrators prearrange credentials and consent to allow each tenant to read. +- Tenant administrators automate enumeration and pulling scoped users to the resource tenant. +- Use Microsoft Graph API with consented permissions to read and provision users with the invitation API. 
+- Initial provisioning can read source attributes and apply them to the target user object. ++### Managing accounts ++The resource organization may augment profile data to support sharing scenarios by updating the user's metadata attributes in the resource tenant. However, if ongoing synchronization is necessary, then a synchronization-based solution might be a better option. ++### Deprovisioning accounts ++[Delta Query](/graph/delta-query-overview) can signal when an external user needs to be deprovisioned. [Entitlement management](../governance/entitlement-management-overview.md) and [access reviews](../governance/manage-guest-access-with-access-reviews.md) can provide a way to review and remove existing external users and their access to resources. ++If you invite users outside of entitlement management, create a separate process to review and manage external user access. For example, if you invite the external user directly through SharePoint Online, it isn't in your entitlement management process. ++## Automated scenario ++Synchronized sharing across tenants is the most complex of the patterns described in this article. This pattern enables more automated management and deprovisioning options than end user-initiated or scripted. ++In automated scenarios, resource tenant admins use an identity provisioning system to automate provisioning and deprovisioning processes. In scenarios within Microsoft's Commercial Cloud instance, we have [cross-tenant synchronization](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/seamless-application-access-and-lifecycle-management-for-multi/ba-p/3728752). In scenarios that span Microsoft Sovereign Cloud instances, you need other approaches because cross-tenant synchronization doesn't yet support cross-cloud. ++For example, within a Microsoft Commercial Cloud instance, a multinational or regional conglomerate has multiple subsidiaries with the following requirements. ++- Each has its own Azure AD tenant, and they need to work together. +- In addition to synchronizing new users among tenants, automatically synchronize attribute updates and automate deprovisioning. +- If an employee is no longer at a subsidiary, remove their account from all other tenants during the next synchronization. ++In an expanded, cross-cloud scenario, a Defense Industrial Base (DIB) contractor has a defense-based and commercial-based subsidiary. These subsidiaries have competing regulatory requirements: ++- The US defense business resides in a US Sovereign Cloud tenant such as Microsoft 365 US Government GCC High and Azure Government. +- The commercial business resides in a separate Azure AD tenant in Commercial such as an Azure AD environment running on the global Azure cloud. ++To act like a single company deployed into a cross-cloud architecture, all users synchronize to both tenants. This approach enables unified GAL availability across both tenants and can ensure that users automatically synchronized to both tenants carry the appropriate entitlements and restrictions for applications and content. Example requirements include: ++- US employees may have ubiquitous access to both tenants. +- Non-US employees show in the unified GAL of both tenants but don't have access to protected content in the GCC High tenant. ++This scenario requires automatic synchronization and identity management to configure users in both tenants while associating them with the proper entitlement and data protection policies.
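Whether it's driven by a script (the scripted scenario earlier in this article) or by a provisioning system, the underlying pull pattern is the same: read user changes from the home tenant with a delta query and create matching external users in the resource tenant through the invitation API. The following is a minimal sketch of that pattern; the access tokens are assumed to be acquired separately (for example, through a client credential flow), and the attribute selection is kept deliberately small:

```powershell
# Minimal sketch of the pull pattern: read user changes from the home tenant
# with a Microsoft Graph delta query, then invite new users into the resource
# tenant through the invitation API. $homeTenantToken and $resourceTenantToken
# are assumed to be valid app-only access tokens obtained separately.
$uri = "https://graph.microsoft.com/v1.0/users/delta?`$select=displayName,mail"

do {
    $page = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $homeTenantToken" }

    foreach ($user in $page.value) {
        if ($user.mail) {
            $body = @{
                invitedUserDisplayName  = $user.displayName
                invitedUserEmailAddress = $user.mail
                inviteRedirectUrl       = "https://myapps.microsoft.com"
                sendInvitationMessage   = $false
            } | ConvertTo-Json

            # The invitation API creates the external user object in the resource tenant.
            Invoke-RestMethod -Method Post `
                -Uri "https://graph.microsoft.com/v1.0/invitations" `
                -Headers @{ Authorization = "Bearer $resourceTenantToken"; "Content-Type" = "application/json" } `
                -Body $body
        }
    }
    $uri = $page.'@odata.nextLink'
} while ($uri)

# Save $page.'@odata.deltaLink' so the next run picks up only adds, updates, and deletes.
```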
++[Cross-cloud B2B](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/collaborate-securely-across-organizational-boundaries-and/ba-p/3094109) requires you to configure [Cross-Tenant Access Settings](../external-identities/cross-cloud-settings.md) for each organization with which you want to collaborate in the remote cloud instance. ++### Provisioning accounts ++This section describes three techniques for automating account provisioning in the automated scenario. ++#### Technique 1: Use the [built-in cross-tenant synchronization capability in Azure AD](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) ++This approach only works when all tenants that you need to synchronize are in the same cloud instance (such as Commercial to Commercial). ++#### Technique 2: Provision accounts with Microsoft Identity Manager ++Use an external Identity and Access Management (IAM) solution such as [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine. ++This advanced deployment uses MIM as a synchronization engine. MIM calls the [Microsoft Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud-hosted [Active Directory Synchronization Service](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Industry Solutions](https://www.microsoft.com/industrysolutions). There are non-Microsoft offerings that you can create from scratch with other IAM offerings (such as SailPoint, Omada, and OKTA). ++You perform a cloud-to-cloud synchronization of identity (users, contacts, and groups) from one tenant to another as illustrated in the following diagram. ++ Diagram Title: Cloud-to-cloud identity synchronization. On the left, a box labeled Company A contains internal users and external users. On the right, a box labeled Company B contains internal users and external users. Between Company A and Company B, sync engine interactions go from Company A to Company B and from Company B to Company A. ++Considerations that are outside the scope of this article include integration of on-premises applications. ++#### Technique 3: Provision accounts with Azure AD Connect ++This technique only applies for complex organizations that manage all identity in traditional Windows Server-based Active Directory Domain Services (AD DS). The approach uses Azure AD Connect as the synchronization engine as illustrated in the following diagram. ++ Diagram Title: Provision accounts with Azure AD Connect. The diagram shows four main components. A box on the left represents the Customer. A cloud shape on the right represents B2B Conversion. At the top center, a box containing a cloud shape represents Microsoft Commercial Cloud. At the bottom center, a box containing a cloud shape represents Microsoft US Government Sovereign Cloud. Inside the Customer box, a Windows Server Active Directory icon connects to two boxes, each with an Azure AD Connect label. The connections are dashed red lines with arrows at both ends and a refresh icon. Inside the Microsoft Commercial Cloud shape is another cloud shape that represents Microsoft Azure Commercial. Inside is another cloud shape that represents Azure Active Directory. 
To the right of the Microsoft Azure Commercial cloud shape is a box that represents Office 365 with a label, Public Multi-Tenant. A solid red line with arrows at both ends connects the Office 365 box with the Microsoft Azure Commercial cloud shape and a label, Hybrid Workloads. Two dashed lines connect from the Office 365 box to the Azure Active Directory cloud shape. One has an arrow on the end that connects to Azure Active Directory. The other has arrows on both ends. A dashed line with arrows on both ends connects the Azure Active Directory cloud shape to the top Customer Azure AD Connect box. A dashed line with arrows on both ends connects the Microsoft Commercial Cloud shape to the B2B Conversion cloud shape. Inside the Microsoft US Government Sovereign Cloud box is another cloud shape that represents Microsoft Azure Government. Inside is another cloud shape that represents Azure Active Directory. To the right of the Microsoft Azure Government cloud shape is a box that represents Office 365 with a label, US Gov GCC-High L4. A solid red line with arrows at both ends connects the Office 365 box with the Microsoft Azure Government cloud shape and a label, Hybrid Workloads. Two dashed lines connect from the Office 365 box to the Azure Active Directory cloud shape. One has an arrow on the end that connects to Azure Active Directory. The other has arrows on both ends. A dashed line with arrows on both ends connects the Azure Active Directory cloud shape to the bottom Customer Azure AD Connect box. A dashed line with arrows on both ends connects the Microsoft US Government Sovereign Cloud box to the B2B Conversion cloud shape. ++Unlike the MIM technique, all identity sources (users, contacts, and groups) come from traditional Windows Server-based Active Directory Domain Services (AD DS). The AD DS directory is typically an on-premises deployment for a complex organization that manages identity for multiple tenants. Cloud-only identity isn't in scope for this technique. All identities must be in AD DS to be in scope for synchronization. ++Conceptually, this technique synchronizes a user into a home tenant as an internal member user (default behavior). Alternatively, it may synchronize a user into a resource tenant as an external user (customized behavior). ++Microsoft supports this dual sync user technique with careful consideration of what modifications occur in the Azure AD Connect configuration. For example, if you make modifications to the wizard-driven setup configuration, you need to document the changes if you must rebuild the configuration during a support incident. ++Out of the box, Azure AD Connect can't synchronize an external user. You must augment it with an external process (such as a PowerShell script) to convert the users from internal to external accounts. ++Benefits of this technique include Azure AD Connect synchronizing identity with attributes stored in AD DS. Synchronization may include address book attributes, manager attributes, group memberships, and other hybrid identity attributes into all tenants within scope. It deprovisions identity in alignment with AD DS. It doesn't require a more complex IAM solution to manage the cloud identity for this specific task. ++There's a one-to-one relationship of Azure AD Connect per tenant. Each tenant has its own configuration of Azure AD Connect that you can individually alter to support member and/or external user account synchronization.
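One building block of the conversion process mentioned above is adjusting the **UserType** of the synchronized accounts so they receive guest-level permissions. The following is a minimal sketch only, assuming the AzureAD PowerShell module and a hypothetical scoping group; the full internal-to-external conversion involves more than this single attribute change (invitation state, error handling, and logging, for example):

```powershell
# Minimal sketch of a post-synchronization step: flip synchronized accounts
# from member to guest in the resource tenant. The "Synced-Partner-Users"
# scoping group is a hypothetical example, not part of the guidance above.
Connect-AzureAD

$group = Get-AzureADGroup -SearchString "Synced-Partner-Users"

Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true |
    Where-Object { $_.UserType -eq "Member" } |
    ForEach-Object {
        Set-AzureADUser -ObjectId $_.ObjectId -UserType "Guest"
    }
```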
++### Choosing the right topology ++Most customers use one of the following topologies in automated scenarios. ++- A mesh topology enables sharing of all resources in all tenants. You create users from other tenants in each resource tenant as external users. +- A single resource tenant topology uses a single tenant (the resource tenant), in which users from other tenants are external users. ++Reference the following table as a decision tree while you design your solution. Following the table, diagrams illustrate both topologies to help you determine which is right for your organization. ++#### Comparison of mesh versus single resource tenant topologies ++| Consideration | Mesh topology | Single resource tenant | +| - | - |-| +| Each company has separate Azure AD tenant with users and resources | Yes | Yes | +| **Resource location and collaboration** | | | +| Shared apps and other resources remain in their current home tenant | Yes | No. You can share only apps and other resources in the resource tenant. You can't share apps and other resources remaining in other tenants. | +| All viewable in individual company's GALs (Unified GAL) | Yes| No | +| **Resource access and administration** | | | +| You can share ALL applications connected to Azure AD among all companies. | Yes | No. Only applications in the resource tenant are shared. You can't share applications remaining in other tenants. | +| Global resource administration | Continue at tenant level. | Consolidated in the resource tenant. | +| Licensing: Office 365 SharePoint Online, unified GAL, Teams access all support guests; however, other Exchange Online scenarios don't. | Continues at tenant level. | Continues at tenant level. | +| Licensing: [Azure AD (premium)](../external-identities/external-identities-pricing.md) | First 50 K Monthly Active Users are free (per tenant). | First 50 K Monthly Active Users are free. | +| Licensing: SaaS apps | Remain in individual tenants, may require licenses per user per tenant. | All shared resources reside in the single resource tenant. You can investigate consolidating licenses to the single tenant if appropriate. | ++#### Mesh topology ++The following diagram illustrates mesh topology. ++ Diagram Title: Mesh topology. On the top left, a box labeled Company A contains internal users and external users. On the top right, a box labeled Company B contains internal users and external users. On the bottom left, a box labeled Company C contains internal users and external users. On the bottom right, a box labeled Company D contains internal users and external users. Between Company A and Company B and between Company C and Company D, sync engine interactions go between the companies on the left and the companies on the right. ++In a mesh topology, every user in each home tenant synchronizes to each of the other tenants, which become resource tenants. ++- You can share any resource within a tenant with external users. +- Each organization can see all users in the conglomerate. In the above diagram, there are four unified GALs, each of which contains the home users and the external users from the other three tenants. ++[Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides information on provisioning, managing, and deprovisioning users in this scenario. ++#### Mesh topology for cross-cloud ++You can use the mesh topology in as few as two tenants, such as in the scenario for a DIB defense contractor straddling a cross-sovereign cloud solution. 
As with the mesh topology, each user in each home tenant synchronizes to the other tenant, which becomes a resource tenant. In the [Technique 3 section](#technique-3-provision-accounts-with-azure-ad-connect) diagram, the public Commercial tenant internal user synchronizes to the US sovereign GCC High tenant as an external user account. At the same time, the GCC High internal user synchronizes to Commercial as an external user account. ++The diagram also illustrates data storage locations. Data categorization and compliance is outside the scope of this article, but you can include entitlements and restrictions to applications and content. Content may include locations where an internal user's user-owned data resides (such as data stored in an Exchange Online mailbox or OneDrive for Business). The content may be in their home tenant and not in the resource tenant. Shared data may reside in either tenant. You can restrict access to the content through access control and conditional access policies. ++#### Single resource tenant topology ++The following diagram illustrates single resource tenant topology. ++ Diagram Title: Single resource tenant topology. At the top, a box that represents Company A contains three boxes. On the left, a box represents all shared resources. In the middle, a box represents internal users. On the right, a box represents external users. Below the Company A box is a box that represents the sync engine. Three arrows connect the sync engine to Company A. Below the sync engine box, at the bottom of the diagram, are three boxes that represent Company B, Company C, and Company D. An arrow connects each of them to the sync engine box. Inside each of the bottom company boxes is a label, Microsoft Graph API Exchange online PowerShell, and icons that represent internal users. ++In a single resource tenant topology, users and their attributes synchronize to the resource tenant (Company A in the above diagram). ++- All resources shared among the member organizations must reside in the single resource tenant. If multiple subsidiaries have subscriptions to the same SaaS apps, there's an opportunity to consolidate those subscriptions. +- Only the GAL in the resource tenant displays users from all companies. ++### Managing accounts ++This solution detects and syncs attribute changes from source tenant users to resource tenant external users. You can use these attributes to make authorization decisions (such as when you're using dynamic groups). ++### Deprovisioning accounts ++Automation detects object deletion in the source environment and deletes the associated external user object in the target environment. ++[Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides additional information on provisioning, managing, and deprovisioning users in this scenario. ++## Next steps ++- [Multi-tenant user management introduction](multi-tenant-user-management-introduction.md) is the first in the series of articles that provide guidance for configuring and providing user lifecycle management in Azure Active Directory (Azure AD) multi-tenant environments. +- [Common considerations for multi-tenant user management](multi-tenant-common-considerations.md) provides guidance for these considerations: cross-tenant synchronization, directory object, Azure AD Conditional Access, additional access control, and Office 365. 
+- [Common solutions for multi-tenant user management](multi-tenant-common-solutions.md) when single tenancy doesn't work for your scenario, this article provides guidance for these challenges: automatic user lifecycle management and resource allocation across tenants, sharing on-premises apps across tenants. |
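To complement the deprovisioning automation described in the automated scenario, a delta query against the home tenant marks deleted users with an `@removed` annotation, and the automation can then remove the corresponding external user object from the resource tenant. The following is a minimal sketch, assuming a `$deltaLink` saved from a previous run, a valid `$homeTenantToken`, and a hypothetical `$idMap` hashtable (home-tenant user ID to resource-tenant external user ID) maintained by the provisioning step:

```powershell
# Minimal sketch of deprovisioning: detect deletions in the home tenant with a
# delta query, then remove the matching external user from the resource tenant.
Connect-AzureAD   # resource tenant context for Remove-AzureADUser

$page = Invoke-RestMethod -Uri $deltaLink -Headers @{ Authorization = "Bearer $homeTenantToken" }

foreach ($change in $page.value) {
    if ($change.'@removed' -and $idMap.ContainsKey($change.id)) {
        Remove-AzureADUser -ObjectId $idMap[$change.id]
    }
}
```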
active-directory | Multilateral Federation Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-baseline.md | + + Title: University multilateral federation baseline design +description: Learn about a baseline design for a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Baseline architecture overview ++Microsoft often speaks with research universities that operate in hybrid environments in which applications are either cloud based or hosted on-premises. In both cases, applications can use various authentication protocols. In some cases, these protocols are reaching end of life or aren't providing the required level of security. ++[![Diagram of a typical university architecture, including cloud and on-premises areas with trust, synchronization, and credential validation paths.](media/multilateral-federation-baseline/typical-baseline-environment.png)](media/multilateral-federation-baseline/typical-baseline-environment.png#lightbox) ++Applications drive much of the need for different authentication protocols and different identity management (IdM) mechanisms. ++In research university environments, research apps often drive IdM requirements. A university might use a federation provider, such as Shibboleth, as a primary identity provider (IdP). If so, Azure Active Directory (Azure AD) is often configured to federate with Shibboleth. If Microsoft 365 apps are also in use, Azure AD enables you to configure integration. ++Applications used in research universities operate in various parts of the overall IT footprint: ++* Research and multilateral federation applications are available through InCommon and eduGAIN. ++* Library applications provide access to electronic journals and other e-content providers. ++* Some applications use legacy authentication protocols such as Central Authentication Service to enable single sign-on. ++* Student and faculty applications often use multiple authentication mechanisms. For example, some are integrated with Shibboleth or other federation providers, whereas others are integrated with Azure AD. ++* Microsoft 365 applications are integrated with Azure AD. ++* Windows Server Active Directory might be in use and synchronized with Azure AD. ++* Lightweight Directory Access Protocol (LDAP) is in use at many universities that might have an external LDAP directory or identity registry. These registries are often used to house confidential attributes, role hierarchy information, and even certain types of users, such as applicants. ++* On-premises Active Directory, or an external LDAP directory, is often used to enable single-credential sign-in for non-web applications and various non-Microsoft operating system sign-ins. ++## Baseline architecture challenges ++Baseline architectures often evolve over time, introducing complexity and rigidness to the design and the ability to update. Some of the challenges with using the baseline architecture include: ++* **Hard to react to new requirements**: Having a complex environment makes it hard to quickly adapt and keep up with the most recent regulations and requirements. For example, if you have apps in lots of locations, and these apps are connected in different ways with different IdMs, you have to decide where to locate multifactor authentication (MFA) services and how to enforce MFA. ++ Higher education also experiences fragmented service ownership. 
The people responsible for key services such as enterprise resource planning, learning management systems, division, and department solutions might resist efforts to change or modify the systems that they operate. ++* **Can't take advantage of all Microsoft 365 capabilities for all apps** (for example, Intune, Conditional Access, passwordless): Many universities want to move toward the cloud and use their existing investments in Azure AD. However, with a different federation provider as their primary IdP, universities can't take advantage of all the Microsoft 365 capabilities for the rest of their apps. ++* **Complexity of a solution**: There are many components to manage. Some components are in the cloud, and some are on-premises or in infrastructure as a service (IaaS) instances. Apps are operated in many places. From a user perspective, this experience can be disjointed. For example, users sometime see a Shibboleth sign-in page and other times see an Azure AD sign-in page. ++We present three solutions to solve these challenges, while also addressing the following requirements: ++* Ability to participate in multilateral federations such as InCommon and eduGAIN ++* Ability to support all types of apps (even apps that require legacy protocols) ++* Ability to support external directories and attribute stores ++We present the three solutions in order, from most preferred to least preferred. Each satisfies requirements but introduces tradeoff decisions that are expected in a complex architecture. Based on your requirements and starting point, select the one that best suits your environment. We also provide a decision tree to aid in this decision. ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation introduction](multilateral-federation-introduction.md) ++[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) ++[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) ++[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) ++[Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
active-directory | Multilateral Federation Decision Tree | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-decision-tree.md | + + Title: University multilateral federation decision tree +description: Use this decision tree to help design a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Decision tree ++Use this decision tree to determine the multilateral federation solution that's best suited for your environment. ++[![Diagram that shows a decision matrix with key criteria to help choose between three solutions.](media/multilateral-federation-decision-tree/tradeoff-decision-matrix.png)](media/multilateral-federation-decision-tree/tradeoff-decision-matrix.png#lightbox) ++## Migration resources ++The following resources can help with your migration to the solutions covered in this content. ++| Migration resource | Description | Relevant for migrating to... | +| - | - | - | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure Active Directory (Azure AD) | Solution 1, Solution 2, and Solution 3 | +| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)| Overview of the Azure AD custom claims provider | Solution 1 | +| [Custom security attributes](../fundamentals/custom-security-attributes-manage.md) | Steps for managing access to custom security attributes | Solution 1 | +| [Azure AD SSO integration with Cirrus Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Bridge with Azure AD | Solution 1 | +| [Cirrus Bridge overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Cirrus Identity documentation for configuring Cirrus Bridge with Azure AD | Solution 1 | +| [Configuring Shibboleth as a SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Shibboleth article that describes how to use the SAML proxying feature to connect the Shibboleth identity provider (IdP) to Azure AD | Solution 2 | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | Solution 1 and Solution 2 | ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation introduction](multilateral-federation-introduction.md) ++[Multilateral federation baseline design](multilateral-federation-baseline.md) ++[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) ++[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) ++[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) |
active-directory | Multilateral Federation Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-introduction.md | + + Title: University multilateral federation solution design +description: Learn how to design a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Introduction to multilateral federation solutions ++Research universities need to collaborate with one another. To accomplish collaboration, they require multilateral federation to enable authentication and access between universities globally. ++## Challenges with multilateral federation solutions ++Universities face many challenges. For example, a university might use one identity management system and a set of protocols. Other universities might use a different set of technologies, depending on their requirements. In general, universities can: ++* Use different identity management systems. ++* Use different protocols. ++* Use customized solutions. ++* Need support for a long history of legacy functionality. ++* Need support for solutions that are built in different IT generations. ++Many universities are also adopting the Microsoft 365 suite of productivity and collaboration tools. These tools rely on Azure Active Directory (Azure AD) for identity management, which enables universities to configure: ++* Single sign-on across multiple applications. ++* Modern security controls, including passwordless authentication, multifactor authentication, adaptive Conditional Access, and identity protection. ++* Enhanced reporting and monitoring. ++Because Azure AD doesn't natively support multilateral federation, this content describes three solutions for federating authentication and access between universities with a typical research university architecture. These scenarios mention non-Microsoft products for illustrative purposes only and to represent the broader class of products. For example, this content uses Shibboleth as an example of a federation provider. ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation baseline design](multilateral-federation-baseline.md) ++[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) ++[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) ++[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) ++[Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
active-directory | Multilateral Federation Solution One | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-solution-one.md | + + Title: 'Solution 1: Azure AD with Cirrus Bridge' +description: This article describes design considerations for using Azure AD with Cirrus Bridge as a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Solution 1: Azure AD with Cirrus Bridge ++Solution 1 uses Azure Active Directory (Azure AD) as the primary identity provider (IdP) for all applications. A managed service provides multilateral federation. In this example, Cirrus Bridge is the managed service for integration of Central Authentication Service (CAS) and multilateral federation apps. ++[![Diagram that shows Azure AD integration with various application environments using Cirrus to provide a CAS bridge and a SAML bridge.](media/multilateral-federation-solution-one/azure-ad-cirrus-bridge.png)](media/multilateral-federation-solution-one/azure-ad-cirrus-bridge.png#lightbox) ++If you're also using an on-premises Active Directory instance, you can [configure Active Directory](../hybrid/whatis-hybrid-identity.md) with hybrid identities. Implementing Azure AD with Cirrus Bridge provides: ++* **Security Assertion Markup Language (SAML) bridge**: Configure multilateral federation and participation in InCommon and eduGAIN. You can also use the SAML bridge to configure Azure AD Conditional Access policies, app assignment, governance, and other features for each multilateral federation app. ++* **CAS bridge**: Provide protocol translation so that on-premises CAS apps can authenticate with Azure AD. You can use the CAS bridge to configure Azure AD Conditional Access policies, app assignment, and governance for all CAS apps as a whole. ++When you implement Azure AD with Cirrus Bridge, you can take advantage of more capabilities in Azure AD: ++* **Custom claims provider support**: With the [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md), you can use an external attribute store (like an external LDAP directory) to add claims into tokens for individual apps. The custom claims provider uses a custom extension that calls an external REST API to fetch claims from external systems. ++* **Custom security attributes**: You can add custom attributes to objects in the directory and control who can read them. [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) enable you to store more of your attributes directly in Azure AD. ++## Advantages ++Here are some of the advantages of implementing Azure AD with Cirrus Bridge: ++* **Seamless cloud authentication for all apps** ++ * All apps authenticate through Azure AD. ++ * Eliminating on-premises identity components in favor of a managed service can potentially lower your operational and administrative costs, reduce security risks, and free up resources for other efforts. ++* **Streamlined configuration, deployment, and support model** ++ * [Cirrus Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) is registered in the Azure AD app gallery. ++ * You benefit from an established process for configuring and setting up the bridge solution. ++ * Cirrus Identity provides continuous support.
++* **Conditional Access support for multilateral federation apps** ++ * Implementation of Conditional Access controls helps you comply with [NIH](https://auth.nih.gov/CertAuthV3/forms/help/compliancecheckhelp.html) and [REFEDS](https://refeds.org/category/research-and-scholarship) requirements. ++ * This solution is the only architecture that enables you to configure granular Azure AD Conditional Access for both multilateral federation apps and CAS apps. ++* **Use of other Azure AD-related solutions for all apps** ++ * You can use Intune and Azure AD join for device management. ++ * Azure AD join enables you to use Windows Autopilot, Azure AD Multi-Factor Authentication, and passwordless features. Azure AD join supports achieving a Zero Trust posture. ++ > [!NOTE] + > Switching to Azure AD Multi-Factor Authentication might help you save significant costs over other solutions that you have in place. ++## Considerations and trade-offs ++Here are some of the trade-offs of using this solution: ++* **Limited ability to customize the authentication experience**: This scenario provides a managed solution. It might not offer you the flexibility or granularity to build a custom solution by using federation provider products. ++* **Limited third-party MFA integration**: The number of integrations available to third-party MFA solutions might be limited. ++* **One-time integration effort required**: To streamline integration, you need to perform a one-time migration of all student and faculty apps to Azure AD. You also need to set up Cirrus Bridge. ++* **Subscription required for Cirrus Bridge**: The subscription fee for Cirrus Bridge is based on anticipated annual authentication usage of the bridge. ++## Migration resources ++The following resources help with your migration to this solution architecture. ++| Migration resource | Description | +| - | - | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | +| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md)| Overview of the Azure AD custom claims provider | +| [Custom security attributes](../fundamentals/custom-security-attributes-manage.md) | Steps for managing access to custom security attributes | +| [Azure AD single sign-on integration with Cirrus Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Bridge with Azure AD | +| [Cirrus Bridge overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Cirrus Identity documentation for configuring Cirrus Bridge with Azure AD | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation introduction](multilateral-federation-introduction.md) ++[Multilateral federation baseline design](multilateral-federation-baseline.md) ++[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) ++[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) ++[Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
active-directory | Multilateral Federation Solution Three | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-solution-three.md | + + Title: 'Solution 3: Azure AD with AD FS and Shibboleth' +description: This article describes design considerations for using Azure AD with AD FS and Shibboleth as a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Solution 3: Azure AD with AD FS and Shibboleth ++In Solution 3, the federation provider is the primary identity provider (IdP). In this example, Shibboleth is the federation provider for the integration of multilateral federation apps, on-premises Central Authentication Service (CAS) apps, and any Lightweight Directory Access Protocol (LDAP) directories. ++[![Diagram that shows a design integrating Shibboleth, Active Directory Federation Services, and Azure Active Directory.](media/multilateral-federation-solution-three/shibboleth-adfs-azure-ad.png)](media/multilateral-federation-solution-three/shibboleth-adfs-azure-ad.png#lightbox) ++In this scenario, Shibboleth is the primary IdP. Participation in multilateral federations (for example, with InCommon) is done through Shibboleth, which natively supports this integration. On-premises CAS apps and the LDAP directory are also integrated with Shibboleth. ++Student apps, faculty apps, and Microsoft 365 apps are integrated with Azure Active Directory (Azure AD). Any on-premises instance of Active Directory is synced with Azure AD. Active Directory Federation Services (AD FS) provides integration with third-party multifactor authentication (MFA). AD FS performs protocol translation and enables certain Azure AD features, such as Azure AD join for device management, Windows Autopilot, and passwordless features. ++## Advantages ++Here are some of the advantages of using this solution: ++* **Customized authentication**: You can customize the experience for multilateral federation apps through Shibboleth. ++* **Ease of execution**: The solution is simple to implement in the short term for institutions that already use Shibboleth as their primary IdP. You need to migrate student and faculty apps to Azure AD and add an AD FS instance. ++* **Minimal disruption**: The solution allows third-party MFA. You can keep existing MFA solutions, such as Duo, in place until you're ready for an update. ++## Considerations and trade-offs ++Here are some of the trade-offs of using this solution: ++* **Higher complexity and security risk**: An on-premises footprint might mean higher complexity for the environment and extra security risks, compared to a managed service. Increased overhead and fees might also be associated with managing on-premises components. ++* **Suboptimal authentication experience**: For multilateral federation and CAS apps, there's no cloud-based authentication mechanism and there might be multiple redirects. ++* **No Azure AD Multi-Factor Authentication support**: This solution doesn't enable Azure AD Multi-Factor Authentication support for multilateral federation or CAS apps. You might miss potential cost savings. ++* **No granular Conditional Access support**: The lack of granular Conditional Access support limits your ability to make granular decisions. ++* **Significant ongoing staff allocation**: IT staff must maintain infrastructure and software for the authentication solution. Any staff attrition might introduce risk. 
++## Migration resources ++The following resources can help with your migration to this solution architecture. ++| Migration resource | Description | +| - | - | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation introduction](multilateral-federation-introduction.md) ++[Multilateral federation baseline design](multilateral-federation-baseline.md) ++[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) ++[Multilateral federation Solution 2: Azure AD with Shibboleth as a SAML proxy](multilateral-federation-solution-two.md) ++[Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
active-directory | Multilateral Federation Solution Two | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multilateral-federation-solution-two.md | + + Title: 'Solution 2: Azure AD with Shibboleth as a SAML proxy' +description: This article describes design considerations for using Azure AD with Shibboleth as a SAML proxy as a multilateral federation solution for universities. +++++++ Last updated : 04/01/2023++++++# Solution 2: Azure AD with Shibboleth as a SAML proxy ++In Solution 2, Azure Active Directory (Azure AD) acts as the primary identity provider (IdP). The federation provider acts as a Security Assertion Markup Language (SAML) proxy to the Central Authentication Service (CAS) apps and the multilateral federation apps. In this example, [Shibboleth acts as the SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) to provide a reference link. ++[![Diagram that shows Shibboleth used as a SAML proxy provider.](media/multilateral-federation-solution-two/azure-ad-shibboleth-as-sp-proxy.png)](media/multilateral-federation-solution-two/azure-ad-shibboleth-as-sp-proxy.png#lightbox) ++Because Azure AD is the primary IdP, all student and faculty apps are integrated with Azure AD. All Microsoft 365 apps are also integrated with Azure AD. If Azure Active Directory Domain Services is in use, it also is synchronized with Azure AD. ++The SAML proxy feature of Shibboleth integrates with Azure AD. In Azure AD, Shibboleth appears as a non-gallery enterprise application. Universities can get single sign-on (SSO) for their CAS apps and can participate in the InCommon environment. Additionally, Shibboleth provides integration for Lightweight Directory Access Protocol (LDAP) directory services. ++## Advantages ++Advantages of using this solution include: ++* **Cloud authentication for all apps**: All apps authenticate through Azure AD. ++* **Ease of execution**: This solution provides short-term ease of execution for universities that are already using Shibboleth. ++## Considerations and trade-offs ++Here are some of the trade-offs of using this solution: ++* **Higher complexity and security risk**: An on-premises footprint might mean higher complexity for the environment and extra security risks, compared to a managed service. Increased overhead and fees might also be associated with managing on-premises components. ++* **Suboptimal authentication experience**: For multilateral federation and CAS apps, the authentication experience for users might not be seamless because of redirects through Shibboleth. The options for customizing the authentication experience for users are limited. ++* **Limited third-party multifactor authentication (MFA) integration**: The number of integrations available to third-party MFA solutions might be limited. ++* **No granular Conditional Access support**: Without granular Conditional Access support, you have to choose between the least common denominator (optimize for less friction but have limited security controls) or the highest common denominator (optimize for security controls at the expense of user friction). Your ability to make granular decisions is limited. ++## Migration resources ++The following resources can help with your migration to this solution architecture. 
++| Migration resource | Description | +| - | - | +| [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD | +| [Configuring Shibboleth as a SAML proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Shibboleth article that describes how to use the SAML proxying feature to connect the Shibboleth IdP to Azure AD | +| [Azure AD Multi-Factor Authentication deployment considerations](../authentication/howto-mfa-getstarted.md) | Guidance for configuring Azure AD Multi-Factor Authentication | ++## Next steps ++See these related articles about multilateral federation: ++[Multilateral federation introduction](multilateral-federation-introduction.md) ++[Multilateral federation baseline design](multilateral-federation-baseline.md) ++[Multilateral federation Solution 1: Azure AD with Cirrus Bridge](multilateral-federation-solution-one.md) ++[Multilateral federation Solution 3: Azure AD with AD FS and Shibboleth](multilateral-federation-solution-three.md) ++[Multilateral federation decision tree](multilateral-federation-decision-tree.md) |
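In this design, the Shibboleth proxy surfaces in Azure AD as a custom (non-gallery) enterprise application configured for SAML. The following is a minimal, hedged sketch of how such an application might be created with the Microsoft Graph PowerShell SDK; the display name is illustrative, and the template ID shown is the commonly documented template for custom applications.

```powershell
# Minimal sketch: register a custom (non-gallery) SAML application in Azure AD.
# Assumes the Microsoft Graph PowerShell SDK with Application.ReadWrite.All consented.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# 8adf8e6e-67b2-4cf2-a259-e3dc5476c621 is the commonly documented template ID
# for custom (non-gallery) applications; the display name is illustrative.
$app = Invoke-MgInstantiateApplicationTemplate `
    -ApplicationTemplateId "8adf8e6e-67b2-4cf2-a259-e3dc5476c621" `
    -DisplayName "Shibboleth SAML proxy"

# Switch the resulting service principal to SAML-based single sign-on.
Update-MgServicePrincipal -ServicePrincipalId $app.ServicePrincipal.Id `
    -PreferredSingleSignOnMode "saml"
```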
active-directory | Ops Guide Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-auth.md | + + Title: Azure Active Directory Authentication management operations reference guide +description: This operations reference guide describes the checks and actions you should take to secure authentication management ++++tags: azuread ++++ Last updated : 08/17/2022++++# Azure Active Directory Authentication management operations reference guide ++This section of the [Azure AD operations reference guide](ops-guide-intro.md) describes the checks and actions you should take to secure and manage credentials, define authentication experience, delegate assignment, measure usage, and define access policies based on enterprise security posture. ++> [!NOTE] +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. ++## Key operational processes ++### Assign owners to key tasks ++Managing Azure Active Directory requires the continuous execution of key operational tasks and processes, which may not be part of a rollout project. It's still important you set up these tasks to optimize your environment. The key tasks and their recommended owners include: ++| Task | Owner | +| :- | :- | +| Manage lifecycle of single sign-on (SSO) configuration in Azure AD | IAM Operations Team | +| Design conditional access policies for Azure AD applications | InfoSec Architecture Team | +| Archive sign-in activity in a SIEM system | InfoSec Operations Team | +| Archive risk events in a SIEM system | InfoSec Operations Team | +| Triage and investigate security reports | InfoSec Operations Team | +| Triage and investigate risk events | InfoSec Operations Team | +| Triage and investigate users flagged for risk and vulnerability reports from Azure AD Identity Protection | InfoSec Operations Team | ++> [!NOTE] +> Azure AD Identity Protection requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Azure AD Free and Azure AD Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). ++As you review your list, you may find you need to either assign an owner for tasks that are missing an owner or adjust ownership for tasks with owners that aren't aligned with the recommendations above. ++#### Owner recommended reading ++- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md) ++## Credentials management ++### Password policies ++Managing passwords securely is one of the most critical parts of identity and access management and often the biggest target of attacks. Azure AD supports several features that can help prevent an attack from being successful. 
++Use the table below to find the recommended solution for mitigating the issue that needs to be addressed: ++| Issue | Recommendation | +| :- | :- | +| No mechanism to protect against weak passwords | Enable Azure AD [self-service password reset (SSPR)](../authentication/concept-sspr-howitworks.md) and [password protection](../authentication/concept-password-ban-bad-on-premises.md) | +| No mechanism to detect leaked passwords | Enable [password hash sync](../hybrid/how-to-connect-password-hash-synchronization.md) (PHS) to gain insights | +| Using AD FS and unable to move to managed authentication | Enable [AD FS Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) and / or [Azure AD Smart Lockout](../authentication/howto-password-smart-lockout.md) | +| Password policy uses complexity-based rules such as length, multiple character sets, or expiration | Reconsider in favor of [Microsoft Recommended Practices](https://www.microsoft.com/research/publication/password-guidance/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F265143%2Fmicrosoft_password_guidance.pdf), switch your approach to password management, and deploy [Azure AD password protection](../authentication/concept-password-ban-bad.md). | +| Users aren't registered to use multi-factor authentication (MFA) | [Register all users' security information](../identity-protection/howto-identity-protection-configure-mfa-policy.md) so it can be used as a mechanism to verify each user's identity along with their password | +| There's no revocation of passwords based on user risk | Deploy Azure AD [Identity Protection user risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) to force password changes on leaked credentials using SSPR | +| There's no smart lockout mechanism to protect against malicious authentication attempts from bad actors coming from identified IP addresses | Deploy cloud-managed authentication with either password hash sync or [pass-through authentication](../hybrid/how-to-connect-pta-quick-start.md) (PTA) | ++#### Password policies recommended reading ++- [Azure AD and AD FS best practices: Defending against password spray attacks - Enterprise Mobility + Security](https://cloudblogs.microsoft.com/enterprisemobility/2018/03/05/azure-ad-and-adfs-best-practices-defending-against-password-spray-attacks/) ++### Enable self-service password reset and password protection ++The need for users to change or reset their passwords is one of the biggest sources of help desk call volume and cost. In addition to cost, changing the password as a tool to mitigate user risk is a fundamental step in improving the security posture of your organization. ++At a minimum, it's recommended you deploy Azure AD [self-service password reset](../authentication/concept-sspr-howitworks.md) (SSPR) and on-premises [password protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) to: ++- Deflect help desk calls. +- Replace the use of temporary passwords. +- Replace any existing self-service password management solution that relies on an on-premises solution. +- [Eliminate weak passwords](../authentication/concept-password-ban-bad.md) in your organization. ++> [!NOTE] +> For organizations with an Azure AD Premium P2 subscription, it is recommended to deploy SSPR and use it as part of an [Identity Protection User Risk Policy](../identity-protection/howto-identity-protection-configure-risk-policies.md). 
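To gauge readiness before you require combined registration, you can pull the registration-details report. The following is a minimal sketch that assumes the Microsoft Graph PowerShell SDK with suitable reporting permissions consented; the export path is illustrative.

```powershell
# Minimal sketch: find users who haven't registered for SSPR or MFA yet.
# Assumes the Microsoft Graph PowerShell SDK and consented reporting permissions.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgReportAuthenticationMethodUserRegistrationDetail -All |
    Where-Object { -not $_.IsSsprRegistered -or -not $_.IsMfaRegistered } |
    Select-Object UserPrincipalName, IsSsprRegistered, IsMfaRegistered, IsMfaCapable |
    Export-Csv -Path .\registration-gaps.csv -NoTypeInformation   # illustrative path
```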
++### Strong credential management ++Passwords by themselves aren't secure enough to prevent bad actors from gaining access to your environment. At a minimum, any user with a privileged account must be enabled for multi-factor authentication (MFA). Ideally, you should enable [combined registration](../authentication/concept-registration-mfa-sspr-combined.md) and require all users to register for MFA and SSPR using the [combined registration experience](https://support.microsoft.com/account-billing/set-up-your-security-info-from-a-sign-in-prompt-28180870-c256-4ebf-8bd7-5335571bf9a8). Eventually, we recommend you adopt a strategy to [provide resilience](../authentication/concept-resilient-controls.md) to reduce the risk of lockout due to unforeseen circumstances. ++![Combined user experience flow](./media/ops-guide-auth/ops-img4.png) ++### On-premises outage authentication resiliency ++In addition to the benefits of simplicity and enabling leaked credential detection, Azure AD Password Hash Sync (PHS) and Azure AD MFA allow users to access SaaS applications and Microsoft 365 in spite of on-premises outages due to cyberattacks such as [NotPetya](https://www.microsoft.com/security/blog/2018/02/05/overview-of-petya-a-rapid-cyberattack/). It's also possible to enable PHS in conjunction with federation; enabling PHS provides an authentication fallback when federation services aren't available. ++If your organization lacks an on-premises outage resiliency strategy, or has one that isn't integrated with Azure AD, you should deploy Azure AD PHS and define a disaster recovery plan that includes PHS. Enabling Azure AD PHS will allow users to authenticate against Azure AD should your on-premises Active Directory be unavailable. ++![password hash sync flow](./media/ops-guide-auth/ops-img5.png) ++To better understand your authentication options, see [Choose the right authentication method for your Azure Active Directory hybrid identity solution](../hybrid/choose-ad-authn.md). ++### Programmatic usage of credentials ++Azure AD scripts using PowerShell or applications using the Microsoft Graph API require secure authentication. Poor credential management when executing those scripts and tools increases the risk of credential theft. If you're using scripts or applications that rely on hard-coded passwords or password prompts, first review the passwords in config files or source code, then replace those dependencies with Azure managed identities, Integrated Windows Authentication, or [certificates](../reports-monitoring/tutorial-access-api-with-certificates.md) whenever possible. For applications where the previous solutions aren't possible, consider using [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). ++If you determine that there are service principals with password credentials and you're unsure how those credentials are secured by scripts or applications, contact the application owners to better understand usage patterns. ++## Authentication experience ++### On-premises authentication ++Federated authentication with Integrated Windows Authentication (IWA), or managed authentication (password hash sync or pass-through authentication) with Seamless Single Sign-On (SSO), provides the best user experience when users are inside the corporate network with line of sight to on-premises domain controllers. 
It minimizes credential prompt fatigue and reduces the risk of users falling prey to phishing attacks. If you're already using cloud-managed authentication with PHS or PTA, but users still need to type in their password when authenticating on-premises, then you should immediately [deploy Seamless SSO](../hybrid/how-to-connect-sso.md). On the other hand, if you're currently federated with plans to eventually migrate to cloud-managed authentication, then you should implement Seamless SSO as part of the migration project. ++### Device trust access policies ++Like a user in your organization, a device is a core identity you want to protect. You can use a device's identity to protect your resources at any time and from any location. Authenticating the device and accounting for its trust type improves your security posture and usability by: ++- Avoiding friction, for example, with MFA, when the device is trusted +- Blocking access from untrusted devices +- Providing [seamless single sign-on to on-premises resources](../devices/device-sso-to-on-premises-resources.md) for Windows 10 devices ++You can carry out this goal by bringing device identities into Azure AD and managing them there by using one of the following methods: ++- Organizations can use [Microsoft Intune](/intune/what-is-intune) to manage the device and enforce compliance policies, attest device health, and set conditional access policies based on whether the device is compliant. Microsoft Intune can manage iOS devices, Mac desktops (via Jamf integration), Windows desktops (natively using Mobile Device Management for Windows 10, and co-management with Microsoft Configuration Manager), and Android mobile devices. +- [Hybrid Azure AD join](../devices/hybrid-azuread-join-managed-domains.md) provides management with Group Policies or Microsoft Configuration Manager in an environment with Active Directory domain-joined devices. Organizations can deploy a managed environment either through PHS or PTA with Seamless SSO. Bringing your devices to Azure AD maximizes user productivity through SSO across your cloud and on-premises resources while enabling you to secure access to those resources with [Conditional Access](../conditional-access/overview.md) at the same time. ++If you have domain-joined Windows devices that aren't registered in the cloud, or domain-joined Windows devices that are registered in the cloud but without conditional access policies, then you should register the unregistered devices and, in either case, [use Hybrid Azure AD join as a control](../conditional-access/require-managed-devices.md) in your conditional access policies. ++![A screenshot of grant in conditional access policy requiring hybrid device](./media/ops-guide-auth/ops-img6.png) ++If you're managing devices with MDM or Microsoft Intune, but not using device controls in your conditional access policies, then we recommend using [Require device to be marked as compliant](../conditional-access/require-managed-devices.md#require-device-to-be-marked-as-compliant) as a control in those policies. 
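As one way to act on that recommendation, the sketch below creates such a policy in report-only mode with the Microsoft Graph PowerShell SDK so you can evaluate impact before enforcement; the policy name and the excluded break-glass group are illustrative assumptions.

```powershell
# Minimal sketch: a Conditional Access policy that requires a compliant or
# hybrid Azure AD joined device, created in report-only mode so impact can be
# evaluated first. Names and scoping are illustrative assumptions.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require compliant or hybrid-joined device"
    state       = "enabledForReportingButNotEnforced"   # report-only
    conditions  = @{
        users        = @{ includeUsers = @("All"); excludeGroups = @("<break-glass-group-id>") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice", "domainJoinedDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```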
++![A screenshot of grant in conditional access policy requiring device compliance](./media/ops-guide-auth/ops-img7.png) ++#### Device trust access policies recommended reading ++- [How To: Plan your hybrid Azure Active Directory join implementation](../devices/hybrid-azuread-join-plan.md) +- [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) ++### Windows Hello for Business ++In Windows 10, [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) replaces passwords with strong two-factor authentication on PCs. Windows Hello for Business enables a more streamlined MFA experience for users and reduces your dependency on passwords. If you haven't begun rolling out Windows 10 devices, or have only partially deployed them, we recommend you upgrade to Windows 10 and [enable Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-manage-in-organization) on all devices. ++If you would like to learn more about passwordless authentication, see [A world without passwords with Azure Active Directory](../authentication/concept-authentication-passwordless.md). ++## Application authentication and assignment ++### Single sign-on for apps ++Providing a standardized single sign-on mechanism to the entire enterprise is crucial for best user experience, reduction of risk, ability to report, and governance. If you're using applications that support SSO with Azure AD but are currently configured to use local accounts, you should reconfigure those applications to use SSO with Azure AD. Likewise, if you're using any applications that support SSO with Azure AD but are using another Identity Provider, you should reconfigure those applications to use SSO with Azure AD as well. For applications that don't support federation protocols but do support forms-based authentication, we recommend you configure the application to use [password vaulting](../app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md) with Azure AD Application Proxy. ++![AppProxy Password-based Sign-on](./media/ops-guide-auth/ops-img8.png) ++> [!NOTE] +> If you don't have a mechanism to discover unmanaged applications in your organization, we recommend implementing a discovery process using a cloud access security broker solution (CASB) such as [Microsoft Defender for Cloud Apps](https://www.microsoft.com/enterprise-mobility-security/cloud-app-security). ++Finally, if you have an Azure AD app gallery and use applications that support SSO with Azure AD, we recommend [listing the application in the app gallery](../manage-apps/v2-howto-app-gallery-listing.md). ++#### Single sign-on recommended reading ++- [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md) ++### Migration of AD FS applications to Azure AD ++[Migrating apps from AD FS to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md) enables additional capabilities on security, more consistent manageability, and a better collaboration experience. If you have applications configured in AD FS that support SSO with Azure AD, then you should reconfigure those applications to use SSO with Azure AD. If you have applications configured in AD FS with uncommon configurations unsupported by Azure AD, you should contact the app owners to understand if the special configuration is an absolute requirement of the application. 
If it isn't required, then you should reconfigure the application to use SSO with Azure AD. ++![Azure AD as the primary identity provider](./media/ops-guide-auth/ops-img9.png) ++> [!NOTE] +> [Azure AD Connect Health for ADFS](../hybrid/how-to-connect-health-adfs.md) can be used to collect configuration details about each application that can potentially be migrated to Azure AD. ++### Assign users to applications ++[Assigning users to applications](../manage-apps/assign-user-or-group-access-portal.md) is best mapped by using groups because they allow greater flexibility and ability to manage at scale. The benefits of using groups include [attribute-based dynamic group membership](../enterprise-users/groups-dynamic-membership.md) and [delegation to app owners](../fundamentals/active-directory-accessmanagement-managing-group-owners.md). Therefore, if you're already using and managing groups, we recommend you take the following actions to improve management at scale: ++- Delegate group management and governance to application owners. +- Allow self-service access to the application. +- Define dynamic groups if user attributes can consistently determine access to applications. +- Implement attestation to groups used for application access using [Azure AD access reviews](../governance/access-reviews-overview.md). ++On the other hand, if you find applications that have assignment to individual users, be sure to implement [governance](../governance/index.yml) around those applications. ++#### Assign users to applications recommended reading ++- [Assign users and groups to an application in Azure Active Directory](../manage-apps/assign-user-or-group-access-portal.md) +- [Delegate app registration permissions in Azure Active Directory](../roles/delegate-app-roles.md) +- [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md) ++## Access policies ++### Named locations ++With [named locations](../conditional-access/location-condition.md) in Azure AD, you can label trusted IP address ranges in your organization. Azure AD uses named locations to: ++- Prevent false positives in risk events. Signing in from a trusted network location lowers a user's sign-in risk. +- Configure [location-based Conditional Access](../conditional-access/location-condition.md). 
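The sketch below shows one way to define a trusted, IP-based named location with the Microsoft Graph PowerShell SDK; the display name and CIDR range are illustrative assumptions.

```powershell
# Minimal sketch: define a trusted IP-based named location.
# The display name and CIDR range are illustrative assumptions.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$location = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Corporate egress - HQ"
    isTrusted     = $true
    ipRanges      = @(
        @{
            "@odata.type" = "#microsoft.graph.iPv4CidrRange"
            cidrAddress   = "203.0.113.0/24"
        }
    )
}

New-MgIdentityConditionalAccessNamedLocation -BodyParameter $location
```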
++![Named location](./media/ops-guide-auth/ops-img10.png) ++Based on priority, use the table below to find the recommended solution that best meets your organization's needs: ++| **Priority** | **Scenario** | **Recommendation** | +| :- | :- | :- | +| 1 | If you use PHS or PTA and named locations haven't been defined | Define named locations to improve detection of risk events | +| 2 | If you're federated and don't use the "insideCorporateNetwork" claim and named locations haven't been defined | Define named locations to improve detection of risk events | +| 3 | If you don't use named locations in conditional access policies and there are no risk or device controls in conditional access policies | Configure the conditional access policy to include named locations | +| 4 | If you're federated and do use the "insideCorporateNetwork" claim and named locations haven't been defined | Define named locations to improve detection of risk events | +| 5 | If you're using trusted IP addresses with MFA rather than named locations and marking them as trusted | Define named locations and mark them as trusted to improve detection of risk events | ++### Risk-based access policies ++Azure AD can calculate the risk for every sign-in and every user. Using risk as a criterion in access policies can provide a better user experience (for example, fewer authentication prompts), better security (for example, prompting users only when needed), and automated response and remediation. ++![Sign-in risk policy](./media/ops-guide-auth/ops-img11.png) ++If you already own Azure AD Premium P2 licenses that support using risk in access policies, but they aren't being used, we highly recommend adding risk to your security posture. ++#### Risk-based access policies recommended reading ++- [How To: Configure the sign-in risk policy](../identity-protection/howto-identity-protection-configure-risk-policies.md) +- [How To: Configure the user risk policy](../identity-protection/howto-identity-protection-configure-risk-policies.md) ++### Client application access policies ++Microsoft Intune mobile application management (MAM) provides the ability to push data protection controls (such as storage encryption, PIN requirements, and remote storage cleanup) to compatible client mobile applications such as Outlook Mobile. In addition, conditional access policies can be created to [restrict access](../conditional-access/app-based-conditional-access.md) to cloud services such as Exchange Online from approved or compatible apps. ++If your employees install MAM-capable applications such as Office mobile apps to access corporate resources such as Exchange Online or SharePoint Online, and you also support BYOD (bring your own device), we recommend you deploy application MAM policies to manage the application configuration in personally owned devices without MDM enrollment and then update your conditional access policies to only allow access from MAM-capable clients. ++![Conditional Access Grant control](./media/ops-guide-auth/ops-img12.png) ++If employees install MAM-capable applications to access corporate resources and access is restricted to Intune-managed devices, then you should consider deploying application MAM policies to manage the application configuration for personal devices and updating Conditional Access policies to only allow access from MAM-capable clients. ++### Conditional Access implementation ++Conditional Access is an essential tool for improving the security posture of your organization. 
Therefore, it's important that you follow these best practices: ++- Ensure that all SaaS applications have at least one policy applied +- Avoid combining the **All apps** filter with the **block** control to avoid lockout risk +- Avoid using **All users** as a filter and inadvertently adding **Guests** +- **Migrate all "legacy" policies to the Azure portal** +- Catch all criteria for users, devices, and applications +- Use Conditional Access policies to [implement MFA](../conditional-access/plan-conditional-access.md), rather than using **per-user MFA** +- Have a small set of core policies that can apply to multiple applications +- Define empty exception groups and add them to the policies to have an exception strategy +- Plan for [break glass](../roles/security-planning.md#break-glass-what-to-do-in-an-emergency) accounts without MFA controls +- Ensure a consistent experience across Microsoft 365 client applications (for example, Teams, OneDrive, Outlook) by implementing the same set of controls for services such as Exchange Online and SharePoint Online +- Implement assignment to policies through groups, not individuals +- Do regular reviews of the exception groups used in policies to limit the time users are outside the security posture. If you own Azure AD Premium P2, then you can use access reviews to automate the process ++#### Conditional Access recommended reading ++- [Best practices for Conditional Access in Azure Active Directory](../conditional-access/overview.md) +- [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) +- [Azure Active Directory Conditional Access settings reference](../conditional-access/concept-conditional-access-conditions.md) +- [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) ++## Access surface area ++### Legacy authentication ++Strong credentials such as MFA can't protect apps that use legacy authentication protocols, which makes them the preferred attack vector for malicious actors. Locking down legacy authentication is crucial to improving your access security posture. ++Legacy authentication is a term that refers to authentication protocols used by apps like: ++- Older Office clients that don't use modern authentication (for example, the Office 2010 client) +- Clients that use mail protocols such as IMAP/SMTP/POP ++Attackers strongly prefer these protocols - in fact, nearly [100% of password spray attacks](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Your-Pa-word-doesn-t-matter/ba-p/731984) use legacy authentication protocols! Attackers use legacy authentication protocols because they don't support interactive sign-in, which is needed for additional security challenges like multi-factor authentication and device authentication. ++If legacy authentication is widely used in your environment, you should plan to migrate legacy clients to clients that support [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) as soon as possible. By the same token, if some users already use modern authentication but others still use legacy authentication, take the following steps to lock down legacy authentication clients: ++1. Use [Sign-In Activity reports](../reports-monitoring/concept-sign-ins.md) to identify users who are still using legacy authentication and plan remediation: ++ a. Upgrade affected users to clients that are capable of modern authentication. + + b. 
Plan a cutover timeframe to lock down legacy authentication, per the steps below. + + c. Identify which legacy applications have a hard dependency on legacy authentication. See step 3 below. ++2. Disable legacy protocols at the source (for example, the Exchange mailbox) for users who aren't using legacy auth to avoid more exposure. +3. For the remaining accounts (ideally non-human identities such as service accounts), use [conditional access to restrict legacy protocols](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-AD-Conditional-Access-support-for-blocking-legacy-auth-is/ba-p/245417) post-authentication. ++#### Legacy authentication recommended reading ++- [Enable or disable POP3 or IMAP4 access to mailboxes in Exchange Server](/exchange/clients/pop3-and-imap4/configure-mailbox-access) ++### Consent grants ++In an illicit consent grant attack, the attacker creates an Azure AD-registered application that requests access to data such as contact information, email, or documents. Users might grant consent to malicious applications via phishing attacks or when landing on malicious websites. ++Below is a list of apps with permissions you might want to scrutinize in the Microsoft cloud: ++- Apps with application or delegated \*.ReadWrite permissions +- Apps with delegated permissions to read, send, or manage email on behalf of the user +- Apps that are granted the following permissions: ++| Resource | Permission | +| :- | :- | +| Exchange Online | EAS.AccessAsUser.All | +| | EWS.AccessAsUser.All | +| | Mail.Read | +| Microsoft Graph API | Mail.Read | +| | Mail.Read.Shared | +| | Mail.ReadWrite | ++- Apps granted full user impersonation of the signed-in user. For example: ++|Resource | Permission | +| :- | :- | +| Microsoft Graph API| Directory.AccessAsUser.All | +| Azure REST API | user_impersonation | ++To avoid this scenario, you should refer to [detect and remediate illicit consent grants in Office 365](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) to identify and fix any applications with illicit grants or applications that have more grants than are necessary. Next, [remove self-service consent altogether](../manage-apps/configure-user-consent.md) and [establish governance procedures](../manage-apps/configure-admin-consent-workflow.md). Finally, schedule regular reviews of app permissions and remove permissions that are no longer needed. ++#### Consent grants recommended reading ++- [Overview of Microsoft Graph permissions](/graph/permissions-overview) +- [Microsoft Graph API permissions](/graph/permissions-reference) ++### User and group settings ++Below are the user and group settings that can be locked down if there isn't an explicit business need: ++#### User settings ++- **External Users** - external collaboration can happen organically in the enterprise with services like Teams, Power BI, SharePoint Online, and Azure Information Protection. If you have explicit constraints to control user-initiated external collaboration, it's recommended you enable external users by using [Azure AD Entitlement management](../governance/entitlement-management-overview.md) or a controlled operation such as through your help desk. If you don't want to allow organic external collaboration for services, you can [block members from inviting external users completely](../external-identities/external-collaboration-settings-configure.md). Alternatively, you can also [allow or block specific domains](../external-identities/allow-deny-list.md) in external user invitations. 
+- **App Registrations** - when app registrations are enabled, end users can onboard applications themselves and grant access to their data. A typical example of app registration is users enabling Outlook plug-ins, or allowing voice assistants such as Alexa and Siri to read their email and calendar or send emails on their behalf. If the customer decides to turn off app registration, the InfoSec and IAM teams must be involved in the management of exceptions (app registrations that are needed based on business requirements), as they would need to register the applications with an admin account and would most likely need to design a process to operationalize those exceptions. +- **Administration Portal** - organizations can lock down the Azure AD blade in the Azure portal so that non-administrators can't access Azure AD management in the Azure portal and get confused. Go to the user settings in the Azure AD management portal to restrict access: ++![Administration portal restricted access](./media/ops-guide-auth/ops-img13.png) ++> [!NOTE] +> Non-administrators can still access the Azure AD management interfaces via the command line and other programmatic interfaces. ++#### Group settings ++**Self-Service Group Management / Users can create Security groups / Microsoft 365 groups.** If there's no current self-service initiative for groups in the cloud, customers might decide to turn it off until they're ready to use this capability. ++#### Groups recommended reading ++- [What is Azure Active Directory B2B collaboration?](../external-identities/what-is-b2b.md) +- [Integrating Applications with Azure Active Directory](../develop/quickstart-register-app.md) +- [Apps, permissions, and consent in Azure Active Directory](../develop/quickstart-register-app.md) +- [Use groups to manage access to resources in Azure Active Directory](../fundamentals/concept-learn-about-groups.md) +- [Setting up self-service application access management in Azure Active Directory](../enterprise-users/groups-self-service-management.md) ++### Traffic from unexpected locations ++Attackers originate from various parts of the world. Manage this risk by using conditional access policies with location as the condition. The [location condition](../conditional-access/location-condition.md) of a Conditional Access policy enables you to block access from locations where there's no business reason to sign in. ++![Create a new named location](./media/ops-guide-auth/ops-img14.png) ++If available, use a security information and event management (SIEM) solution to analyze and find patterns of access across regions. If you don't use a SIEM product, or it isn't ingesting authentication information from Azure AD, we recommend you use [Azure Monitor](../../azure-monitor/overview.md) to identify patterns of access across regions. ++## Access usage ++### Azure AD logs archived and integrated with incident response plans ++Having access to sign-in activity, audit logs, and risk events for Azure AD is crucial for troubleshooting, usage analytics, and forensics investigations. Azure AD provides access to these sources through REST APIs that have a limited retention period. A security information and event management (SIEM) system, or equivalent archival technology, is key for long-term storage of audit data and supportability. To enable long-term storage of Azure AD logs, you must either add them to your existing SIEM solution or use [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). 
Archive logs that can be used as part of your incident response plans and investigations. ++#### Logs recommended reading ++- [Azure Active Directory audit API reference](/graph/api/resources/directoryaudit) +- [Azure Active Directory sign-in activity report API reference](/graph/api/resources/signin) +- [Get data using the Azure AD Reporting API with certificates](../reports-monitoring/tutorial-access-api-with-certificates.md) +- [Microsoft Graph for Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md) +- [Office 365 Management Activity API reference](/office/office-365-management-api/office-365-management-activity-api-reference) +- [How to use the Azure Active Directory Power BI Content Pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md) ++## Summary ++There are 12 aspects to a secure Identity infrastructure. This list will help you further secure and manage credentials, define authentication experience, delegate assignment, measure usage, and define access policies based on enterprise security posture. ++- Assign owners to key tasks. +- Implement solutions to detect weak or leaked passwords, improve password management and protection, and further secure user access to resources. +- Manage the identity of devices to protect your resources at any time and from any location. +- Implement passwordless authentication. +- Provide a standardized single sign-on mechanism across the organization. +- Migrate apps from AD FS to Azure AD to enable better security and more consistent manageability. +- Assign users to applications by using groups to allow greater flexibility and ability to manage at scale. +- Configure risk-based access policies. +- Lock down legacy authentication protocols. +- Detect and remediate illicit consent grants. +- Lock down user and group settings. +- Enable long-term storage of Azure AD logs for troubleshooting, usage analytics, and forensics investigations. ++## Next steps ++Get started with the [Identity governance operational checks and actions](ops-guide-govern.md). |
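Building on the guidance above about routing Azure AD sign-in logs to Azure Monitor and locking down legacy authentication, the following is a minimal sketch of a Log Analytics query run from PowerShell. It assumes sign-in logs already flow to a Log Analytics workspace through diagnostic settings; the workspace ID, the 14-day window, and the client-app list are illustrative assumptions.

```powershell
# Minimal sketch: summarize sign-ins by location and flag legacy-authentication
# clients once Azure AD sign-in logs are in a Log Analytics workspace.
Connect-AzAccount

$workspaceId = "<log-analytics-workspace-id>"   # illustrative placeholder
$kql = @'
SigninLogs
| where TimeGenerated > ago(14d)
| extend Legacy = ClientAppUsed in ("IMAP4", "POP3", "SMTP", "Other clients")
| summarize SignIns = count(), LegacySignIns = countif(Legacy)
    by Location, AppDisplayName
| order by LegacySignIns desc
'@

Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql |
    Select-Object -ExpandProperty Results
```

Reviewing the results in report form first helps size the impact before you block legacy protocols with Conditional Access.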
active-directory | Ops Guide Govern | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-govern.md | + + Title: Azure Active Directory governance operations reference guide +description: This operations reference guide describes the checks and actions you should take to secure governance management ++++tags: azuread ++++ Last updated : 08/17/2022++++# Azure Active Directory governance operations reference guide ++This section of the [Azure AD operations reference guide](ops-guide-intro.md) describes the checks and actions you should take to assess and attest the access granted to nonprivileged and privileged identities, and to audit and control changes to the environment. ++> [!NOTE] +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their governance practices as Microsoft products and services evolve over time. ++## Key operational processes ++### Assign owners to key tasks ++Managing Azure Active Directory requires the continuous execution of key operational tasks and processes, which may not be part of a rollout project. It's still important you set up these tasks to optimize your environment. The key tasks and their recommended owners include: ++| Task | Owner | +| :- | :- | +| Archive Azure AD audit logs in a SIEM system | InfoSec Operations Team | +| Discover applications that are managed out of compliance | IAM Operations Team | +| Regularly review access to applications | InfoSec Architecture Team | +| Regularly review access to external identities | InfoSec Architecture Team | +| Regularly review who has privileged roles | InfoSec Architecture Team | +| Define security gates to activate privileged roles | InfoSec Architecture Team | +| Regularly review consent grants | InfoSec Architecture Team | +| Design Catalogs and Access Packages for applications and resources for employees in the organization | App Owners | +| Define Security Policies to assign users to access packages | InfoSec team + App Owners | +| If policies include approval workflows, regularly review workflow approvals | App Owners | +| Review exceptions in security policies, such as conditional access policies, using access reviews | InfoSec Operations Team | ++As you review your list, you may find you need to either assign an owner for tasks that are missing an owner or adjust ownership for tasks with owners that aren't aligned with the recommendations above. ++#### Owner recommended reading ++- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md) ++### Configuration changes testing ++Some changes require special consideration when testing, ranging from simple techniques, such as rolling out a change to a target subset of users, to deploying a change in a parallel test tenant. 
If you haven't implemented a testing strategy, you should define a test approach based on the guidelines in the table below: ++| Scenario| Recommendation | +|-|-| +|Changing the authentication type from federated to PHS/PTA or vice versa| Use [staged rollout](../hybrid/how-to-connect-staged-rollout.md) to test the impact of changing the authentication type.| +|Rolling out a new conditional access (CA) policy or Identity Protection policy|Create a new Conditional Access policy and assign it to test users.| +|Onboarding a test environment of an application|Add the application to a production environment, hide it from the MyApps panel, and assign it to test users during the quality assurance (QA) phase.| +|Changing sync rules|Perform the changes on a test Azure AD Connect server with the same configuration that's currently in production (also known as staging mode), and analyze the CSExport results. If you're satisfied, swap to production when ready.| +|Changing branding|Test in a separate test tenant.| +|Rolling out a new feature|If the feature supports rollout to a target set of users, identify pilot users and build out. For example, self-service password reset and multi-factor authentication can target specific users or groups.| +|Cutting over an application from an on-premises identity provider (IdP), for example Active Directory, to Azure AD|If the application supports multiple IdP configurations (for example, Salesforce), configure both and test Azure AD during a change window (in case the application introduces an HRD page). If the application doesn't support multiple IdPs, schedule the testing during a change control window and plan for downtime.| +|Updating dynamic group rules|Create a parallel dynamic group with the new rule. Compare against the calculated outcome, for example, by running PowerShell with the same condition.<br>If the test passes, swap the places where the old group was used (if feasible).| +|Migrating product licenses|Refer to [Change the license for a single user in a licensed group in Azure Active Directory](../enterprise-users/licensing-groups-change-licenses.md).| +|Changing AD FS rules such as authorization, issuance, or MFA|Use a group claim to target a subset of users.| +|Changing the AD FS authentication experience or making similar farm-wide changes|Create a parallel farm with the same host name, implement the config changes, and test from clients by using a HOSTS file, NLB routing rules, or similar routing.<br>If the target platform doesn't support HOSTS files (for example, mobile devices), plan a controlled change.| ++## Access reviews ++### Access reviews to applications ++Over time, users may accumulate access to resources as they move through different teams and positions. It's important that resource owners review access to applications on a regular basis and remove privileges that are no longer needed throughout the lifecycle of users. Azure AD [access reviews](../governance/access-reviews-overview.md) enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. Resource owners should review users' access on a regular basis to make sure only the right people have continued access. Ideally, you should consider using Azure AD access reviews for this task. ++![Access reviews start page](./media/ops-guide-auth/ops-img15.png) ++> [!NOTE] +> Each user who interacts with access reviews must have a paid Azure AD Premium P2 license. 
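If you want to codify such a review, the sketch below schedules a recurring review of a group's members with the Microsoft Graph PowerShell SDK. The group ID, start date, and settings are illustrative assumptions; check the accessReviewScheduleDefinition schema in the Microsoft Graph documentation for the full set of options.

```powershell
# Minimal sketch: schedule a quarterly access review of a group's members,
# with the group owners as reviewers. Group ID, start date, and settings are
# illustrative assumptions.
Connect-MgGraph -Scopes "AccessReview.ReadWrite.All"

$groupId = "<group-object-id>"   # illustrative placeholder

$definition = @{
    displayName = "Quarterly review - app access group"
    scope       = @{
        "@odata.type" = "#microsoft.graph.accessReviewQueryScope"
        query         = "/groups/$groupId/transitiveMembers"
        queryType     = "MicrosoftGraph"
    }
    reviewers   = @(
        @{ query = "/groups/$groupId/owners"; queryType = "MicrosoftGraph" }
    )
    settings    = @{
        mailNotificationsEnabled = $true
        recommendationsEnabled   = $true
        instanceDurationInDays   = 14
        recurrence               = @{
            pattern = @{ type = "absoluteMonthly"; interval = 3 }
            range   = @{ type = "noEnd"; startDate = "2023-05-01" }
        }
    }
}

New-MgIdentityGovernanceAccessReviewDefinition -BodyParameter $definition
```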
++### Access reviews to external identities ++It's crucial to keep access for external identities constrained to only the resources that are needed, for only as long as it's needed. Establish a regular, automated access review process for all external identities and application access using Azure AD [access reviews](../governance/access-reviews-overview.md). Even if a process already exists on-premises, consider using Azure AD access reviews. Once an application is retired or no longer used, remove all the external identities that had access to the application. ++> [!NOTE] +> Each user who interacts with access reviews must have a paid Azure AD Premium P2 license. ++## Privileged account management ++### Privileged account usage ++Hackers often target admin accounts and other elements of privileged access to rapidly gain access to sensitive data and systems. Because users with privileged roles tend to accumulate over time, it's important to review and manage admin access on a regular basis and provide just-in-time privileged access to Azure AD and Azure resources. ++If no process exists in your organization to manage privileged accounts, or you currently have admins who use their regular user accounts to manage services and resources, you should immediately begin using separate accounts: one for regular day-to-day activities and another for privileged access, configured with MFA. Better yet, if your organization has an Azure AD Premium P2 subscription, then you should immediately deploy [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md#license-requirements) (PIM). By the same token, you should also review those privileged accounts and [assign less privileged roles](../roles/security-planning.md) if applicable. ++Another aspect of privileged account management is defining [access reviews](../governance/access-reviews-overview.md) for those accounts, either manually or [automated through PIM](../privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md). ++#### Privileged account management recommended reading ++- [Roles in Azure AD Privileged Identity Management](../privileged-identity-management/pim-roles.md) ++### Emergency access accounts ++Organizations must create [emergency accounts](../roles/security-emergency-access.md) to be prepared to manage Azure AD in cases such as: ++- Outages of authentication infrastructure components (AD FS, on-premises AD, the MFA service) +- Administrative staff turnover ++To prevent being inadvertently locked out of your tenant because you can't sign in or activate an existing individual user's account as an administrator, you should create two or more emergency accounts and ensure they're implemented and aligned with [Microsoft's best practices](../roles/security-planning.md) and [break glass procedures](../roles/security-planning.md#break-glass-what-to-do-in-an-emergency). ++### Privileged access to Azure EA portal ++The [Azure Enterprise Agreement (Azure EA) portal](https://azure.microsoft.com/blog/create-enterprise-subscription-experience-in-azure-portal-public-preview/) enables you to create Azure subscriptions against a master Enterprise Agreement, which is a powerful role within the enterprise. 
It's common to bootstrap the creation of this portal before even getting Azure AD in place, so it's necessary to use Azure AD identities to lock it down, remove personal accounts from the portal, ensure that proper delegation is in place, and mitigate the risk of lockout. ++To be clear, if the EA portal authorization level is currently set to "mixed mode", you must remove any [Microsoft accounts](https://support.skype.com/en/faq/FA12059/what-is-a-microsoft-account) from all privileged access in the EA portal and configure the EA portal to use Azure AD accounts only. If the EA portal delegated roles aren't configured, you should also find and implement delegated roles for departments and accounts. ++#### Privileged access recommended reading ++- [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md) ++## Entitlement management ++[Entitlement management (EM)](../governance/entitlement-management-overview.md) allows app owners to bundle resources and assign them to specific personas in the organization (both internal and external). EM allows self-service sign up and delegation to business owners while keeping governance policies to grant access, set access durations, and allow approval workflows. ++> [!NOTE] +> Azure AD Entitlement Management requires Azure AD Premium P2 licenses. ++## Summary ++There are eight aspects to a secure Identity governance. This list will help you identify the actions you should take to assess and attest the access granted to nonprivileged and privileged identities, audit, and control changes to the environment. ++- Assign owners to key tasks. +- Implement a testing strategy. +- Use Azure AD Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. +- Establish a regular, automated access review process for all types of external identities and application access. +- Establish an access review process to review and manage admin access on a regular basis and provide just-in-time privileged access to Azure AD and Azure resources. +- Provision emergency accounts to be prepared to manage Azure AD for unexpected outages. +- Lock down access to the Azure EA portal. +- Implement Entitlement Management to provide governed access to a collection of resources. ++## Next steps ++Get started with the [Azure AD operational checks and actions](ops-guide-ops.md). |
active-directory | Ops Guide Iam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-iam.md | + + Title: Azure Active Directory Identity and access management operations reference guide +description: This operations reference guide describes the checks and actions you should take to secure identity and access management operations ++++tags: azuread ++++ Last updated : 08/17/2022++++# Azure Active Directory Identity and access management operations reference guide ++This section of the [Azure AD operations reference guide](ops-guide-intro.md) describes the checks and actions you should consider to secure and manage the lifecycle of identities and their assignments. ++> [!NOTE] +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. ++## Key operational processes ++### Assign owners to key tasks ++Managing Azure Active Directory requires the continuous execution of key operational tasks and processes that may not be part of a rollout project. It's still important you set up these tasks to maintain your environment. The key tasks and their recommended owners include: ++| Task | Owner | +| :- | :- | +| Define the process how to create Azure subscriptions | Varies by organization | +| Decide who gets Enterprise Mobility + Security licenses | IAM Operations Team | +| Decide who gets Microsoft 365 licenses | Productivity Team | +| Decide who gets other licenses, for example, Dynamics, Visual Studio Codespaces | Application Owner | +| Assign licenses | IAM Operations Team | +| Troubleshoot and remediate license assignment errors | IAM Operations Team | +| Provision identities to applications in Azure AD | IAM Operations Team | ++As you review your list, you may find you need to either assign an owner for tasks that are missing an owner or adjust ownership for tasks with owners that aren't aligned with the recommendations above. ++#### Assigning owners recommended reading ++- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md) ++## On-premises identity synchronization ++### Identify and resolve synchronization issues ++Microsoft recommends you have a good baseline and understanding of the issues in your on-premises environment that can result in synchronization issues to the cloud. Since automated tools such as [IdFix](/office365/enterprise/prepare-directory-attributes-for-synch-with-idfix) and [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md#why-use-azure-ad-connect-health) can generate a high volume of false positives, we recommend you identify synchronization errors that have been left unaddressed for more than 100 days by cleaning up those objects in error. Long term unresolved synchronization errors can generate support incidents. [Troubleshooting errors during synchronization](../hybrid/tshoot-connect-sync-errors.md) provides an overview of different types of sync errors, some of the possible scenarios that cause those errors and potential ways to fix the errors. ++### Azure AD Connect Sync configuration ++To enable all hybrid experiences, device-based security posture, and integration with Azure AD, it's required that you synchronize user accounts that your employees use to login to their desktops. ++If you don't synchronize the forest users log into, then you should change the synchronization to come from the proper forest. 
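To confirm which forests Azure AD Connect is actually configured to synchronize from, you can inspect the connectors on the sync server. This is a minimal sketch that assumes it's run locally on the Azure AD Connect server, where the ADSync module is installed:

```powershell
# Run on the Azure AD Connect server. Lists the configured connectors so you can
# verify which forest(s) user accounts are synchronized from.
Import-Module ADSync

Get-ADSyncConnector | Select-Object Name, Type, Identifier
```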
++#### Synchronization scope and object filtering ++Removing known buckets of objects that aren't required to be synchronized has the following operational benefits: ++- Fewer sources of sync errors +- Faster sync cycles +- Less "garbage" to carry forward from on-premises, for example, pollution of the global address list for on-premises service accounts that aren't relevant in the cloud ++> [!NOTE] +> If you find you are importing many objects that aren't being exported to the cloud, you should filter by OU or specific attributes. ++Examples of objects to exclude are: ++- Service Accounts that aren't used for cloud applications +- Groups that aren't meant to be used in cloud scenarios such as those used to grant access to resources +- Users or contacts that are external identities that are meant to be represented with Azure AD B2B Collaboration +- Computer Accounts where employees aren't meant to access cloud applications from, for example, servers ++> [!NOTE] +> If a single human identity has multiple accounts provisioned from something such as a legacy domain migration, merger, or acquisition, you should only synchronize the account used by the user on a day-to-day basis, for example, what they use to log in to their computer. ++Ideally, you'll want to reach a balance between reducing the number of objects to synchronize and the complexity in the rules. Generally, a combination between OU/container [filtering](../hybrid/how-to-connect-sync-configure-filtering.md) plus a simple attribute mapping to the cloudFiltered attribute is an effective filtering combination. ++> [!IMPORTANT] +> If you use group filtering in production, you should transition to another filtering approach. ++#### Sync failover or disaster recovery ++Azure AD Connect plays a key role in the provisioning process. If the Sync Server goes offline for any reason, changes to on-premises can't be updated in the cloud and can result in access issues for users. Therefore, it's important to define a failover strategy that allows administrators to quickly resume synchronization after the sync server goes offline. Such strategies may fall into the following categories: ++- **Deploy Azure AD Connect Server(s) in Staging Mode** - allows an administrator to "promote" the staging server to production by a simple configuration switch. +- **Use Virtualization** - If the Azure AD connect is deployed in a virtual machine (VM), admins can leverage their virtualization stack to live migrate or quickly redeploy the VM and therefore resume synchronization. ++If your organization is lacking a disaster recovery and failover strategy for Sync, you shouldn't hesitate to deploy Azure AD Connect in Staging Mode. Likewise, if there's a mismatch between your production and staging configuration, you should re-baseline Azure AD Connect staging mode to match the production configuration, including software versions and configurations. ++![A screenshot of Azure AD Connect staging mode configuration](./media/ops-guide-auth/ops-img1.png) ++#### Stay current ++Microsoft updates Azure AD Connect regularly. Stay current to take advantage of the performance improvements, bug fixes, and new capabilities that each new version provides. ++If your Azure AD Connect version is more than six months behind, you should upgrade to the most recent version. 
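For the failover guidance above, a quick way to verify whether a given Azure AD Connect server is running in staging mode, and whether its sync scheduler is enabled, is the ADSync scheduler cmdlet. A minimal sketch, run locally on each sync server:

```powershell
# Run on each Azure AD Connect server (production and staging).
Import-Module ADSync

Get-ADSyncScheduler |
    Select-Object StagingModeEnabled, SyncCycleEnabled,
                  CurrentlyEffectiveSyncCycleInterval, NextSyncCycleStartTimeInUTC
```

Comparing this output across your production and staging servers is a simple way to spot a configuration mismatch before a failover is needed.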
++#### Source anchor ++Using **ms-DS-consistencyguid** as the [source anchor](../hybrid/plan-connect-design-concepts.md) allows an easier migration of objects across forests and domains, which is common in AD Domain consolidation/cleanup, mergers, acquisitions, and divestitures. ++If you're currently using **ObjectGuid** as the source anchor, we recommend you switch to using **ms-DS-ConsistencyGuid**. ++#### Custom rules ++Azure AD Connect custom rules provide the ability to control the flow of attributes between on-premises objects and cloud objects. However, overusing or misusing custom rules can introduce the following risks: ++- Troubleshooting complexity +- Degradation of performance when performing complex operations across objects +- Higher probability of divergence of configuration between the production server and staging server +- Additional overhead when upgrading Azure AD Connect if custom rules are created within the precedence greater than 100 (used by built-in rules) ++If you're using overly complex rules, you should investigate the reasons for the complexity and find opportunities for simplification. Likewise, if you have created custom rules with precedence value over 100, you should fix the rules so they aren't at risk or conflict with the default set. ++Examples of misusing custom rules include: ++- **Compensate for dirty data in the directory** - In this case, it's recommended to work with the owners of the AD team and clean up the data in the directory as a remediation task, and adjust processes to avoid reintroduction of bad data. +- **One-off remediation of individual users** - It's common to find rules that special case outliers, usually because of an issue with a specific user. +- **Overcomplicated "CloudFiltering"** - While reducing the number of objects is a good practice, there's a risk of creating and overcomplicated sync scope using many sync rules. If there's complex logic to include/exclude objects beyond the OU filtering, it's recommended to deal with this logic outside of sync and label the objects with a simple "cloudFiltered" attribute that can flow with a simple Sync Rule. ++#### Azure AD Connect configuration documenter ++The [Azure AD Connect Configuration Documenter](https://github.com/Microsoft/AADConnectConfigDocumenter) is a tool you can use to generate documentation of an Azure AD Connect installation to enable a better understanding of the sync configuration, build confidence in getting things right, and to know what was changed when you applied a new build or configuration of Azure AD Connect or added or updated custom sync rules. The current capabilities of the tool include: ++- Documentation of the complete configuration of Azure AD Connect sync. +- Documentation of any changes in the configuration of two Azure AD Connect sync servers or changes from a given configuration baseline. +- Generation of a PowerShell deployment script to migrate the sync rule differences or customizations from one server to another. ++## Assignment to apps and resources ++### Group-based licensing for Microsoft cloud services ++Azure Active Directory streamlines the management of licenses through [group-based licensing](../fundamentals/licensing-whatis-azure-portal.md) for Microsoft cloud services. This way, IAM provides the group infrastructure and delegated management of those groups to the proper teams in the organizations. 
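Before deciding how group membership will be populated (the options are listed next), it helps to baseline which license SKUs the tenant owns and how many units are already consumed. A minimal sketch with the MSOnline module:

```powershell
# List purchased license SKUs with active vs. consumed units, as a baseline
# before moving license assignment to groups.
Connect-MsolService

Get-MsolAccountSku |
    Select-Object AccountSkuId, ActiveUnits, ConsumedUnits, WarningUnits
```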
There are multiple ways to set up the membership of groups in Azure AD, including: ++- **Synchronized from on-premises** - Groups can come from on-premises directories, which could be a good fit for organizations that have established group management processes that can be extended to assign licenses in Microsoft 365. ++- **Attribute-based / dynamic** - Groups can be created in the cloud based on an expression based on user attributes, for example, Department equals "sales". Azure AD maintains the members of the group, keeping it consistent with the expression defined. Using this kind of group for license assignment enables an attribute-based license assignment, which is a good fit for organizations that have high data quality in their directory. ++- **Delegated ownership** - Groups can be created in the cloud and can be designated owners. This way, you can empower business owners, for example, Collaboration team or BI team, to define who should have access. ++If you're currently using a manual process to assign licenses and components to users, we recommend you implement group-based licensing. If your current process doesn't monitor licensing errors or what is Assigned versus Available, you should define improvements to the process to address licensing errors and monitor licensing assignment. ++Another aspect of license management is the definition of service plans (components of the license) that should be enabled based on job functions in the organization. Granting access to service plans that aren't necessary, can result in users seeing tools in the Office portal that they haven't been trained for or shouldn't be using. It can drive additional help desk volume, unnecessary provisioning, and put your compliance and governance at risk, for example, when provisioning OneDrive for Business to individuals that might not be allowed to share content. ++Use the following guidelines to define service plans to users: ++- Administrators should define "bundles" of service plans to be offered to users based on their role, for instance, white-collar worker versus floor worker. +- Create groups by cluster and assign the license with service plan. +- Optionally, an attribute can be defined to hold the packages for users. ++> [!IMPORTANT] +> Group-based licensing in Azure AD introduces the concept of users in a licensing error state. If you notice any licensing errors, then you should immediately [identify and resolve](../enterprise-users/licensing-groups-resolve-problems.md) any license assignment problems. ++![A screenshot of a computer screen Description automatically generated](./media/ops-guide-auth/ops-img2.png) ++#### Lifecycle management ++If you're currently using a tool, such as [Microsoft Identity Manager](/microsoft-identity-manager/) or third-party system, that relies on an on-premises infrastructure, we recommend you offload assignment from the existing tool, implement group-based licensing and define a group lifecycle management based on [groups](../enterprise-users/licensing-group-advanced.md#use-group-based-licensing-with-dynamic-groups). Likewise, if your existing process doesn't account for new employees or employees that leave the organization, you should deploy group-based licensing based on dynamic groups and define a group membership lifecycle. Finally, if group-based licensing is deployed against on-premises groups that lack lifecycle management, consider using cloud groups to enable capabilities such as delegated ownership or attribute-based dynamic membership. 
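Whichever groups are used, the important note earlier in this section calls out users who can end up in a licensing error state. A hedged sketch (MSOnline module) to surface the groups that currently carry license errors so they can be triaged:

```powershell
# Find groups that have license assignment errors so the errors can be triaged.
Connect-MsolService

Get-MsolGroup -HasLicenseErrorsOnly $true |
    Select-Object DisplayName, ObjectId
```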
++### Assignment of apps with "All users" group ++Resource owners may believe that the **All users** group contains only **Enterprise Employees** when they may actually contain both **Enterprise Employees** and **Guests**. As a result, you should take special care when using the **All users** group for application assignment and granting access to resources such as SharePoint content or applications. ++> [!IMPORTANT] +> If the **All users** group is enabled and used for conditional access policies, app or resource assignment, make sure to [secure the group](../external-identities/use-dynamic-groups.md) if you don't want it to include guest users. Furthermore, you should fix your licensing assignments by creating and assigning to groups that contain **Enterprise Employees** only. On the other hand, if you find that the **All users** group is enabled but not being used to grant access to resources, make sure your organization's operational guidance is to intentionally use that group (which includes both **Enterprise Employees** and **Guests**). ++### Automated user provisioning to apps ++[Automated user provisioning](../app-provisioning/user-provisioning.md) to applications is the best way to create a consistent provisioning, deprovisioning, and lifecycle of identities across multiple systems. ++If you're currently provisioning apps in an ad-hoc manner or using things like CSV files, JIT, or an on-premises solution that doesn't address lifecycle management, we recommend you [implement application provisioning](../app-provisioning/user-provisioning.md#how-do-i-set-up-automatic-provisioning-to-an-application) with Azure AD for supported applications and define a consistent pattern for applications that aren't yet supported by Azure AD. ++![Azure AD provisioning service](./media/ops-guide-auth/ops-img3.png) ++### Azure AD Connect delta sync cycle baseline ++It's important to understand the volume of changes in your organization and make sure that it isn't taking too long to have a predictable synchronization time. ++The [default delta sync](../hybrid/how-to-connect-sync-feature-scheduler.md) frequency is 30 minutes. If the delta sync is taking longer than 30 minutes consistently, or there are significant discrepancies between the delta sync performance of staging and production, you should investigate and review the [factors influencing the performance of Azure AD Connect](../hybrid/plan-connect-performance-factors.md). ++#### Azure AD Connect troubleshooting recommended reading ++- [Prepare directory attributes for synchronization with Microsoft 365 by using the IdFix tool](/office365/enterprise/prepare-directory-attributes-for-synch-with-idfix) +- [Azure AD Connect: Troubleshooting Errors during synchronization](../hybrid/tshoot-connect-sync-errors.md) ++## Summary ++There are five aspects to a secure Identity infrastructure. This list will help you quickly find and take the necessary actions to secure and manage the lifecycle of identities and their entitlements in your organization. ++- Assign owners to key tasks. +- Find and resolve synchronization issues. +- Define a failover strategy for disaster recovery. +- Streamline the management of licenses and assignment of apps. +- Automate user provisioning to apps. ++## Next steps ++Get started with the [Authentication management checks and actions](ops-guide-auth.md). |
active-directory | Ops Guide Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-intro.md | + + Title: Azure Active Directory operations reference guide +description: This operations reference guide describes the checks and actions you should take to secure and maintain identity and access management, authentication, governance, and operations ++++tags: azuread ++++ Last updated : 08/17/2022++++# Azure Active Directory operations reference guide ++This operations reference guide describes the checks and actions you should take to secure and maintain the following areas: ++- **[Identity and access management](ops-guide-iam.md)** - ability to manage the lifecycle of identities and their entitlements. +- **[Authentication management](ops-guide-auth.md)** - ability to manage credentials, define the authentication experience, delegate assignment, measure usage, and define access policies based on enterprise security posture. +- **[Governance](ops-guide-govern.md)** - ability to assess and attest the access granted to nonprivileged and privileged identities, and to audit and control changes to the environment. +- **[Operations](ops-guide-ops.md)** - ability to optimize the operations of Azure Active Directory (Azure AD). ++Some recommendations here might not be applicable to all customers' environments; for example, AD FS best practices might not apply if your organization uses password hash sync. ++> [!NOTE] +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. Recommendations can change when organizations subscribe to a different Azure AD Premium license. ++## Stakeholders ++Each section in this reference guide recommends assigning stakeholders to plan and implement key tasks successfully. The following table outlines the list of all the stakeholders in this guide: ++| Stakeholder | Description | +| :- | :- | +| IAM Operations Team | This team manages the day-to-day operations of the Identity and Access Management system | +| Productivity Team | This team owns and manages the productivity applications such as email, file sharing and collaboration, instant messaging, and conferencing. | +| Application Owner | This team owns the specific application from a business, and usually a technical, perspective in an organization. | +| InfoSec Architecture Team | This team plans and designs the Information Security practices of an organization. | +| InfoSec Operations Team | This team runs and monitors the implemented Information Security practices of the InfoSec Architecture team. | ++## Next steps ++Get started with the [Identity and access management checks and actions](ops-guide-iam.md). |
active-directory | Ops Guide Ops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-ops.md | + + Title: Azure Active Directory general operations guide reference +description: This operations reference guide describes the checks and actions you should take to secure general operations ++++tags: azuread ++++ Last updated : 08/17/2022++++# Azure Active Directory general operations guide reference ++This section of the [Azure AD operations reference guide](ops-guide-intro.md) describes the checks and actions you should take to optimize the general operations of Azure Active Directory (Azure AD). ++> [!NOTE] +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their operational practices as Microsoft products and services evolve over time. ++## Key operational processes ++### Assign owners to key tasks ++Managing Azure Active Directory requires the continuous execution of key operational tasks and processes, which may not be part of a rollout project. It's still important you set up these tasks to optimize your environment. The key tasks and their recommended owners include: ++| Task | Owner | +| :- | :- | +| Drive Improvements on Identity Secure Score | InfoSec Operations Team | +| Maintain Azure AD Connect Servers | IAM Operations Team | +| Regularly execute and triage IdFix Reports | IAM Operations Team | +| Triage Azure AD Connect Health Alerts for Sync and AD FS | IAM Operations Team | +| If not using Azure AD Connect Health, then customer has equivalent process and tools to monitor custom infrastructure | IAM Operations Team | +| If not using AD FS, then customer has equivalent process and tools to monitor custom infrastructure | IAM Operations Team | +| Monitor Hybrid Logs: Azure AD App Proxy Connectors | IAM Operations Team | +| Monitor Hybrid Logs: Passthrough Authentication Agents | IAM Operations Team | +| Monitor Hybrid Logs: Password Writeback Service | IAM Operations Team | +| Monitor Hybrid Logs: On-premises password protection gateway | IAM Operations Team | +| Monitor Hybrid Logs: Azure AD MFA NPS Extension (if applicable) | IAM Operations Team | ++As you review your list, you may find you need to either assign an owner for tasks that are missing an owner or adjust ownership for tasks with owners that aren't aligned with the recommendations above. ++#### Owners recommended reading ++- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md) ++## Hybrid management ++### Recent versions of on-premises components ++Having the most up-to-date versions of on-premises components provides the customer all the latest security updates, performance improvements and functionality that could help to further simplify the environment. Most components have an automatic upgrade setting, which will automate the upgrade process. ++These components include: ++- Azure AD Connect +- Azure AD Application Proxy Connectors +- Azure AD Pass-through authentication agents +- Azure AD Connect Health Agents ++Unless one has been established, you should define a process to upgrade these components and rely on the automatic upgrade feature whenever possible. If you find components that are six or more months behind, you should upgrade as soon as possible. 
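For Azure AD Connect specifically, you can check the automatic upgrade state and the installed version directly on the sync server. A minimal sketch; the registry query is a generic way to read the installed version and assumes a default installation:

```powershell
# Run on the Azure AD Connect server.
Import-Module ADSync
Get-ADSyncAutoUpgrade    # Returns Enabled, Disabled, or Suspended

# Read the installed Azure AD Connect version from the standard uninstall keys.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like 'Microsoft Azure AD Connect*' } |
    Select-Object DisplayName, DisplayVersion
```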
++#### Hybrid management recommended reading ++- [Azure AD Connect: Automatic upgrade](../hybrid/how-to-connect-install-automatic-upgrade.md) +- [Understand Azure AD Application Proxy connectors | Automatic updates](../app-proxy/application-proxy-connectors.md#automatic-updates) ++### Azure AD Connect Health alert baseline ++Organizations should deploy [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md#what-is-azure-ad-connect-health) for monitoring and reporting of Azure AD Connect and AD FS. Azure AD Connect and AD FS are critical components that can break lifecycle management and authentication and therefore lead to outages. Azure AD Connect Health helps monitor and gain insights into your on-premises identity infrastructure thus ensuring the reliability of your environment. ++![Azure AD Connect Heath architecture](./media/ops-guide-auth/ops-img16.png) ++As you monitor the health of your environment, you must immediately address any high severity alerts, followed by lower severity alerts. ++#### Azure AD Connect Health recommended reading ++- [Azure AD Connect Health Agent Installation](../hybrid/how-to-connect-health-agent-install.md) ++### On-premises agents logs ++Some identity and access management services require on-premises agents to enable hybrid scenarios. Examples include password reset, pass-through authentication (PTA), Azure AD Application Proxy, and Azure AD MFA NPS extension. It's key that the operations team baseline and monitor the health of these components by archiving and analyzing the component agent logs using solutions such as System Center Operations Manager or SIEM. It's equally important your Infosec Operations team or help desk understand how to troubleshoot patterns of errors. ++#### On-premises agents logs recommended reading ++- [Troubleshoot Application Proxy](../app-proxy/application-proxy-troubleshoot.md) +- [Self-service password reset troubleshooting](../authentication/troubleshoot-sspr.md) +- [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md) +- [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md#collecting-pass-through-authentication-agent-logs) +- [Troubleshoot error codes for the Azure AD MFA NPS extension](../authentication/howto-mfa-nps-extension-errors.md) ++### On-premises agents management ++Adopting best practices can help the optimal operation of on-premises agents. Consider the following best practices: ++- Multiple Azure AD Application proxy connectors per connector group are recommended to provide seamless load balancing and high availability by avoiding single points of failure when accessing the proxy applications. If you presently have only one connector in a connector group that handles applications in production, you should deploy at least two connectors for redundancy. +- Creating and using an app proxy connector group for debugging purposes can be useful for troubleshooting scenarios and when onboarding new on-premises applications. We also recommend installing networking tools such as Message Analyzer and Fiddler in the connector machines. +- Multiple pass-through authentication agents are recommended to provide seamless load balancing and high availability by avoiding single point of failure during the authentication flow. Be sure to deploy at least two pass-through authentication agents for redundancy. 
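To check how many Application Proxy connectors are registered and which connector groups exist (per the first bullet above), the AzureAD PowerShell module exposes the connector objects. A minimal sketch, assuming the module's Application Proxy cmdlets are available in your tenant:

```powershell
# List Application Proxy connectors and connector groups so you can confirm
# that each production connector group has at least two healthy connectors.
Connect-AzureAD

Get-AzureADApplicationProxyConnector |
    Select-Object MachineName, ExternalIp, Status

Get-AzureADApplicationProxyConnectorGroup |
    Select-Object Name, ConnectorGroupType, IsDefault
```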
++#### On-premises agents management recommended reading ++- [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md) +- [Azure AD Pass-through Authentication - quickstart](../hybrid/how-to-connect-pta-quick-start.md#step-4-ensure-high-availability) ++## Management at scale ++### Identity secure score ++The [identity secure score](./../fundamentals/identity-secure-score.md) provides a quantifiable measure of the security posture of your organization. It's key to constantly review and address findings reported and strive to have the highest score possible. The score helps you to: ++- Objectively measure your identity security posture +- Plan identity security improvements +- Review the success of your improvements ++![Secure score](./media/ops-guide-auth/ops-img17.png) ++If your organization currently has no program in place to monitor changes in Identity Secure Score, it is recommended you implement a plan and assign owners to monitor and drive improvement actions. Organizations should remediate improvement actions with a score impact higher than 30 as soon as possible. ++### Notifications ++Microsoft sends email communications to administrators to notify various changes in the service, configuration updates that are needed, and errors that require admin intervention. It's important that customers set the notification email addresses so that notifications are sent to the proper team members who can acknowledge and act upon all notifications. We recommend you add multiple recipients to the [Message Center](/office365/admin/manage/message-center) and request that notifications (including Azure AD Connect Health notifications) be sent to a distribution list or shared mailbox. If you only have one Global Administrator account with an email address, be sure to configure at least two email-capable accounts. 
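To see which accounts currently hold the Global Administrator role, and therefore whether more than one notification-capable mailbox is attached to an administrator, here's a minimal sketch with the AzureAD module; older versions of the module list the role under its legacy name, Company Administrator:

```powershell
# List the members of the Global Administrator role so you can confirm that
# notifications can reach more than one email-capable admin account.
Connect-AzureAD

$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -in 'Global Administrator', 'Company Administrator' }

Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
    Select-Object DisplayName, UserPrincipalName
```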
++There are two "From" addresses used by Azure AD: <o365mc@email2.microsoft.com>, which sends Message Center notifications; and <azure-noreply@microsoft.com>, which sends notifications related to: ++- [Azure AD Access Reviews](../governance/access-reviews-overview.md) +- [Azure AD Connect Health](../hybrid/how-to-connect-health-operations.md#enable-email-notifications) +- [Azure AD Identity Protection](../identity-protection/howto-identity-protection-configure-notifications.md) +- [Azure AD Privileged Identity Management](../privileged-identity-management/pim-email-notifications.md) +- [Enterprise App Expiring Certificate Notifications](../manage-apps/manage-certificates-for-federated-single-sign-on.md#add-email-notification-addresses-for-certificate-expiration) +- Enterprise App Provisioning Service Notifications ++Refer to the following table to learn which types of notifications are sent and where to check for them: ++| Notification source | What is sent | Where to check | +|:-|:-|:-| +| Technical contact | Sync errors | Azure portal - properties blade | +| Message Center | Incident and degradation notices of Identity Services and Microsoft 365 backend services | Office Portal | +| Identity Protection Weekly Digest | Identity Protection Digest | Azure AD Identity Protection blade | +| Azure AD Connect Health | Alert notifications | Azure portal - Azure AD Connect Health blade | +| Enterprise Applications Notifications | Notifications when certificates are about to expire and provisioning errors | Azure portal - Enterprise Application blade (each app has its own email address setting) | ++#### Notifications recommended reading ++- [Change your organization's address, technical contact, and more](/office365/admin/manage/change-address-contact-and-more) ++## Operational surface area ++### AD FS lockdown ++Organizations that configure applications to authenticate directly to Azure AD benefit from [Azure AD smart lockout](../authentication/concept-sspr-howitworks.md). If you use AD FS in Windows Server 2012 R2, implement AD FS [extranet lockout protection](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-soft-lockout-protection). If you use AD FS on Windows Server 2016 or later, implement [extranet smart lockout](https://support.microsoft.com/help/4096478/extranet-smart-lockout-feature-in-windows-server-2016). At a minimum, we recommend you enable extranet lockout to contain the risk of brute force attacks against on-premises Active Directory. However, if you have AD FS on Windows Server 2016 or later, you should also enable extranet smart lockout, which helps mitigate [password spray](https://www.microsoft.com/microsoft-365/blog/2018/03/05/azure-ad-and-adfs-best-practices-defending-against-password-spray-attacks/) attacks. ++If AD FS is only used for Azure AD federation, there are some endpoints that can be turned off to minimize the attack surface area. For example, if AD FS is only used for Azure AD, you should disable WS-Trust endpoints other than the endpoints enabled for **usernamemixed** and **windowstransport**. ++### Access to machines with on-premises identity components ++Organizations should lock down access to the machines that host on-premises hybrid components in the same way they lock down access to the on-premises domain. For example, a backup operator or Hyper-V administrator shouldn't be able to sign in to the Azure AD Connect server to change rules.
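A simple first check is to review who currently holds local administrator rights on those servers. A minimal sketch, run locally on the Azure AD Connect (or AD FS) server:

```powershell
# Review local Administrators membership on a server that hosts identity components.
Get-LocalGroupMember -Group 'Administrators' |
    Select-Object Name, PrincipalSource, ObjectClass
```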
++The Active Directory administrative tier model was designed to protect identity systems using a set of buffer zones between full control of the Environment (Tier 0) and the high-risk workstation assets that attackers frequently compromise. ++![Diagram showing the three layers of the Tier model](./media/ops-guide-auth/ops-img18.png) ++The [tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material) is composed of three levels and only includes administrative accounts, not standard user accounts. ++- **Tier 0** - Direct Control of enterprise identities in the environment. Tier 0 includes accounts, groups, and other assets that have direct or indirect administrative control of the Active Directory forest, domains, or domain controllers, and all the assets in it. The security sensitivity of all Tier 0 assets is equivalent as they're all effectively in control of each other. +- **Tier 1** - Control of enterprise servers and applications. Tier 1 assets include server operating systems, cloud services, and enterprise applications. Tier 1 administrator accounts have administrative control of a significant amount of business value that is hosted on these assets. A common example role is server administrators who maintain these operating systems with the ability to impact all enterprise services. +- **Tier 2** - Control of user workstations and devices. Tier 2 administrator accounts have administrative control of a significant amount of business value that is hosted on user workstations and devices. Examples include Help Desk and computer support administrators because they can impact the integrity of almost any user data. ++Lock down access to on-premises identity components such as Azure AD Connect, AD FS, and SQL services the same way as you do for domain controllers. ++## Summary ++There are seven aspects to a secure Identity infrastructure. This list will help you find the actions you should take to optimize the operations for Azure Active Directory (Azure AD). ++- Assign owners to key tasks. +- Automate the upgrade process for on-premises hybrid components. +- Deploy Azure AD Connect Health for monitoring and reporting of Azure AD Connect and AD FS. +- Monitor the health of on-premises hybrid components by archiving and analyzing the component agent logs using System Center Operations Manager or a SIEM solution. +- Implement security improvements by measuring your security posture with Identity Secure Score. +- Lock down AD FS. +- Lock down access to machines with on-premises identity components. ++## Next steps ++Refer to the [Azure AD deployment plans](deployment-plans.md) for implementation details on any capabilities you haven't deployed. |
active-directory | Parallel Identity Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/parallel-identity-options.md | + + Title: 'Parallel and combined identity infrastructure options' +description: This article describes the various options available for organizations to run multiple tenants and multicloud scenarios ++++++ na + Last updated : 08/17/2022++++++# Parallel and combined identity infrastructure options ++Microsoft delivers a range of technologies and solutions that integrate the different on-premises and cloud components of an identity infrastructure. Customers are often unclear about which technologies are the best fit and may incorrectly assume that "the most recent release covers all scenarios of earlier technology releases." ++This article covers the complex scenarios outlined below, in which your company is looking to combine its identity information. Ideally, an organization with a single HR source, a single Active Directory forest, and a single Azure Active Directory (Azure AD) tenant, all integrated with the same people in each, will have the best identity experience for their Microsoft Online Services. However, in practice, an enterprise customer may not always be in a situation where that is possible. For example, the customer may be going through a merger, or have a need for isolation for some users or applications. A customer who has multiple HR systems, Active Directory forests, or Azure AD tenants must decide whether to combine them into fewer instances of each or keep them in parallel. ++Based on our customer feedback, the following are some of the common scenarios and requirements. ++## Scenarios that come up for multicloud and multi-org identities ++- Mergers and acquisitions (M&A) - refers to a situation where, usually, Company A buys Company B. +- Rebranding - A company name or brand change, typically with an email domain name change. +- Azure AD or Office 365 tenant consolidation - Companies with more than one Office 365 tenant may want to combine because of compliance or historic requirements. +- Active Directory Domain or forest consolidation - Companies evaluating an Active Directory domain or forest consolidation. +- Divestitures - Where a division or business group of a company is sold or becomes independent. +- User information privacy - Where companies have requirements to keep certain data (attributes) from being publicly visible, so that only the right delegated groups or users can read, change, and update it. ++## Requirements that stem from these scenarios ++- Bring all users' and groups' data to a single place, including email and availability status for meeting scheduling, by creating a central or **universal directory**. +- Maintain a **single username and credentials** while reducing the need to enter usernames and passwords across all applications by implementing single sign-on. +- Streamline user onboarding so it doesn't take weeks or months. +- Prepare the organization for future acquisitions and access management demands. +- Enable and improve cross-company collaboration and productivity. +- Reduce the likelihood of a security breach or data exfiltration with security policies deployed centrally and consistently. ++## Scenarios not covered in this article ++- Partial M&A. For example, an organization buys part of another organization. +- Divestiture or splitting organizations +- Renaming organizations.
+- Joint ventures or temporary partners ++This article outlines various multicloud or multi-org identity environments including M&A scenarios that Microsoft supports today and outline how an organization might select the right technologies depending upon how they approach consolidation. ++## Consolidation options for a hypothetical M&A scenario ++The following sections cover four main scenarios for a hypothetical M&A scenario: ++Suppose Contoso is an enterprise customer, and their IT has a single (on-premises) HR system, single Active Directory forest, single tenant Azure AD for their apps, running as expected. Users are brought in from their HR system into Active Directory and projected into Azure AD and from there into SaaS apps. This scenario is illustrated with the diagram below, with the arrows showing the flow of identity information. The same model is also applicable to customers with cloud HR system such as Workday or SuccessFactors provisioning Active Directory, not just customers using Microsoft Identity Manager (MIM). ++![single instance of each component](media/parallel-identity-options/identity-combined-1.png) + +Next, Contoso has begun to merge with Litware, which has previously been running their own IT independently. Contoso IT will handle the merger and expects that Contoso's IT will continue to have Contoso's apps remain unchanged, but they want to be able to have Litware's users receive access to them and collaborate within those apps. For Microsoft apps, third-party SaaS, and custom apps, the end state should be that Contoso and Litware users conceptually have access to the same data. ++The first IT decision is how much they wish to combine infrastructure. They could choose to not rely upon any of Litware's identity infrastructure. Or they could consider using Litware's infrastructure and converging over time while minimizing disruption to Litware's environment. In some cases, the customer may wish to keep Litware's existing identity infrastructure independent and not converging it, while still using it to give Litware employee access to Contoso apps. ++If the customer chooses to keep some or all Litware's identity infrastructure, then there are tradeoffs on how much of Litware's Active Directory Domain Services or Azure AD are used to give Litware users access to Contoso resources. This section looks at workable options, based on what Contoso would use for Litware's users: ++- Scenario A - Don't use *any* of Litware's identity infrastructure. +- Scenario B - Use Litware's Active Directory forests, but not Litware's Azure AD (if they've one) +- Scenario C - Use Litware's Azure AD. +- Scenario D - Use Litware's non-Microsoft identity infrastructure (if Litware isn't using Active Directory/Azure AD) ++The following table summarizes each option with the technologies for how the customer could achieve those outcomes, the constraints, and benefits of each. 
++| Considerations | A1: Single HR, single IAM & tenant | A2: Separate HR, single IAM, and tenant | B3: Active Directory forest trust, single Azure AD Connect | B4: Azure AD Connect their Active Directory to the single tenant | B5: Azure AD Connect cloud sync their Active Directory | C6: parallel provision multiple tenants into apps | C7: read from their tenant and B2B invite their users | C8: single IAM and B2B users as needed | D9: DF with their non-Azure AD IDP | +|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| +| Migration effort | High | Medium effort | Lower effort | Low effort | Low effort | None | None | None | None | +| Deployment effort | Less effort | Medium effort | Medium effort | Medium effort | Low | Low | High | High | Very High | +| End-user impact during migration | High | High | Medium | Medium | Medium | None | None | None | None | +| Operating effort | Low cost | Low cost | Low cost | Low cost | Low cost | High | High | High | Very High | +| Privacy and data capabilities (geo location/data boundaries) | None (Major roadblock for geo-location scenarios) | Limited isolation even though challenging | Limited isolation on-prem but not on the cloud | Limited isolation on-prem but not on the cloud | Limited isolation on-prem but not on the cloud | Good isolation both on-prem and on the cloud | Limited isolation both on-prem and cloud | Limited isolation both on-prem and cloud | Isolation both on-prem and on the cloud | +| Isolation (separate delegation and setup different admin models) Note: as defined in source system (HR) | Not possible | Possible | Possible | Possible | Possible | Highly Possible | Highly possible | Highly possible | Possible | +| Collaboration capabilities | Excellent | Excellent | Excellent | Excellent | Excellent | Poor | Average | Average | Poor | +| IT admin model supported (centralized vs. separated) | Centralized | Centralized | Centralized | Centralized | Centralized | Decentralized | Decentralized | Decentralized | Actively Decentralized | +| Limitations | No isolation | Limited isolation | Limited isolation | Limited isolation | Limited isolation. No writeback capabilities | Won't work for Microsoft Online Services apps. Highly dependent on app capability | Requires apps to be B2B aware | Requires apps to be B2B aware | Require apps to be B2B aware. Uncertainty in how it all works together | ++Table details ++- The employee effort tries to predict the required expertise and extra work required to implement the solution in an organization. +- Operating effort tries to predict the cost and effort it takes to keep the solution running. +- Privacy and data capabilities show if the solution allows support for geo location and data boundaries. +- Isolation shows if this solution supplies the ability to separate or delegate admin models. +- Collaboration capabilities show the level of collaboration the solution supports, more integrated solutions supply higher fidelity of teamwork. +- The IT admin model shows if the admin model requires to centralized or can be decentralized. +- Limitations: any issues of challenges worth listing. ++### Decision tree ++Use the following decision tree to help you decide which scenario would work best for your organization. ++[![decision tree.](media/parallel-identity-options/identity-decision-tree.png)](media/parallel-identity-options/identity-decision-tree.png#lightbox) ++The rest of this document, will outline four scenarios A-D with various options supporting them. 
++## Scenario A - If Contoso doesn't wish to rely upon Litware's existing identity infrastructure ++For this option, Litware may not have any identity systems (for example, a small business), or the customer may wish to turn off Litware's infrastructure. Or they wish to leave it untouched, for use by Litware employees to authenticate to Litware's apps but give Litware employees new identities as part of Contoso. For example, if Alice Smith was a Litware employee, she might have two identities ΓÇô Alice@litware.com and ASmith123@contoso.com. Those identities would be entirely distinct from each other. ++### Option 1 - Combine into a single HR system ++Typically, customers would bring the Litware employees into the Contoso HR system. This option would trigger those employees to receive accounts and the right access to Contoso's directories and apps. A Litware user would then have a new Contoso identity, which they could use to request access to the right Contoso apps. ++### Option 2 - Keep Litware HR system ++Sometimes converging the HR systems may not be possible, at least not in the short term. Instead, the customer would connect their provisioning system, for example, MIM, to read from *both* HR systems. In this diagram, the top HR is the existing Contoso environment, and the second HR is Litware's addition to the overall infrastructure. ++![Retain Litware HR system](media/parallel-identity-options/identity-combined-2.png) ++The same scenario would also be possible using Azure AD Workday or SuccessFactors inbound ΓÇô Contoso could bring in users from Litware's Workday HR source alongside existing Contoso employees. ++### Outcomes of consolidating all identity infrastructure ++- Reduced IT infrastructure, only one identity system to manage, no network connectivity requirements except for an HR system. +- Consistent end user and administrative experience ++### Constraints of consolidating all identity infrastructure ++- Any data that is needed by Contoso employees that originated in Litware must be migrated to the Contoso environment. +- Any Active Directory or Azure AD-integrated apps from Litware that will be needed for Contoso must be reconfigured to the Contoso environment. This reconfiguration may require changes to the configuration, which groups it uses for access, or potentially to the apps themselves. ++## Scenario B - If Contoso wishes to keep Litware's Active Directory forests, but not use Litware's Azure AD ++Litware may have many existing Active Directory-based apps that they rely on, and so Contoso may wish to continue to have Litware employees keep their own identities in their existing AD. A Litware employee would then use their existing identity for their authentication of their existing resources and authentication of Contoso resources. In this scenario, Litware doesn't have any cloud identities in Microsoft Online Services ΓÇô either Litware wasn't an Azure AD customer, nothing of Litware's cloud assets were to be shared with Contoso, or Contoso migrated Litware's cloud assets to be part of Contoso's tenant. ++### Option 3 - Forest trust with the acquired forest ++Using an [Active Directory forest trust](/windows-server/identity/ad-ds/plan/forest-design-models), Contoso and Litware can connect their Active Directory domains. This trust enables Litware users to authenticate Contoso's Active Directory-integrated apps. 
Also [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) can also read from Litware's Active Directory forest so that Litware users authenticate with Contoso's Azure AD integrated apps. This deployment topology requires a network route set up between the two domains, and TCP/IP network connectivity between any Litware user and Contoso Active Directory-integrated app. It's also straightforward to set up bidirectional trusts, so that Contoso users can access Litware AD-integrated apps (if any). ++![forest trust with single tenant](media/parallel-identity-options/identity-combined-3.png) ++### Outcome of setting up a forest trust ++- All Litware employees can authenticate Contoso's Active Directory or Azure AD-integrated apps, and Contoso can use current AD-based tools to manage authorization. ++### Constraints of setting up a forest trust ++- Requires TCP/IP connectivity between users who are domain joined to one forest and resources joined to the other forest. +- Requires the Active Directory-based apps in the Contoso forest to be multi-forest-aware ++### Option 4 - Configure Azure AD Connect to the acquired forest without forest trust ++A customer can also configure Azure AD Connect to read from another forest. This configuration enables the Litware users to authenticate to Contoso's Azure AD integrated apps but doesn't supply access to Contoso's Active Directory integrated apps to the Litware user ΓÇô those Contoso apps don't recognize Litware users. This deployment topology requires TCP/IP network connectivity between Azure AD Connect and Litware's domain controllers. For example, if Azure AD Connect is on a Contoso IaaS VM, they would need to establish a tunnel also to Litware's network as well. ++![Azure AD Connect two forests](media/parallel-identity-options/identity-combined-4.png) ++### Outcome of using Azure AD Connect to provision one tenant ++- All Litware employees can authenticate Contoso's Azure AD integrated apps. ++### Constraints of using Azure AD Connect to provision one tenant ++- Requires TCP/IP connectivity between Contoso's Azure AD Connect and Litware's Active Directory domains. +- Doesn't permit Litware users to have access to Contoso's Active Directory based applications ++### Option 5 - Deploy Azure AD Connect cloud sync in the acquired forest ++[Azure AD Connect cloud provisioning](../cloud-sync/what-is-cloud-sync.md) removes the network connectivity requirement, but you can only have one Active Directory to Azure AD linking for a given user with cloud sync. Litware users can authenticate Contoso's Azure AD integrated apps, but not Contoso's Active Directory-integrated apps. This topology doesn't require any TCP/IP connectivity between Litware and Contoso's on-premises environments. ++![Deploy Azure AD Connect cloud sync in the acquired forest](media/parallel-identity-options/identity-combined-5.png) ++### Outcome of deploying Azure AD Connect cloud sync in the acquired forest ++- All Litware employees can authenticate Contoso's Azure AD-integrated apps. ++### Constraints of using Azure AD Connect cloud sync in the acquired forest ++- Doesn't permit Litware users to have access to Contoso's AD-based applications ++## Scenario C - If Contoso wants to keep Litware's Azure AD ++Litware may be a Microsoft Online Services or Azure customer or may have one or more Azure AD-based apps that they rely on. So, Contoso may want to continue to have Litware employees keep their own identities for access to those resources. 
A Litware employee would then use their existing identity for their authentication of their existing resources and authentication of Contoso resources. ++This scenario is suitable in cases where: ++- Litware has an extensive Azure or Microsoft Online Services investment including multiple Office 365 tenants that would be costly or time consuming to migrate to another tenant. +- Litware may be spun out in future or is a partnership that will run independently. +- Litware doesn't have on-premises infrastructure ++### Option 6 - Maintain parallel provisioning and SSO for apps in each Azure AD ++One option is for each Azure AD to independently provide SSO and [provision](../app-provisioning/user-provisioning.md) users from their directory into the target app. For example, if Contoso IT are using an app such as Salesforce, they would provide Litware with administrative rights to create users in the same Salesforce subscription. ++![parallel provisioning for apps](media/parallel-identity-options/identity-combined-6.png) ++### Outcome of parallel provisioning ++- Users can authenticate apps using their existing identity, without making changes to Contoso's infrastructure. ++### Constraints of parallel provisioning ++- If using federation, it requires applications to support multiple federation providers for the same subscription. +- Not possible for Microsoft apps such as Office or Azure +- Contoso doesn't have visibility in their Azure AD of application access for Litware users ++### Option 7 - Configure B2B accounts for users from the acquired tenant ++If Litware has been running its own tenant, then Contoso can read the users from that tenant, and through the B2B API, invite each of those users into the Contoso tenant. (This bulk invite process can be done through the [MIM graph connector](/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph), for example.) If Contoso also has AD-based apps that they wish to make available to Litware users, then MIM could also create users in Active Directory that would map to the UPNs of Azure AD users, so that the app proxy could perform KCD on behalf of a representation of a Litware user in Contoso's Active Directory. ++Then when a Litware employee wishes to access a Contoso app, they can do so by authenticating to their own directory, with access assignment to the resource tenant. ++![configure B2B accounts for user from the other tenant](media/parallel-identity-options/identity-combined-7.png) ++### Outcome of setting up B2B accounts for users from the other tenant ++- Litware users can authenticate Contoso apps, and Contoso controls that access in their tenant. ++### Constraints of setting up B2B accounts for users from the other tenant ++- It requires a duplicate account for each Litware user who requires access to Contoso resources. +- Requires the apps to be B2B capable for SSO. ++### Option 8 - Configure B2B but with a common HR feed for both directories ++In some situations, after acquisition the organization may converge on a single HR platform, but still run existing identity management systems. In this scenario, MIM could provision users into multiple Active Directory systems, depending on which part of the organization the user is affiliated with. They could continue to use B2B so that users authenticate their existing directory, and have a unified GAL. 
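The bulk-invitation step described for options 7 and 8 can be scripted against the B2B invitation API. The following is a hedged sketch using the AzureAD module; the CSV file and its Email column are hypothetical placeholders for however Litware's user list is shared (the option 8 topology is shown in the diagram that follows).

```powershell
# Invite each Litware user into the Contoso tenant as a B2B guest.
# litware-users.csv is a hypothetical export containing an Email column.
Connect-AzureAD

Import-Csv .\litware-users.csv | ForEach-Object {
    New-AzureADMSInvitation `
        -InvitedUserEmailAddress $_.Email `
        -InviteRedirectUrl 'https://myapps.microsoft.com' `
        -SendInvitationMessage $false
}
```

In production, this kind of invitation run would typically be driven by the MIM Graph connector or another provisioning process rather than an ad-hoc script.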
++![Configure B2B users but with a common HR system feed](media/parallel-identity-options/identity-combined-8.png) ++### Outcome of setting up B2B guest users from a common HR system feed ++- Litware users can authenticate to Contoso apps, and Contoso controls that access in its tenant. +- Litware and Contoso have a unified GAL. +- No change is required to Litware's Active Directory or Azure AD. ++### Constraints of setting up B2B guest users from a common HR system feed ++- Requires changes to Contoso's provisioning to also send users to Litware's Active Directory, and connectivity between Litware's domains and Contoso's domains. +- Requires the apps to be B2B capable for SSO. ++## Scenario D - If Litware is using non-Active Directory infrastructure ++Finally, if Litware is using another directory service, either on-premises or in the cloud, Contoso IT can still configure Litware employees to authenticate and get access to Contoso's resources by using their existing identities. ++### Option 9 - Use B2B direct federation (public preview) ++In this scenario, Litware is assumed to have: ++- Some existing directories, such as OpenLDAP, or even a SQL database or flat file of users with their email addresses, that they can regularly share with Contoso. +- An identity provider that supports SAML, such as PingFederate or Okta. +- A publicly routed DNS domain, such as Litware.com, with users who have email addresses in that domain. ++In this approach, Contoso would configure a [direct federation](../external-identities/direct-federation.md) relationship from their tenant to Litware's identity provider for that domain, and then regularly read updates to Litware users from their directory to invite the Litware users into Contoso's Azure AD. This update can be done with a MIM Graph connector. If Contoso also has Active Directory-based apps that they wish to make available to Litware users, then MIM could also create users in Active Directory that would map to the UPNs of Azure AD users, so that the app proxy could perform KCD on behalf of a representation of a Litware user in Contoso's Active Directory. ++![Use B2B direct federation](media/parallel-identity-options/identity-combined-9.png) ++### Outcome of using B2B direct federation ++- Litware users authenticate to Contoso's Azure AD with their existing identity provider and access Contoso's cloud and on-premises web apps. ++### Constraints of using B2B direct federation ++- Requires the Contoso apps to be able to support SSO for B2B users. ++## Next steps ++- [What is Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md) +- [Set up inbound provisioning for Azure AD](../app-provisioning/plan-cloud-hr-provision.md) +- [Set up B2B direct federation](../external-identities/direct-federation.md) +- [Multi-tenant user management options](multi-tenant-user-management-introduction.md) +- [What is application provisioning?](../app-provisioning/user-provisioning.md) |
active-directory | Protect M365 From On Premises Attacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/protect-m365-from-on-premises-attacks.md | + + Title: Protecting Microsoft 365 from on-premises attacks +description: Learn how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise. +++++++ Last updated : 08/26/2022++++ - it-pro + - seodec18 + - kr2b-contr-experiment +++ +# Protecting Microsoft 365 from on-premises attacks ++Many customers connect their private corporate networks to Microsoft 365 to benefit their users, devices, and applications. However, these private networks can be compromised in many well-documented ways. Microsoft 365 acts as a sort of nervous system for many organizations. It's critical to protect it from compromised on-premises infrastructure. ++This article shows you how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise, including the following elements: ++- Azure Active Directory (Azure AD) tenant configuration settings +- How Azure AD tenants can be safely connected to on-premises systems +- The tradeoffs required to operate your systems in ways that protect your cloud systems from on-premises compromise ++Microsoft strongly recommends that you implement this guidance. ++## Threat sources in on-premises environments ++Your Microsoft 365 cloud environment benefits from an extensive monitoring and security infrastructure. Microsoft 365 uses machine learning and human intelligence to look across worldwide traffic. It can rapidly detect attacks and allow you to reconfigure nearly in real time. ++Hybrid deployments can connect on-premises infrastructure to Microsoft 365. In such deployments, many organizations delegate trust to on-premises components for critical authentication and directory object state management decisions. If the on-premises environment is compromised, these trust relationships become an attacker's opportunities to compromise your Microsoft 365 environment. ++The two primary threat vectors are *federation trust relationships* and *account synchronization.* Both vectors can grant an attacker administrative access to your cloud. ++- **Federated trust relationships**, such as Security Assertions Markup Language (SAML) authentication, are used to authenticate to Microsoft 365 through your on-premises identity infrastructure. If a SAML token-signing certificate is compromised, federation allows anyone who has that certificate to impersonate any user in your cloud. ++ We recommend that you disable federation trust relationships for authentication to Microsoft 365 when possible. ++- **Account synchronization** can be used to modify privileged users, including their credentials, or groups that have administrative privileges in Microsoft 365. ++ We recommend that you ensure that synchronized objects hold no privileges beyond a user in Microsoft 365. You can control privileges either directly or through inclusion in trusted roles or groups. Ensure these objects have no direct or nested assignment in trusted cloud roles or groups. ++## Protecting Microsoft 365 from on-premises compromise ++To address the threats described above, we recommend you adhere to the principles illustrated in the following diagram: ++![Reference architecture for protecting Microsoft 365, as described in the following list.](media/protect-m365/protect-m365-principles.png) ++1. 
**Fully isolate your Microsoft 365 administrator accounts.** They should be: ++ - Mastered in Azure AD. + - Authenticated by using multifactor authentication. + - Secured by Azure AD Conditional Access. + - Accessed only by using Azure-managed workstations. ++ These administrator accounts are restricted-use accounts. No on-premises accounts should have administrative privileges in Microsoft 365. ++ For more information, see [About admin roles](/microsoft-365/admin/add-users/about-admin-roles). Also, see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md). ++1. **Manage devices from Microsoft 365.** Use Azure AD join and cloud-based mobile device management (MDM) to eliminate dependencies on your on-premises device management infrastructure. These dependencies can compromise device and security controls. ++1. **Ensure no on-premises account has elevated privileges to Microsoft 365.** Some accounts access on-premises applications that require NTLM, LDAP, or Kerberos authentication. These accounts must be in the organization's on-premises identity infrastructure. Ensure that these accounts, including service accounts, aren't included in privileged cloud roles or groups. Also ensure that changes to these accounts can't affect the integrity of your cloud environment. Privileged on-premises software must not be capable of affecting Microsoft 365 privileged accounts or roles. ++1. **Use Azure AD cloud authentication to eliminate dependencies on your on-premises credentials.** Always use strong authentication, such as Windows Hello, FIDO, Microsoft Authenticator, or Azure AD multifactor authentication. ++## Specific security recommendations ++The following sections provide guidance about how to implement the principles described above. ++### Isolate privileged identities ++In Azure AD, users who have privileged roles, such as administrators, are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the effects of a compromise. ++- Use cloud-only accounts for Azure AD and Microsoft 365 privileged roles. ++- Deploy privileged access devices for privileged access to manage Microsoft 365 and Azure AD. See [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles). ++ Deploy Azure AD Privileged Identity Management (PIM) for just-in-time access to all human accounts that have privileged roles. Require strong authentication to activate roles. See [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md). ++- Provide administrative roles that allow the least privilege necessary to do required tasks. See [Least privileged roles by task in Azure Active Directory](../roles/delegate-by-task.md). ++- To enable a rich role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups. These groups are collectively called *cloud groups*. ++ Also, enable role-based access control. See [Assign Azure AD roles to groups](../roles/groups-assign-role.md). You can use administrative units to restrict the scope of roles to a portion of the organization. See [Administrative units in Azure Active Directory](../roles/administrative-units.md). ++- Deploy emergency access accounts. Do *not* use on-premises password vaults to store credentials. See [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md). 
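One way to verify the cloud-only-accounts practice above is to periodically check whether any directory-synchronized account holds an Azure AD role. The following is a minimal, read-only sketch using Microsoft Graph; the role and property names come from the Graph `directoryRoles` and `user` resources, but the warning-based reporting format is an assumption, not part of this guidance.

```powershell
# Illustrative check: flag role members that are synchronized from on-premises AD.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory","User.Read.All"

# directoryRoles returns the roles that are currently activated in the tenant.
$roles = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/directoryRoles").value

foreach ($role in $roles) {
    $members = (Invoke-MgGraphRequest -Method GET `
        -Uri "https://graph.microsoft.com/v1.0/directoryRoles/$($role.id)/members").value

    foreach ($member in $members | Where-Object { $_.'@odata.type' -eq '#microsoft.graph.user' }) {
        # onPremisesSyncEnabled is true for accounts mastered in on-premises Active Directory.
        $user = Invoke-MgGraphRequest -Method GET `
            -Uri "https://graph.microsoft.com/v1.0/users/$($member.id)?`$select=displayName,userPrincipalName,onPremisesSyncEnabled"

        if ($user.onPremisesSyncEnabled) {
            Write-Warning "$($user.userPrincipalName) is synced from on-premises and holds the $($role.displayName) role."
        }
    }
}
```

Any account this surfaces is a candidate either for removal from the role or for replacement with a cloud-mastered administrative account.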
++For more information, see [Securing privileged access](/security/compass/overview). Also, see [Secure access practices for administrators in Azure AD](../roles/security-planning.md). ++### Use cloud authentication ++Credentials are a primary attack vector. Implement the following practices to make credentials more secure: ++- **Deploy passwordless authentication**. Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and validated natively in the cloud. For more information, see [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md). ++ Choose from these authentication methods: ++ - [Windows Hello for business](/windows/security/identity-protection/hello-for-business/passwordless-strategy) + - [The Microsoft Authenticator app](../authentication/howto-authentication-passwordless-phone.md) + - [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key-windows.md) ++- **Deploy multifactor authentication**. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md). ++ Provision multiple strong credentials by using Azure AD multifactor authentication. That way, access to cloud resources requires an Azure AD managed credential in addition to an on-premises password. For more information, see [Build resilience with credential management](../fundamentals/resilience-in-credentials.md) and [Create a resilient access control management strategy by using Azure AD](./resilience-overview.md). ++### Limitations and tradeoffs ++Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. This vulnerability won't compromise your cloud infrastructure. But your cloud accounts won't protect these components from on-premises compromise. ++On-premises accounts synced from Active Directory are marked to never expire in Azure AD. This setting is usually mitigated by on-premises Active Directory password settings. If your instance of Active Directory is compromised and synchronization is disabled, set the [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md) option to force password changes. ++## Provision user access from the cloud ++*Provisioning* refers to the creation of user accounts and groups in applications or identity providers. ++![Diagram of provisioning architecture shows the interaction of Azure A D with Cloud HR, Azure A D B 2 B, Azure app provisioning, and group-based licensing.](media/protect-m365/protect-m365-provision.png) ++We recommend the following provisioning methods: ++- **Provision from cloud HR apps to Azure AD.** This provisioning enables an on-premises compromise to be isolated. This isolation doesn't disrupt your joiner-mover-leaver cycle from your cloud HR apps to Azure AD. +- **Cloud applications.** Where possible, deploy Azure AD app provisioning as opposed to on-premises provisioning solutions. This method protects some of your software as a service (SaaS) apps from malicious hacker profiles in on-premises breaches. For more information, see [What is app provisioning in Azure Active Directory](../app-provisioning/user-provisioning.md). 
+- **External identities.** Use Azure AD B2B collaboration to reduce the dependency on on-premises accounts for external collaboration with partners, customers, and suppliers. Carefully evaluate any direct federation with other identity providers. For more information, see [B2B collaboration overview](../external-identities/what-is-b2b.md). ++ We recommend limiting B2B guest accounts in the following ways: ++ - Limit guest access to browsing groups and other properties in the directory. Use the external collaboration settings to restrict guests' ability to read groups they're not members of. + - Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users. Then implement a policy to block access. See [Conditional Access](../conditional-access/concept-conditional-access-cloud-apps.md). ++- **Disconnected forests.** Use Azure AD cloud provisioning to connect to disconnected forests. This approach eliminates the need to establish cross-forest connectivity or trusts, which can broaden the effect of an on-premises breach. For more information, see [What is Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md). ++### Limitations and tradeoffs ++When used to provision hybrid accounts, the Azure-AD-from-cloud-HR system relies on on-premises synchronization to complete the data flow from Active Directory to Azure AD. If synchronization is interrupted, new employee records won't be available in Azure AD. ++## Use cloud groups for collaboration and access ++Cloud groups allow you to decouple your collaboration and access from your on-premises infrastructure. ++- **Collaboration**. Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in Outlook](/office365/admin/manage/upgrade-distribution-lists). +- **Access**. Use Azure AD security groups or Microsoft 365 Groups to authorize access to applications in Azure AD. +- **Office 365 licensing**. Use group-based licensing to provision to Office 365 by using cloud-only groups. This method decouples control of group membership from on-premises infrastructure. ++Owners of groups that are used for access should be considered privileged identities to avoid membership takeover in an on-premises compromise. A takeover would include direct manipulation of group membership on-premises or manipulation of on-premises attributes that can affect dynamic group membership in Microsoft 365. ++## Manage devices from the cloud ++Use Azure AD capabilities to securely manage devices. ++Deploy Azure AD joined Windows 10 workstations with mobile device management policies. Enable Windows Autopilot for a fully automated provisioning experience. See [Plan your Azure AD join implementation](../devices/device-join-plan.md) and [Windows Autopilot](/mem/autopilot/windows-autopilot). ++- **Use Windows 10 workstations**. + - Deprecate machines that run Windows 8.1 and earlier. + - Don't deploy computers that have server operating systems as workstations. +- **Use Microsoft Intune as the authority for all device management workloads.** See [Microsoft Intune](https://www.microsoft.com/security/business/endpoint-management/microsoft-intune). +- **Deploy privileged access devices.** For more information, see [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles). 
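To support the workstation guidance above (for example, deprecating Windows 8.1 and earlier), it can help to inventory registered devices by operating system before you enforce policy. The following is a rough, read-only sketch against the Graph `devices` resource; the grouping and output format are assumptions, and any version cut-off you act on is your own decision.

```powershell
# Illustrative inventory: group registered devices by operating system and version.
Connect-MgGraph -Scopes "Device.Read.All"

$uri = "https://graph.microsoft.com/v1.0/devices?`$select=displayName,operatingSystem,operatingSystemVersion,trustType"
$devices = @()

# Follow @odata.nextLink paging so the inventory covers every registered device.
do {
    $page = Invoke-MgGraphRequest -Method GET -Uri $uri
    $devices += $page.value
    $uri = $page.'@odata.nextLink'
} while ($uri)

$devices |
    Group-Object operatingSystem, operatingSystemVersion |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize
```

Devices that report server operating systems or out-of-support Windows versions are candidates for retirement, or for exclusion through device-compliance requirements.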
++### Workloads, applications, and resources ++- **On-premises single-sign-on (SSO) systems** ++ Deprecate any on-premises federation and web access management infrastructure. Configure applications to use Azure AD. ++- **SaaS and line-of-business (LOB) applications that support modern authentication protocols** ++ Use Azure AD for SSO. The more apps you configure to use Azure AD for authentication, the less risk in an on-premises compromise. For more information, see [What is single sign-on in Azure Active Directory](../manage-apps/what-is-single-sign-on.md). ++- **Legacy applications** ++ You can enable authentication, authorization, and remote access to legacy applications that don't support modern authentication. Use [Azure AD Application Proxy](../app-proxy/application-proxy.md). Or, enable them through a network or application delivery controller solution by using secure hybrid access partner integrations. See [Secure legacy apps with Azure Active Directory](../manage-apps/secure-hybrid-access.md). ++ Choose a VPN vendor that supports modern authentication. Integrate its authentication with Azure AD. In an on-premises compromise, you can use Azure AD to disable or block access by disabling the VPN. ++- **Application and workload servers** ++ Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use Azure AD Domain Services (Azure AD DS) to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Azure AD DS don't have a connection to corporate networks. See [Azure AD Domain Services](../../active-directory-domain-services/overview.md). ++ Use credential tiering. Application servers are typically considered tier-1 assets. For more information, see [Enterprise access model](/security/compass/privileged-access-access-model#ADATM_BM). ++## Conditional Access policies ++Use Azure AD Conditional Access to interpret signals and use them to make authentication decisions. For more information, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md). ++- Use Conditional Access to block legacy authentication protocols whenever possible. Additionally, disable legacy authentication protocols at the application level by using an application-specific configuration. See [Block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md). ++ For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md#legacy-authentication-protocols). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant). ++- Implement the recommended identity and device access configurations. See [Common Zero Trust identity and device access policies](/microsoft-365/security/office-365-security/identity-access-policies). ++- If you're using a version of Azure AD that doesn't include Conditional Access, use [Security defaults in Azure AD](../fundamentals/concept-fundamentals-security-defaults.md). ++ For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). 
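As a concrete illustration of the legacy-authentication guidance above, a Conditional Access policy that blocks legacy protocols can be created through the Microsoft Graph Conditional Access API. The sketch below deliberately creates the policy in report-only mode so you can evaluate impact first; the display name and scoping are assumptions, and you would normally exclude emergency access accounts before enforcing it.

```powershell
# Illustrative sketch: report-only Conditional Access policy that blocks legacy authentication.
Connect-MgGraph -Scopes "Policy.Read.All","Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Block legacy authentication (report-only)"   # assumed name
    state       = "enabledForReportingButNotEnforced"           # evaluate impact before enforcing
    conditions  = @{
        users          = @{ includeUsers = @("All") }            # consider excluding break-glass accounts
        applications   = @{ includeApplications = @("All") }
        clientAppTypes = @("exchangeActiveSync", "other")        # "other" covers legacy protocols
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("block")
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body $policy -ContentType "application/json"
```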
++## Monitor ++After you configure your environment to protect your Microsoft 365 from an on-premises compromise, proactively monitor the environment. For more information, see [What is Azure Active Directory monitoring](../reports-monitoring/overview-monitoring.md). ++### Scenarios to monitor ++Monitor the following key scenarios, in addition to any scenarios specific to your organization. For example, you should proactively monitor access to your business-critical applications and resources. ++- **Suspicious activity** ++ Monitor all Azure AD risk events for suspicious activity. See [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md). Azure AD Identity Protection is natively integrated with [Microsoft Defender for Identity](/defender-for-identity/what-is). ++ Define network named locations to avoid noisy detections on location-based signals. See [Using the location condition in a Conditional Access policy](../conditional-access/location-condition.md). ++- **User and Entity Behavioral Analytics (UEBA) alerts** ++ Use UEBA to get insights on anomaly detection. Microsoft Defender for Cloud Apps provides UEBA in the cloud. See [Investigate risky users](/cloud-app-security/tutorial-ueba). ++ You can integrate on-premises UEBA from Azure Advanced Threat Protection (ATP). Microsoft Defender for Cloud Apps reads signals from Azure AD Identity Protection. See [Connect to your Active Directory Forest](/defender-for-identity/install-step2). ++- **Emergency access accounts activity** ++ Monitor any access that uses emergency access accounts. See [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md). Create alerts for investigations. This monitoring must include the following actions: ++ - Sign-ins + - Credential management + - Any updates on group memberships + - Application assignments ++- **Privileged role activity** ++ Configure and review security alerts generated by Azure AD Privileged Identity Management (PIM). Monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly. See [Security alerts](../privileged-identity-management/pim-how-to-configure-security-alerts.md?tabs=new#security-alerts). ++- **Azure AD tenant-wide configurations** ++ Any change to tenant-wide configurations should generate alerts in the system. These changes include but aren't limited to the following changes: ++ - Updated custom domains + - Azure AD B2B changes to allowlists and blocklists + - Azure AD B2B changes to allowed identity providers, such as SAML identity providers through direct federation or social sign-ins + - Conditional Access or Risk policy changes ++- **Application and service principal objects** ++ - New applications or service principals that might require Conditional Access policies + - Credentials added to service principals + - Application consent activity ++- **Custom roles** ++ - Updates to the custom role definitions + - Newly created custom roles ++### Log management ++Define a log storage and retention strategy, design, and implementation to facilitate a consistent tool set. For example, you could consider security information and event management (SIEM) systems like Microsoft Sentinel, common queries, and investigation and forensics playbooks. ++- **Azure AD logs**. Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion. 
++ The log strategy must include the following Azure AD logs: ++ - Sign-in activity + - Audit logs + - Risk events ++ Azure AD provides Azure Monitor integration for the sign-in activity log and audit logs. See [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). ++ Use the Microsoft Graph API to ingest risk events. See [Use the Microsoft Graph identity protection APIs](/graph/api/resources/identityprotection-root). ++ You can stream Azure AD logs to Azure Monitor logs. See [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). ++- **Hybrid infrastructure operating system security logs**. All hybrid identity infrastructure operating system logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements: ++ - Application Proxy agents + - Password writeback agents + - Password Protection Gateway machines + - Network policy servers (NPSs) that have the Azure AD multifactor authentication RADIUS extension + - Azure AD Connect ++ You must deploy Azure AD Connect Health to monitor identity synchronization. See [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). ++## Next steps ++- [Build resilience into identity and access management by using Azure AD](resilience-overview.md) +- [Secure external access to resources](secure-external-access-resources.md) +- [Integrate all your apps with Azure AD](../fundamentals/five-steps-to-full-application-integration.md) |
active-directory | Recover From Deletions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recover-from-deletions.md | + + Title: Recover from deletions in Azure Active Directory +description: Learn how to recover from unintended deletions. +++++++ Last updated : 11/14/2022+++++++# Recover from deletions ++This article addresses recovering from soft and hard deletions in your Azure Active Directory (Azure AD) tenant. If you haven't already done so, read [Recoverability best practices](recoverability-overview.md) for foundational knowledge. ++## Monitor for deletions ++The [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. Export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md). ++You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on how to find deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http). ++### Audit log ++The Audit log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state by either a soft or hard deletion. ++[![Screenshot that shows an Audit log with deletions.](./media/recoverability/delete-audit-log.png)](./media/recoverability/delete-audit-log.png#lightbox) ++A delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. Track the occurrence of hard-delete events by comparing "Delete \<object\>" events with the type of object that was deleted. Note the events that don't support soft delete. Also note "Hard Delete \<object\>" events. ++| Object type | Activity in log| Result | +| - | - | - | +| Application| Delete application| Soft deleted | +| Application| Hard delete application| Hard deleted | +| User| Delete user| Soft deleted | +| User| Hard delete user| Hard deleted | +| Microsoft 365 Group| Delete group| Soft deleted | +| Microsoft 365 Group| Hard delete group| Hard deleted | +| All other objects| Delete "objectType"| Hard deleted | ++> [!NOTE] +> The Audit log doesn't distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft deleted. If you see a Delete group entry, it might be the soft delete of a Microsoft 365 Group or the hard delete of another type of group. +> +>*It's important that your documentation of your known good state includes the group type for each group in your organization*. To learn more about documenting your known good state, see [Recoverability best practices](recoverability-overview.md). ++### Monitor support tickets ++A sudden increase in support tickets about access to a specific object might indicate that a deletion occurred. Because some objects have dependencies, deletion of a group used to access an application, an application itself, or a Conditional Access policy that targets an application can all cause broad sudden impact. If you see a trend like this, check to ensure that none of the objects required for access were deleted. ++## Soft deletions ++When objects such as users, Microsoft 365 Groups, or application registrations are soft deleted, they enter a suspended state in which they aren't available for use by other services. In this state, items retain their properties and can be restored for 30 days. 
After 30 days, objects in the soft-deleted state are permanently, or hard, deleted. ++> [!NOTE] +> Objects can't be restored from a hard-deleted state. They must be re-created and reconfigured. ++### When soft deletes occur ++It's important to understand why object deletions occur in your environment so that you can prepare for them. This section outlines frequent scenarios for soft deletion by object class. You might see scenarios that are unique to your organization, so a discovery process is key to preparation. ++### Users ++Users enter the soft-delete state anytime the user object is deleted by using the Azure portal, Microsoft Graph, or PowerShell. ++The most frequent scenarios for user deletion are: ++* An administrator intentionally deletes a user in the Azure portal in response to a request or as part of routine user maintenance. +* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might have a script that removes users who haven't signed in for a specified time. +* A user is moved out of scope for synchronization with Azure AD Connect. +* A user is removed from an HR system and is deprovisioned via an automated workflow. ++### Microsoft 365 Groups ++The most frequent scenarios for Microsoft 365 Groups being deleted are: ++* An administrator intentionally deletes the group, for example, in response to a support request. +* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might have a script that deletes groups that haven't been accessed or attested to by the group owner for a specified time. +* Unintentional deletion of a group owned by non-admins. ++### Application objects and service principals ++The most frequent scenarios for application deletion are: ++* An administrator intentionally deletes the application, for example, in response to a support request. +* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might want a process for deleting abandoned applications that are no longer used or managed. In general, create an offboarding process for applications rather than scripting to avoid unintentional deletions. ++When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](../develop/app-objects-and-service-principals.md). ++### Administrative units ++The most common scenario for deletions is when administrative units (AU) are deleted by accident, although still needed. ++## Recover from soft deletion ++You can restore soft-deleted items in the administrative portal, or by using Microsoft Graph. Not all object classes can manage soft-delete capabilities in the portal, some are only listed, viewed, hard deleted, or restored using the deletedItems Microsoft Graph API. ++### Properties maintained with soft delete ++|Object type|Important properties maintained| +||| +|Users (including external users)|All properties maintained, including ObjectID, group memberships, roles, licenses, and application assignments| +|Microsoft 365 Groups|All properties maintained, including ObjectID, group memberships, licenses, and application assignments| +|Application registration | All properties maintained. 
See more information after this table.| +|Service principal|All properties maintained| +|Administrative unit (AU)|All properties maintained| ++### Users ++You can see soft-deleted users in the Azure portal on the **Users | Deleted users** page. ++![Screenshot that shows restoring users in the Azure portal.](media/recoverability/deletion-restore-user.png) ++For more information on how to restore users, see the following documentation: ++* To restore from the Azure portal, see [Restore or permanently remove recently deleted user](../fundamentals/users-restore.md). +* To restore by using Microsoft Graph, see [Restore deleted item ΓÇô Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http). ++### Groups ++You can see soft-deleted Microsoft 365 Groups in the Azure portal on the **Groups | Deleted groups** page. ++![Screenshot that shows restoring groups in the Azure portal.](media/recoverability/deletion-restore-groups.png) ++For more information on how to restore soft-deleted Microsoft 365 Groups, see the following documentation: ++* To restore from the Azure portal, see [Restore a deleted Microsoft 365 Group](../enterprise-users/groups-restore-deleted.md). +* To restore by using Microsoft Graph, see [Restore deleted item ΓÇô Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http). ++### Applications and service principals ++Applications have two objects: the application registration and the service principal. For more information on the differences between the registration and the service principal, see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md). ++To restore an application from the Azure portal, select **App registrations** > **Deleted applications**. Select the application registration to restore, and then select **Restore app registration**. ++[![Screenshot that shows the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox) ++Currently, service principals can be listed, viewed, hard deleted, or restored via the deletedItems Microsoft Graph API. To restore applications using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http). ++### Administrative units ++AUs can be listed, viewed, or restored via the deletedItems Microsoft Graph API. To restore AUs using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http). Once an AU is deleted it remains in a soft deleted state and can be restored for 30 days, but cannot be hard deleted during that time. Soft deleted AUs are hard deleted automatically after 30 days. ++## Hard deletions ++A hard deletion is the permanent removal of an object from your Azure AD tenant. Objects that don't support soft delete are removed in this way. Similarly, soft-deleted objects are hard deleted after a deletion time of 30 days. The only object types that support a soft delete are: ++* Users +* Microsoft 365 Groups +* Application registration +* Service principal +* Administrative unit ++> [!IMPORTANT] +> All other item types are hard deleted. When an item is hard deleted, it can't be restored. It must be re-created. Neither administrators nor Microsoft can restore hard-deleted items. Prepare for this situation by ensuring that you have processes and documentation to minimize potential disruption from a hard delete. 
+> +> For information on how to prepare for and document current states, see [Recoverability best practices](recoverability-overview.md). ++### When hard deletes usually occur ++Hard deletes might occur in the following circumstances. ++Moving from soft to hard delete: ++* A soft-deleted object wasn't restored within 30 days. +* An administrator intentionally deletes an object in the soft delete state. ++Directly hard deleted: ++* The object type that was deleted doesn't support soft delete. +* An administrator chooses to permanently delete an item by using the portal, which typically occurs in response to a request. +* An automation script triggers the deletion of the object by using Microsoft Graph or PowerShell. Use of an automation script to clean up stale objects isn't uncommon. A robust off-boarding process for objects in your tenant helps you to avoid mistakes that might result in mass deletion of critical objects. ++## Recover from hard deletion ++Hard-deleted items must be re-created and reconfigured. It's best to avoid unwanted hard deletions. ++### Review soft-deleted objects ++Ensure you have a process to frequently review items in the soft-delete state and restore them if appropriate. To do so, you should: ++* Frequently [list deleted items](/graph/api/directory-deleteditems-list?tabs=http). +* Ensure that you have specific criteria for what should be restored. +* Ensure that you have specific roles or users assigned to evaluate and restore items as appropriate. +* Develop and test a continuity management plan. For more information, see [Considerations for your Enterprise Business Continuity Management Plan](/compliance/assurance/assurance-developing-your-ebcm-plan). ++For more information on how to avoid unwanted deletions, see the following articles in [Recoverability best practices](recoverability-overview.md): ++* Business continuity and disaster planning +* Document known good states +* Monitoring and data retention |
active-directory | Recover From Misconfigurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recover-from-misconfigurations.md | + + Title: Recover from misconfigurations in Azure Active Directory +description: Learn how to recover from misconfigurations. +++++++ Last updated : 08/26/2022+++++++# Recover from misconfiguration ++Configuration settings in Azure Active Directory (Azure AD) can affect any resource in the Azure AD tenant through targeted or tenant-wide management actions. ++## What is configuration? ++Configurations are any changes in Azure AD that alter the behavior or capabilities of an Azure AD service or feature. For example, when you configure a Conditional Access policy, you alter who can access the targeted applications and under what circumstances. ++You need to understand the configuration items that are important to your organization. The following configurations have a high impact on your security posture. ++### Tenant-wide configurations ++* **External identities**: Global administrators for the tenant identify and control the external identities that can be provisioned in the tenant. They determine: ++ * Whether to allow external identities in the tenant. + * From which domains external identities can be added. + * Whether users can invite users from other tenants. ++* **Named locations**: Global administrators can create named locations, which can then be used to: ++ * Block sign-ins from specific locations. + * Trigger Conditional Access policies like multifactor authentication. ++* **Allowed authentication methods**: Global administrators set the authentication methods allowed for the tenant. +* **Self-service options**: Global administrators set self-service options like self-service password reset and create Office 365 groups at the tenant level. ++The implementation of some tenant-wide configurations can be scoped, provided they aren't overridden by global administration policies. For example: ++* If the tenant is configured to allow external identities, a resource administrator can still exclude those identities from accessing a resource. +* If the tenant is configured to allow personal device registration, a resource administrator can exclude those devices from accessing specific resources. +* If named locations are configured, a resource administrator can configure policies that either allow or exclude access from those locations. ++### Conditional Access configurations ++Conditional Access policies are access control configurations that bring together signals to make decisions and enforce organizational policies. ++![Screenshot that shows user, location, device, application, and risk signals coming together in Conditional Access policies.](media\recoverability\miscofigurations-conditional-accss-signals.png) ++To learn more about Conditional Access policies, see [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md). ++> [!NOTE] +> While configuration alters the behavior or capabilities of an object or policy, not all changes to an object are configuration. You can change the data or attributes associated with an item, like changing a user's address, without affecting the capabilities of that user object. ++## What is misconfiguration? ++Misconfiguration is a configuration of a resource or policy that diverges from your organizational policies or plans and causes unintended or unwanted consequences. 
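Because misconfiguration is defined relative to your intended state, it helps to capture that intended state in a form you can compare against later. As a small, hedged example, the tenant-wide authorization policy, which governs settings such as who can invite guests and the default user role permissions, can be exported with Microsoft Graph; the file naming below is an assumption.

```powershell
# Illustrative snapshot: capture the tenant-wide authorization policy for later comparison.
Connect-MgGraph -Scopes "Policy.Read.All"

$authzPolicy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

# Store a dated copy alongside your documented intended configuration.
$authzPolicy | ConvertTo-Json -Depth 10 |
    Out-File -FilePath ("authorizationPolicy-{0:yyyy-MM-dd}.json" -f (Get-Date))
```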
++A misconfiguration of tenant-wide settings or Conditional Access policies can seriously affect your security and the public image of your organization by: ++* Changing how administrators, tenant users, and external users interact with resources in your tenant: ++ * Unnecessarily limiting access to resources. + * Loosening access controls on sensitive resources. ++* Changing the ability of your users to interact with other tenants and external users to interact with your tenant. +* Causing denial of service, for example, by not allowing customers to access their accounts. +* Breaking dependencies among data, systems, and applications resulting in business process failures. ++### When does misconfiguration occur? ++Misconfiguration is most likely to occur when: ++* A mistake is made during ad-hoc changes. +* A mistake is made as a result of troubleshooting exercises. +* An action was carried out with malicious intent by a bad actor. ++## Prevent misconfiguration ++It's critical that alterations to the intended configuration of an Azure AD tenant are subject to robust change management processes, including: ++* Documenting the change, including prior state and intended post-change state. +* Using Privileged Identity Management (PIM) to ensure that administrators with intent to change must deliberately escalate their privileges to do so. To learn more about PIM, see [What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md). +* Using a strong approval workflow for changes, for example, requiring [approval of PIM escalation of privileges](../privileged-identity-management/azure-ad-pim-approval-workflow.md). ++## Monitor for configuration changes ++While you want to prevent misconfiguration, you can't set the bar for changes so high that it affects the ability of administrators to perform their work efficiently. ++Closely monitor for configuration changes by watching for the following operations in your [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md): ++* Add +* Create +* Update +* Set +* Delete ++The following table includes informative entries in the Audit log you can look for. ++### Conditional Access and authentication method configuration changes ++Conditional Access policies are created on the **Conditional Access** page in the Azure portal. Changes to policies are made on the **Conditional Access policy details** page for the policy. ++| Service filter| Activities| Potential impacts | +| - | - | - | +| Conditional Access| Add, update, or delete Conditional Access policy| User access is granted or blocked when it shouldnΓÇÖt be. | +| Conditional Access| Add, update, or delete named location| Network locations consumed by the Conditional Access policy aren't configured as intended, which creates gaps in Conditional Access policy conditions. | +| Authentication method| Update authentication methods policy| Users can use weaker authentication methods or are blocked from a method they should use. | ++### User and password reset configuration changes ++User settings changes are made on the Azure portal **User settings** page. Password reset changes are made on the **Password reset** page. Changes made on these pages are captured in the Audit log as detailed in the following table. ++| Service filter| Activities| Potential impacts | +| - | - | - | +| Core directory| Update company settings| Users might or might not be able to register applications, contrary to intent. 
| +| Core directory| Set company information| Users might or might not be able to access the Azure AD administration portal, contrary to intent. <br>Sign-in pages don't represent the company brand, with potential damage to reputation. | +| Core directory| **Activity**: Updated service principal<br>**Target**: 0365 LinkedIn connection| Users might or might not be able to connect their Azure AD account with LinkedIn, contrary to intent. | +| Self-service group management| Update MyApps feature value| Users might or might not be able to use user features, contrary to intent. | +| Self-service group management| Update ConvergedUXV2 feature value| Users might or might not be able to use user features, contrary to intent. | +| Self-service group management| Update MyStaff feature value| Users might or might not be able to use user features, contrary to intent. | +| Core directory| **Activity**: Update service principal<br>**Target**: Microsoft password reset service| Users are able or unable to reset their password, contrary to intent. <br>Users are required or not required to register for self-service password reset, contrary to intent.<br> Users can reset their password by using methods that are unapproved, for example, by using security questions. | ++### External identities configuration changes ++You can make changes to these settings on the **External identities** or **External collaboration** settings pages in the Azure portal. ++| Service filter| Activities| Potential impacts | +| - | - | - | +| Core directory| Add, update, or delete a partner to cross-tenant access setting| Users have outbound access to tenants that should be blocked.<br>Users from external tenants who should be blocked have inbound access. | +| B2C| Create or delete identity provider| Identity providers for users who should be able to collaborate are missing, blocking access for those users. | +| Core directory| Set directory feature on tenant| External users have greater or less visibility of directory objects than intended.<br>External users might or might not invite other external users to your tenant, contrary to intent. | +| Core directory| Set federation settings on domain| External user invitations might or might not be sent to users in other tenants, contrary to intent. | +| AuthorizationPolicy| Update authorization policy| External user invitations might or might not be sent to users in other tenants, contrary to intent. | +| Core directory| Update policy| External user invitations might or might not be sent to users in other tenants, contrary to intent. | ++### Custom role and mobility definition configuration changes ++| Service filter| Activities/portal| Potential impacts | +| - |- | -| +| Core directory| Add role definition| Custom role scope is narrower or broader than intended. | +| PIM| Update role setting| Custom role scope is narrower or broader than intended. | +| Core directory| Update role definition| Custom role scope is narrower or broader than intended. | +| Core directory| Delete role definition| Custom roles are missing. | +| Core directory| Add delegated permission grant| Mobile device management or mobile application management configuration is missing or misconfigured, which leads to the failure of device or application management. | ++### Audit log detail view ++Selecting some audit entries in the Audit log will provide you with details on the old and new configuration values. 
For example, for Conditional Access policy configuration changes, you can see the information in the following screenshot. ++![Screenshot that shows Audit log details for a change to a Conditional Access policy.](media/recoverability/misconfiguration-audit-log-details.png) ++## Use workbooks to track changes ++Azure Monitor workbooks can help you monitor configuration changes. ++The [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that might indicate a compromise, including: ++* Modified application or service principal credentials or authentication methods. +* New permissions granted to service principals. +* Directory role and group membership updates for service principals. +* Modified federation settings. ++The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing and which applications your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants. ++## Next steps ++- For foundational information on recoverability, see [Recoverability best practices](recoverability-overview.md). +- For information on recovering from deletions, see [Recover from deletions](recover-from-deletions.md). |
active-directory | Recoverability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recoverability-overview.md | + + Title: Recoverability best practices in Azure Active Directory +description: Learn the best practices for increasing recoverability. +++++++ Last updated : 08/26/2022+++++++# Recoverability best practices ++Unintended deletions and misconfigurations will happen to your tenant. To minimize the impact of these unintended events, you must prepare for their occurrence. ++Recoverability is the preparatory processes and functionality that enable you to return your services to a prior functioning state after an unintended change. Unintended changes include the soft or hard deletion or misconfiguration of applications, groups, users, policies, and other objects in your Azure Active Directory (Azure AD) tenant. ++Recoverability helps your organization be more resilient. Resilience, while related, is different. Resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. For more information about how to make your systems more resilient, see [Building resilience into identity and access management with Azure Active Directory](resilience-overview.md). ++This article describes the best practices in preparing for deletions and misconfigurations to minimize the unintended consequences to your organization's business. ++## Deletions and misconfigurations ++Deletions and misconfigurations have different impacts on your tenant. ++### Deletions ++The impact of deletions depends on the object type. ++Users, Microsoft 365 Groups, and applications can be soft deleted. Soft-deleted items are sent to the Azure AD recycle bin. While in the recycle bin, items aren't available for use. However, they retain all their properties and can be restored via a Microsoft Graph API call or in the Azure portal. Items in the soft-delete state that aren't restored within 30 days are permanently, or hard, deleted. ++![Diagram that shows that users, Microsoft 365 Groups, and applications are soft deleted and then hard deleted after 30 days.](media/recoverability/overview-deletes.png) ++> [!IMPORTANT] +> All other object types are hard deleted immediately when they're selected for deletion. When an object is hard deleted, it can't be recovered. It must be re-created and reconfigured. +> +>For more information on deletions and how to recover from them, see [Recover from deletions](recover-from-deletions.md). ++### Misconfigurations ++Misconfigurations are configurations of a resource or policy that diverge from your organizational policies or plans and cause unintended or unwanted consequences. Misconfiguration of tenant-wide settings or Conditional Access policies can seriously affect your security and the public image of your organization. Misconfigurations can: ++* Change how administrators, tenant users, and external users interact with resources in your tenant. +* Change the ability of your users to interact with other tenants and external users to interact with your tenant. +* Cause denial of service. +* Break dependencies among data, systems, and applications. ++For more information on misconfigurations and how to recover from them, see [Recover from misconfigurations](recover-from-misconfigurations.md). ++## Shared responsibility ++Recoverability is a shared responsibility between Microsoft as your cloud service provider and your organization. 
++![Diagram that shows shared responsibilities between Microsoft and customers for planning and recovery.](media/recoverability/overview-shared-responsiblility.png) ++You can use the tools and services that Microsoft provides to prepare for deletions and misconfigurations. ++## Business continuity and disaster planning ++Restoring a hard-deleted or misconfigured item is a resource-intensive process. You can minimize the resources needed by planning ahead. Consider having a specific team of admins in charge of restorations. ++### Test your restoration process ++Rehearse your restoration process for different object types and the communication that will go out as a result. Be sure to rehearse with test objects, ideally in a test tenant. ++Testing your plan can help you determine the: ++- Validity and completeness of your object state documentation. +- Typical time to resolution. +- Appropriate communications and their audiences. +- Expected successes and potential challenges. ++### Create the communication process ++Create a process of predefined communications to make others aware of the issue and timelines for restoration. Include the following points in your restoration communication plan: ++- The types of communications to go out. Consider creating predefined templates. +- Stakeholders to receive communications. Include the following groups, as applicable: ++ - Affected business owners. + - Operational admins who will perform recovery. + - Business and technical approvers. + - Affected users. ++- Define the events that trigger communications, such as: ++ - Initial deletion. + - Impact assessment. + - Time to resolution. + - Restoration. ++## Document known good states ++Document the state of your tenant and its objects regularly. Then if a hard delete or misconfiguration occurs, you have a roadmap to recovery. The following tools can help you document your current state: ++- [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations. +- [Azure AD Exporter](https://github.com/microsoft/azureadexporter) is a tool you can use to export your configuration settings. +- [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) is a module of the PowerShell Desired State Configuration framework. You can use it to export configurations for reference and application of the prior state of many settings. +- [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code. ++### Commonly used Microsoft Graph APIs ++You can use Microsoft Graph APIs to export the current state of many Azure AD configurations. The APIs cover most scenarios where reference material about the prior state, or the ability to apply that state from an exported copy, could become vital to keeping your business running. ++Microsoft Graph APIs are highly customizable based on your organizational needs. To implement a solution for backups or reference material requires developers to engineer code to query for, store, and display the data. Many implementations use online code repositories as part of this functionality. 
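To make this concrete, the following is a minimal sketch of the kind of export code such a solution might start from, using two of the APIs listed in the next table (directory roles and Conditional Access policies). The local folder path is an assumption, and this is a starting point rather than a complete backup solution; store the output with the same access restrictions described for the other documentation outputs.

```powershell
# Illustrative export: snapshot directory roles and Conditional Access policies to JSON files.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory","Policy.Read.All"

$snapshotDate = Get-Date -Format "yyyy-MM-dd"
$exportRoot   = ".\tenant-state\$snapshotDate"          # assumed local path; use a secured location
New-Item -ItemType Directory -Path $exportRoot -Force | Out-Null

$exports = @{
    "directoryRoles.json"            = "https://graph.microsoft.com/v1.0/directoryRoles"
    "conditionalAccessPolicies.json" = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
}

foreach ($file in $exports.Keys) {
    $response = Invoke-MgGraphRequest -Method GET -Uri $exports[$file]
    # Both endpoints return collections; persist the 'value' array as the snapshot.
    $response.value | ConvertTo-Json -Depth 10 | Out-File -FilePath (Join-Path $exportRoot $file)
}
```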
++### Useful APIs for recovery ++| Resource types| Reference links | +| - | - | +| Users, groups, and other directory objects| [directoryObject API](/graph/api/resources/directoryObject) | +| Directory roles| [directoryRole API](/graph/api/resources/directoryrole) | +| Conditional Access policies| [Conditional Access policy API](/graph/api/resources/conditionalaccesspolicy) | +| Devices| [devices API](/graph/api/resources/device) | +| Domains| [domains API](/graph/api/domain-list?tabs=http) | +| Administrative units| [administrative unit API)](/graph/api/resources/administrativeunit) | +| Deleted items*| [deletedItems API](/graph/api/resources/directory) | ++*Securely store these configuration exports with access provided to a limited number of admins. ++The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provide most of the documentation you need: ++- Verify that you've implemented the desired configuration. +- Use the exporter to capture current configurations. +- Review the export, understand the settings for your tenant that aren't exported, and manually document them. +- Store the output in a secure location with limited access. ++> [!NOTE] +> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Azure AD Exporter, or with the Microsoft Graph API. +The [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module uses Microsoft Graph and PowerShell to retrieve the state of many of the configurations in Azure AD. This information can be used as reference information or, by using PowerShell Desired State Configuration scripting, to reapply a known good state. ++ Use [Conditional Access Graph APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) to manage policies like code. Automate approvals to promote policies from preproduction environments, backup and restore, monitor change, and plan ahead for emergencies. ++### Map the dependencies among objects ++The deletion of some objects can cause a ripple effect because of dependencies. For example, deletion of a security group used for application assignment would result in users who were members of that group being unable to access the applications to which the group was assigned. ++#### Common dependencies ++| Object type| Potential dependencies | +| - | - | +| Application object| Service principal (enterprise application). <br>Groups assigned to the application. <br>Conditional Access policies affecting the application. | +| Service principals| Application object. | +| Conditional Access policies| Users assigned to the policy.<br>Groups assigned to the policy.<br>Service principal (enterprise application) targeted by the policy. | +| Groups other than Microsoft 365 Groups| Users assigned to the group.<br>Conditional Access policies to which the group is assigned.<br>Applications to which the group is assigned access. | ++## Monitoring and data retention ++The [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. 
++For more information on finding deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http). ++### Audit logs ++The Audit log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state, either from active to soft deleted or active to hard deleted. ++A Delete event for applications, service principals, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. ++| Object type | Activity in log| Result | +| - | - | - | +| Application| Delete application and service principal| Soft deleted | +| Application| Hard delete application | Hard deleted | +| Service principal| Delete service principal| Soft deleted | +| Service principal| Hard delete service principal| Hard deleted | +| User| Delete user| Soft deleted | +| User| Hard delete user| Hard deleted | +| Microsoft 365 Groups| Delete group| Soft deleted | +| Microsoft 365 Groups| Hard delete group| Hard deleted | +| All other objects| Delete "objectType"| Hard deleted | ++> [!NOTE] +> The Audit log doesn't distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft deleted. If you see a Delete group entry, it might be the soft delete of a Microsoft 365 Group or the hard delete of another type of group. Your documentation of your known good state should include the group type for each group in your organization. ++For information on monitoring configuration changes, see [Recover from misconfigurations](recover-from-misconfigurations.md). ++### Use workbooks to track configuration changes ++Azure Monitor workbooks can help you monitor configuration changes. ++The [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that might indicate a compromise, including: ++- Modified application or service principal credentials or authentication methods. +- New permissions granted to service principals. +- Directory role and group membership updates for service principals. +- Modified federation settings. ++The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing and which applications in your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants. ++## Operational security ++Preventing unwanted changes is far less difficult than needing to re-create and reconfigure objects. Include the following tasks in your change management processes to minimize accidents: ++- Use a least privilege model. Ensure that each member of your team has the least privileges necessary to complete their usual tasks. Require a process to escalate privileges for more unusual tasks. +- Administrative control of an object enables configuration and deletion. Use read-only admin roles, for example, the Global Reader role, for tasks that don't require operations to create, update, or delete (CRUD). When CRUD operations are required, use object-specific roles when possible. For example, User administrators can delete only users, and Application administrators can delete only applications. Use these more limited roles whenever possible, instead of a Global administrator role, which can delete anything, including the tenant.
+- [Use Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md). PIM enables just-in-time escalation of privileges to perform tasks like hard deletion. You can configure PIM to require notifications or approvals for privilege escalation. ++## Next steps ++- [Recover from deletions](recover-from-deletions.md) +- [Recover from misconfigurations](recover-from-misconfigurations.md) |
active-directory | Resilience App Development Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-app-development-overview.md | + + Title: Increase the resilience of authentication and authorization applications you develop +description: Resilience guidance for application development using Azure Active Directory and the Microsoft identity platform ++++++++ Last updated : 03/02/2023+++# Increase the resilience of authentication and authorization applications you develop ++The Microsoft identity platform helps you build applications your users and customers can sign in to using their Microsoft identities or social accounts. The Microsoft identity platform uses token-based authentication and authorization. Client applications acquire tokens from an identity provider (IdP) to authenticate users and authorize applications to call protected APIs. A service validates tokens. ++Learn more: ++* [What is the Microsoft identity platform?](../develop/v2-overview.md) +* [Security tokens](../develop/security-tokens.md) ++A token is valid for a length of time, and then the app must acquire a new one. Rarely, a call to retrieve a token fails due to network or infrastructure issues or an authentication service outage. ++The following articles provide guidance for client and service applications that act on behalf of a signed-in user, and for daemon applications. They contain best practices for using tokens and calling resources. ++- [Increase the resilience of authentication and authorization in client applications you develop](resilience-client-app.md) +- [Increase the resilience of authentication and authorization in daemon applications you develop](resilience-daemon-app.md) +- [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md) +- [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md) +- [Build services that are resilient to Azure AD's OpenID Connect metadata refresh](../develop/howto-build-services-resilient-to-metadata-refresh.md) |
active-directory | Resilience B2b Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-b2b-authentication.md | + + Title: Build resilience in external user authentication with Azure Active Directory +description: A guide for IT admins and architects to building resilient authentication for external users ++++++ Last updated : 11/16/2022+++++# Build resilience in external user authentication ++[Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B) is a feature of [External Identities](../external-identities/external-collaboration-settings-configure.md) that enables collaboration with other organizations and individuals. It enables the secure onboarding of guest users into your Azure AD tenant without having to manage their credentials. External users bring their identity and credentials with them from an external identity provider (IdP) so they don't have to remember a new credential. ++## Ways to authenticate external users ++You can choose the methods that external users use to authenticate to your directory. You can use Microsoft IdPs or other IdPs. ++With every external IdP, you take a dependency on the availability of that IdP. With some methods of connecting to IdPs, there are things you can do to increase your resilience. ++> [!NOTE] +> Azure AD B2B has the built-in ability to authenticate any user from any [Azure Active Directory](../index.yml) tenant or with a personal [Microsoft Account](https://account.microsoft.com/account). You do not have to do any configuration with these built-in options. ++### Considerations for resilience with other IdPs ++When you use external IdPs for guest user authentication, there are configurations that you must maintain to prevent disruptions. ++| Authentication method| Resilience considerations | +| - | - | +| Federation with social IdPs like [Facebook](../external-identities/facebook-federation.md) or [Google](../external-identities/google-federation.md).| You must maintain your account with the IdP and configure your Client ID and Client Secret. | +| [SAML/WS-Fed identity provider (IdP) federation](../external-identities/direct-federation.md)| You must collaborate with the IdP owner for access to their endpoints upon which you're dependent. You must maintain the metadata that contains the certificates and endpoints. | +| [Email one-time passcode](../external-identities/one-time-passcode.md)| You're dependent on Microsoft's email system, the user's email system, and the user's email client. | ++## Self-service sign-up ++As an alternative to sending invitations or links, you can enable [Self-service sign-up](../external-identities/self-service-sign-up-overview.md). This method allows external users to request access to an application. You must create an [API connector](../external-identities/self-service-sign-up-add-api-connector.md) and associate it with a user flow. You associate user flows that define the user experience with one or more applications. ++It's possible to use [API connectors](../external-identities/api-connectors-overview.md) to integrate your self-service sign-up user flow with external systems' APIs. This API integration can be used for [custom approval workflows](../external-identities/self-service-sign-up-add-approvals.md), [performing identity verification](../external-identities/code-samples-self-service-sign-up.md), and other tasks such as overwriting user attributes.
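+
+As an illustration only (not code from the linked samples), the following minimal ASP.NET Core endpoint sketches the kind of external API a sign-up user flow can call. The route name is made up, and the response fields follow the documented example responses; verify the exact contract against the API connector article referenced in the dependencies below, and note how the handler fails open so an internal error doesn't block every sign-up.
+
+```csharp
+// Minimal sketch of a self-service sign-up API connector endpoint (illustrative only).
+// The route, validation rule, and response shapes are assumptions to be checked against
+// the documented API connector contract; add authentication for the connector's caller.
+using System;
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Http;
+
+var app = WebApplication.CreateBuilder(args).Build();
+
+app.MapPost("/signup/check", async (HttpRequest request) =>
+{
+    try
+    {
+        // Read the attributes sent by the user flow (email, displayName, and so on).
+        var attributes = await request.ReadFromJsonAsync<Dictionary<string, object>>();
+
+        // Example check: require a display name before continuing.
+        if (attributes is null || !attributes.ContainsKey("displayName"))
+        {
+            return Results.BadRequest(new
+            {
+                version = "1.0.0",
+                status = 400,
+                action = "ValidationError",
+                userMessage = "Please provide a display name."
+            });
+        }
+
+        // Allow the sign-up to continue.
+        return Results.Ok(new { version = "1.0.0", action = "Continue" });
+    }
+    catch (Exception)
+    {
+        // Fail open so an internal error doesn't block every sign-up.
+        return Results.Ok(new { version = "1.0.0", action = "Continue" });
+    }
+});
+
+app.Run();
+```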
++Using APIs requires that you manage the following dependencies. ++* **API Connector Authentication**: Setting up a connector requires an endpoint URL, a username, and a password. Set up a process by which these credentials are maintained, and work with the API owner to ensure you know any expiration schedule. +* **API Connector Response**: Design API Connectors in the sign-up flow to fail gracefully if the API isn't available. Examine these [example API responses](../external-identities/self-service-sign-up-add-api-connector.md) and the [best practices for troubleshooting](../external-identities/self-service-sign-up-add-api-connector.md), and provide them to your API developers. Work with the API development team to test all possible response scenarios, including continuation, validation-error, and blocking responses. ++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
active-directory | Resilience B2c Developer Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-b2c-developer-best-practices.md | + + Title: Resilience through developer best practices using Azure AD B2C +description: Resilience through developer best practices in Customer Identity and Access Management using Azure AD B2C ++++++++ Last updated : 12/01/2022+++++# Resilience through developer best practices ++In this article, we share some learnings that are based on our experience from working with large customers. You may consider these recommendations in the design and implementation of your services. ++![Image shows developer experience components](media/resilience-b2c-developer-best-practices/developer-best-practices-architecture.png) ++## Use the Microsoft Authentication Library (MSAL) ++The [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) and the [Microsoft identity web authentication library for ASP.NET](../develop/reference-v2-libraries.md) simplify acquiring, managing, caching, and refreshing the tokens an application requires. These libraries are optimized specifically to support the Microsoft identity platform and include features that improve application resiliency. ++Developers should adopt the latest releases of MSAL and stay up to date. See [how to increase resilience of authentication and authorization](resilience-app-development-overview.md) in your applications. Where possible, avoid implementing your own authentication stack and use well-established libraries. ++## Optimize directory reads and writes ++The Microsoft Azure AD B2C directory service supports billions of authentications a day. It's designed for a high rate of reads per second. Optimize your writes to minimize dependencies and increase resilience. ++### How to optimize directory reads and writes ++- **Avoid write functions to the directory on sign-in**: Never execute a write on sign-in without a precondition (if clause) in your custom policies. One use case that requires a write on a sign-in is [just-in-time migration of user passwords](https://github.com/azure-ad-b2c/user-migration/tree/master/seamless-account-migration). Avoid any scenario that requires a write on every sign-in. [Preconditions](../../active-directory-b2c/userjourneys.md) in a user journey will look like this: ++ ```xml + <Precondition Type="ClaimEquals" ExecuteActionsIf="true"> + <Value>requiresMigration</Value> + ... + </Precondition> + ``` ++- **Understand throttling**: The directory implements both application-level and tenant-level throttling rules. There are further rate limits for Read/GET, Write/POST, Update/PUT, and Delete/DELETE operations, and each operation has different limits. ++ - A write at the time of sign-in will fall under a POST for new users or PUT for existing users. + - A custom policy that creates or updates a user on every sign-in can potentially hit an application-level PUT or POST rate limit. The same limits apply when updating directory objects via Azure AD or Microsoft Graph. Similarly, examine the reads to keep the number of reads on every sign-in to the minimum. + - Estimate peak load to predict the rate of directory writes and avoid throttling. Peak traffic estimates should include estimates for actions such as sign-up, sign-in, and Multi-factor authentication (MFA). Be sure to test both the Azure AD B2C system and your application for peak traffic.
It's possible that Azure AD B2C can handle the load without throttling even when your downstream applications or services can't. + - Understand and plan your migration timeline. When planning to migrate users to Azure AD B2C using Microsoft Graph, consider the application and tenant limits to calculate the time needed to complete the migration of users. If you split your user creation job or script using two applications, you can use the per-application limit. It would still need to remain below the per-tenant threshold. + - Understand the effects of your migration job on other applications. Consider the live traffic served by other relying applications to make sure you don't cause throttling at the tenant level and resource starvation for your live application. For more information, see the [Microsoft Graph throttling guidance](/graph/throttling). + - Use a [load test sample](https://github.com/azure-ad-b2c/load-tests) to simulate sign-up and sign-in. + - Learn more about [Azure Active Directory B2C service limits and restrictions](../../active-directory-b2c/service-limits.md?pivots=b2c-custom-policy). + +## Extend token lifetimes ++In the unlikely event that the Azure AD B2C authentication service is unable to complete new sign-ups and sign-ins, you can still provide mitigation for users who are signed in. With [configuration](../../active-directory-b2c/configure-tokens.md), you can allow users who are already signed in to continue using the application without any perceived disruption until the user signs out from the application or the [session](../../active-directory-b2c/session-behavior.md) times out due to inactivity. ++Your business requirements and desired end-user experience will dictate your frequency of token refresh for both web and single-page applications (SPAs). ++### How to extend token lifetimes ++- **Web applications**: For web applications where the authentication token is validated at the beginning of sign-in, the application depends on the session cookie to continue to extend the session validity. Enable users to remain signed in by implementing rolling session times that will continue to renew sessions based on user activity. If there's a long-term token issuance outage, these session times can be further increased as a one-time configuration on the application. Keep the lifetime of the session to the maximum allowed. +- **SPAs**: A SPA may depend on access tokens to make calls to the APIs. A SPA traditionally uses the implicit flow that doesn't result in a refresh token. The SPA can use a hidden `iframe` to perform new token requests against the authorization endpoint if the browser still has an active session with Azure AD B2C. For SPAs, there are a few options available to allow the user to continue to use the application. + - Extend the access token's validity duration to meet your business requirements. + - Build your application to use an API gateway as the authentication proxy. In this configuration, the SPA loads without any authentication and the API calls are made to the API gateway. The API gateway sends the user through a sign-in process using an [authorization code grant](https://oauth.net/2/grant-types/authorization-code/) based on a policy and authenticates the user. Then the authentication session between the API gateway and the client is maintained using an authentication cookie.
The API gateway services the APIs using the token that is obtained by the API gateway (or some other direct authentication method such as certificates, client credentials, or API keys). + - [Migrate your SPA from implicit grant](https://developer.microsoft.com/identity/blogs/msal-js-2-0-supports-authorization-code-flow-is-now-generally-available/) to [authorization code grant flow](../../active-directory-b2c/implicit-flow-single-page-application.md) with Proof Key for Code Exchange (PKCE) and Cross-origin Resource Sharing (CORS) support. Migrate your application from MSAL.js 1.x to MSAL.js 2.x to realize the resiliency of web applications. + - For mobile applications, it's recommended to extend both the refresh and access token lifetimes. +- **Backend or microservice applications**: Because backend (daemon) applications are non-interactive and aren't in a user context, the prospect of token theft is greatly diminished. The recommendation is to strike a balance between security and lifetime and to set a long token lifetime. ++## Configure Single sign-on ++With [Single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md), users sign in once with a single account and get access to multiple applications. The application can be a web, mobile, or single-page application (SPA), regardless of platform or domain name. When the user initially signs in to an application, Azure AD B2C persists a [cookie-based session](../../active-directory-b2c/session-behavior.md). ++Upon subsequent authentication requests, Azure AD B2C reads and validates the cookie-based session and issues an access token without prompting the user to sign in again. If SSO is configured with a limited scope at a policy or an application, later access to other policies and applications will require fresh authentication. ++### How to configure SSO ++[Configure SSO](../hybrid/how-to-connect-sso-quick-start.md) to be tenant-wide (default) to allow multiple applications and user flows in your tenant to share the same user session. Tenant-wide configuration provides the most resiliency because it minimizes the need for fresh authentication. ++## Safe deployment practices ++The most common disrupters of service are code and configuration changes. Adoption of Continuous Integration and Continuous Delivery (CICD) processes and tools helps with rapid deployment at a large scale and reduces human errors during testing and deployment into production. Adopt CICD for error reduction, efficiency, and consistency. [Azure Pipelines](/azure/devops/pipelines/apps/cd/azure/cicd-data-overview) is an example of CICD. ++## Protect from bots ++Protect your applications against known vulnerabilities such as Distributed Denial of Service (DDoS) attacks, SQL injections, cross-site scripting, remote code execution, and many others as documented in [OWASP Top 10](https://owasp.org/www-project-top-ten/). Deployment of a Web Application Firewall (WAF) can defend against common exploits and vulnerabilities. ++- Use Azure [WAF](../../web-application-firewall/overview.md), which provides centralized protection against attacks. +- Use WAF with Azure AD [Identity Protection and Conditional Access to provide multi-layer protection](../../active-directory-b2c/conditional-access-identity-protection-overview.md) when using Azure AD B2C. +- Build resistance to bot-driven [sign-ups by integrating with a CAPTCHA system](https://github.com/azure-ad-b2c/samples/tree/master/policies/captcha-integration). ++## Secrets rotation ++Azure AD B2C uses secrets for applications, APIs, policies, and encryption.
The secrets secure authentication, external interactions, and storage. The National Institute of Standards and Technology (NIST) calls the time span during which a specific key is authorized for use by legitimate entities a cryptoperiod. Choose the right length of [cryptoperiod](https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final) to meet your business needs. Developers need to manually set the expiration and rotate secrets well in advance of their expiration. ++### How to implement secret rotation ++- Use [managed identities](../managed-identities-azure-resources/overview.md) for supported resources to authenticate to any service that supports Azure AD authentication. When you use managed identities, you can manage resources automatically, including rotation of credentials. +- Take an inventory of all the [keys and certificates configured](../../active-directory-b2c/policy-keys-overview.md) in Azure AD B2C. This list is likely to include keys used in custom policies, [APIs](../../active-directory-b2c/secure-rest-api.md), ID token signing, and certificates for SAML. +- Using CICD, rotate secrets that are about to expire within two months from the anticipated peak season. The recommended maximum cryptoperiod of private keys associated with a certificate is one year. +- Proactively monitor and rotate API access credentials such as passwords and certificates. ++## Test REST APIs ++In the context of resiliency, testing of REST APIs needs to include verification of HTTP codes, response payload, headers, and performance. Testing shouldn't include only happy-path tests; it should also check whether the API handles problem scenarios gracefully. ++### How to test APIs ++We recommend that your test plan include [comprehensive API tests](../../active-directory-b2c/best-practices.md#testing). If you're planning for an upcoming surge because of a promotion or holiday traffic, you need to revise your load testing with the new estimates. Conduct load testing of your APIs and Content Delivery Network (CDN) in a developer environment and not in production. ++## Next steps ++- [Resilience resources for Azure AD B2C developers](resilience-b2c.md) + - [Resilient end-user experience](resilient-end-user-experience.md) + - [Resilient interfaces with external processes](resilient-external-processes.md) + - [Resilience through monitoring and analytics](resilience-with-monitoring-alerting.md) +- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md) +- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) |
active-directory | Resilience B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-b2c.md | + + Title: Build resilience in Customer Identity and Access Management using Azure AD B2C +description: Methods to build resilience in Customer Identity and Access Management using Azure AD B2C ++++++++ Last updated : 12/01/2022+++++# Build resilience in your customer identity and access management with Azure Active Directory B2C ++[Azure Active Directory (AD) B2C](../../active-directory-b2c/overview.md) is a Customer Identity and Access Management (CIAM) platform that is designed to help you launch your critical customer-facing applications successfully. We have many built-in features for [resilience](https://azure.microsoft.com/blog/advancing-azure-active-directory-availability/) that are designed to help our service scale to your needs and improve resilience in the face of potential outage situations. In addition, when launching a mission-critical application, it's important to consider various design and configuration elements in your application. Consider how the application is configured within Azure AD B2C to ensure that you get resilient behavior in response to outage or failure scenarios. In this article, we'll discuss some of the best practices to help you increase resilience. ++A resilient service is one that continues to function despite disruptions. You can help improve resilience in your service by: ++- understanding all the components ++- eliminating single points of failure ++- isolating failing components to limit their impact ++- providing redundancy with fast failover mechanisms and recovery paths ++As you develop your application, we recommend considering how to [increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) with the identity components of your solution. This article attempts to address enhancements for resilience specific to Azure AD B2C applications. Our recommendations are grouped by CIAM functions. ++![Image shows CIAM components](media/resilience-b2c/high-level-components.png) ++In the subsequent sections, we'll guide you to build resilience in the following areas: ++- [End-user experience](resilient-end-user-experience.md): Enable a fallback plan for your authentication flow and mitigate the potential impact from a disruption of the Azure AD B2C authentication service. ++- [Interfaces with external processes](resilient-external-processes.md): Build resilience in your applications and interfaces by recovering from errors. ++- [Developer best practices](resilience-b2c-developer-best-practices.md): Avoid fragility because of common custom policy issues and improve error handling in areas like interactions with claims verifiers, third-party applications, and REST APIs. ++- [Monitoring and analytics](resilience-with-monitoring-alerting.md): Assess the health of your service by monitoring key indicators and detect failures and performance disruptions through alerting. ++- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md) ++- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) ++Watch this video to learn how to build resilient and scalable flows using Azure AD B2C. +>[!Video https://www.youtube.com/embed/8f_Ozpw9yTs] |
active-directory | Resilience Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-client-app.md | + + Title: Increase the resilience of authentication and authorization in client applications you develop +description: Learn to increase resiliency of authentication and authorization in client applications using the Microsoft identity platform ++++++++ Last updated : 03/02/2023+++# Increase the resilience of authentication and authorization in client applications you develop ++Learn to build resilience into client applications that use the Microsoft identity platform and Azure Active Directory (Azure AD) to sign in users, and perform actions on behalf of those users. ++## Use the Microsoft Authentication Library (MSAL) ++The Microsoft Authentication Library (MSAL) is part of the Microsoft identity platform. MSAL acquires, manages, caches, and refreshes tokens; it uses best practices for resilience. MSAL helps developers create secure solutions. ++Learn more: ++* [Overview of the Microsoft Authentication Library](../develop/msal-overview.md) +* [What is the Microsoft identity platform?](../develop/v2-overview.md) +* [Microsoft identity platform documentation](../develop/index.yml) ++MSAL caches tokens and uses a silent token acquisition pattern. MSAL serializes the token cache on operating systems that natively provide secure storage like Universal Windows Platform (UWP), iOS, and Android. Customize the serialization behavior when you're using: ++* Microsoft.Identity.Web +* MSAL.NET +* MSAL for Java +* MSAL for Python ++Learn more: ++* [Token cache serialization](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization) +* [Token cache serialization in MSAL.NET](../develop/msal-net-token-cache-serialization.md) +* [Custom token cache serialization in MSAL for Java](../develop/msal-java-token-cache-serialization.md) +* [Custom token cache serialization in MSAL for Python](../develop/msal-python-token-cache-serialization.md) ++ ![Diagram of a device and an application using MSAL to call Microsoft Identity](media/resilience-client-app/resilience-with-microsoft-authentication-library.png) ++When you're using MSAL, token caching, refreshing, and silent acquisition are supported. Use simple patterns to acquire the tokens for authentication. There's support for many languages. Find code samples in [Microsoft identity platform code samples](../develop/sample-v2-code.md). ++## [C#](#tab/csharp) ++```csharp +try +{ + result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync(); +} +catch (MsalUiRequiredException ex) +{ + result = await app.AcquireTokenInteractive(scopes).WithClaims(ex.Claims).ExecuteAsync(); +} +``` ++## [JavaScript](#tab/javascript) ++```javascript +return myMSALObj.acquireTokenSilent(request).catch(error => { + console.warn("silent token acquisition fails. acquiring token using popup"); + if (error instanceof msal.InteractionRequiredAuthError) { + // fallback to interaction when silent call fails + return myMSALObj.acquireTokenPopup(request).then(tokenResponse => { + console.log(tokenResponse); ++ return tokenResponse; + }).catch(error => { + console.error(error); + }); + } else { + console.warn(error); + } +}); +``` ++++MSAL is able to refresh tokens. When the Microsoft identity platform issues a long-lived token, it can send information to the client to refresh the token (refresh\_in). The app runs while the old token is valid, but it takes longer for another token acquisition.
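+
+As a concrete illustration of the serialization customization linked above, here's a minimal MSAL.NET sketch (not production guidance) that persists the user token cache to a local file. The file path is a placeholder; protect the cache at rest, or use the token cache serializers that ship with Microsoft.Identity.Web.
+
+```csharp
+// Minimal sketch: persist the MSAL.NET token cache between app runs (illustration only).
+// The file path is a placeholder; encrypt the cache at rest in a real application.
+using System.IO;
+using Microsoft.Identity.Client;
+
+public static class TokenCachePersistence
+{
+    private static readonly string CacheFilePath = "msal_cache.bin"; // placeholder
+
+    public static void Enable(IPublicClientApplication app)
+    {
+        app.UserTokenCache.SetBeforeAccess(args =>
+        {
+            // Load the serialized cache before MSAL reads it.
+            if (File.Exists(CacheFilePath))
+            {
+                args.TokenCache.DeserializeMsalV3(File.ReadAllBytes(CacheFilePath));
+            }
+        });
+
+        app.UserTokenCache.SetAfterAccess(args =>
+        {
+            // Write the cache back only when MSAL changed it.
+            if (args.HasStateChanged)
+            {
+                File.WriteAllBytes(CacheFilePath, args.TokenCache.SerializeMsalV3());
+            }
+        });
+    }
+}
+```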
++### MSAL releases ++We recommend that developers build a process to use the latest MSAL release because authentication is part of app security. Use this practice for libraries that are under active development to improve app resilience. ++Find the latest version and release notes: ++* [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases) +* [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases) +* [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) +* [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) +* [microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc/releases) +* [microsoft-authentication-library-for-android](https://github.com/AzureAD/microsoft-authentication-library-for-android/releases) +* [microsoft-identity-web](https://github.com/AzureAD/microsoft-identity-web/releases) ++## Resilient patterns for token handling ++If you don't use MSAL, use resilient patterns for token handling. The MSAL library implements best practices. ++Generally, applications using modern authentication call an endpoint to retrieve tokens that authenticate the user, or authorize the application to call protected APIs. MSAL handles authentication and implements patterns to improve resilience. If you don't use MSAL, use the guidance in this section for best practices. Otherwise, MSAL implements best practices automatically. ++### Cache tokens ++Ensure apps cache tokens accurately from the Microsoft identity platform. After your app receives tokens, the HTTP response with tokens has an `expires_in` property that indicates the duration to cache, and when to reuse it. Confirm applications don't attempt to decode an API access token. ++ ![Diagram of an app calling to Microsoft identity platform, through a token cache on the device running the application.](media/resilience-client-app/token-cache.png) ++Cached tokens prevent unnecessary traffic between an app and the Microsoft identity platform. This scenario makes the app less susceptible to token acquisition failures by reducing token acquisition calls. Cached tokens improve application performance, because the app blocks on token acquisition less frequently. Users remain signed in to your application for the token lifetime. ++### Serialize and persist tokens ++Ensure apps serialize their token cache securely to persist the tokens between app instances. Reuse tokens during their lifetime. Refresh tokens and access tokens are issued for many hours. During this time, users might start your application several times. When an app starts, confirm that it looks for a valid access token or refresh token. This increases app resilience and performance. ++Learn more: ++* [Refresh the access tokens](../develop/v2-oauth2-auth-code-flow.md#refresh-the-access-token) +* [Microsoft identity platform access tokens](../develop/access-tokens.md) ++ ![Diagram of an app calling to Microsoft identity platform, through a token cache and token store on the device running the application.](media/resilience-client-app/token-store.png) ++Ensure persistent token storage has access control and encryption appropriate to the user-owner or process identity.
On various operating systems, there are credential storage features. ++### Acquire tokens silently ++Authenticating a user or retrieving authorization to call an API entails multiple steps in the Microsoft identity platform. For example, users signing in for the first time enter credentials and perform multi-factor authentication. Each step affects the resource that provides the service. The best user experience with the least dependencies is silent token acquisition. ++ ![Diagram of Microsoft identity platform services that help complete user authentication or authorization.](media/resilience-client-app/external-dependencies.png) ++Silent token acquisition starts with a valid token from the app token cache. If there's no valid token, the app attempts to acquire a token using an available refresh token and the token endpoint. If neither option is available, the app acquires a token using the `prompt=none` parameter. This action uses the authorization endpoint, but no UI appears for the user. If possible, the Microsoft identity platform provides a token to the app without user interaction. If no method results in a token, then the user manually reauthenticates. ++> [!NOTE] +> In general, ensure apps don't use prompts like 'login' and 'consent'. These prompts force user interaction when no interaction is required. ++## Response code handling ++Use the following sections to learn about response codes. ++### HTTP 429 response code ++There are error responses that affect resilience. If your application receives an HTTP 429 response code, Too Many Requests, the Microsoft identity platform is throttling your requests. If an app makes too many requests, it's throttled to prevent the app from receiving tokens. Don't allow an app to attempt token acquisition before the **Retry-After** response field time is complete. Often, a 429 response indicates the application isn't caching and reusing tokens correctly. Confirm how tokens are cached and reused in the application. ++### HTTP 5xx response code ++If an application receives an HTTP 5xx response code, the app must not enter a fast retry loop. Use the same handling as for a 429 response. If no Retry-After header appears, implement an exponential back-off retry, with the first retry at least 5 seconds after the response. ++When a request times out, immediate retries are discouraged. Implement an exponential back-off retry, with the first retry at least 5 seconds after the response.
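+
+One way to apply this guidance is sketched below (an illustration, not an official sample): the helper honors **Retry-After** when the service provides it and otherwise backs off exponentially, starting at least 5 seconds after the response. The request factory, retry count, and delays are assumptions to tune for your app; a factory is used because an HttpRequestMessage can be sent only once.
+
+```csharp
+// Minimal sketch: honor Retry-After and use exponential back-off for 429/5xx responses.
+// The retry count and base delay are illustrative values, not official guidance.
+using System;
+using System.Net;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+public static class ResilientHttp
+{
+    public static async Task<HttpResponseMessage> SendWithRetryAsync(
+        HttpClient http, Func<HttpRequestMessage> createRequest, int maxRetries = 3)
+    {
+        HttpResponseMessage response = await http.SendAsync(createRequest());
+
+        for (int attempt = 1; attempt <= maxRetries; attempt++)
+        {
+            bool retryable = response.StatusCode == (HttpStatusCode)429 ||
+                             (int)response.StatusCode >= 500;
+            if (!retryable)
+            {
+                return response;
+            }
+
+            // Prefer the service-provided Retry-After delay; otherwise back off
+            // exponentially, waiting at least 5 seconds before the first retry.
+            TimeSpan delay = response.Headers.RetryAfter?.Delta
+                             ?? TimeSpan.FromSeconds(5 * Math.Pow(2, attempt - 1));
+
+            await Task.Delay(delay);
+            response.Dispose();
+            response = await http.SendAsync(createRequest());
+        }
+
+        return response;
+    }
+}
+```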
++## Retrieving authorization related information ++Many applications and APIs need user information to authorize. Available methods have advantages and disadvantages. ++### Tokens ++Identity (ID) tokens and access tokens have standard claims that provide information. If needed information is in the token, the most efficient technique is token claims, because that prevents another network call. Fewer network calls equate to better resilience. ++Learn more: ++* [Microsoft identity platform ID tokens](../develop/id-tokens.md) +* [Microsoft identity platform access tokens](../develop/access-tokens.md) ++> [!NOTE] +> Some applications call the UserInfo endpoint to retrieve claims about the authenticated user. The information in the ID token is a superset of information from the UserInfo endpoint. Enable apps to use the ID token instead of calling the UserInfo endpoint. ++Augment standard token claims with optional claims, such as groups. The **Application Group** option includes groups assigned to the application. The **All** or **Security groups** options include groups from apps in the same tenant, which can add groups to the token. Evaluate the effect, because it can negate the efficiency of requesting groups in the token by causing token bloat and requiring more calls to get the groups. ++Learn more: ++* [Provide optional claims to your app](../develop/active-directory-optional-claims.md) +* [Configuring groups optional claims](../develop/active-directory-optional-claims.md#configuring-groups-optional-claims) ++We recommend that you use and include app roles, which customers manage by using the portal or APIs. Assign roles to users and groups to control access. When a token is issued, the assigned roles are in the token roles claim. Information derived from a token prevents more API calls. ++See [Add app roles to your application and receive them in the token](../develop/howto-add-app-roles-in-azure-ad-apps.md) ++Add claims based on tenant information. For example, an extension has an enterprise-specific User ID. ++Adding information from the directory to a token is efficient and increases resiliency by reducing dependencies. It doesn't address resilience issues due to an inability to acquire a token. Add optional claims for the application's primary scenarios. If the app requires information for administrative functionality, the application can obtain that information, as needed. ++### Microsoft Graph ++Microsoft Graph has a unified API endpoint to access Microsoft 365 data about productivity patterns, identity, and security. Applications using Microsoft Graph can use Microsoft 365 information for authorization. ++Apps require one token to access Microsoft 365, which is more resilient than previous APIs for Microsoft 365 components like Microsoft Exchange or Microsoft SharePoint that required multiple tokens. ++When using Microsoft Graph APIs, use a Microsoft Graph SDK that simplifies building resilient applications that access Microsoft Graph. ++See [Microsoft Graph SDK overview](/graph/sdks/sdks-overview) ++For authorization, consider using token claims instead of some Microsoft Graph calls. Request groups, app roles, and optional claims in tokens. Microsoft Graph for authorization requires more network calls that rely on the Microsoft identity platform and Microsoft Graph. However, if your application relies on Microsoft Graph as its data layer, then using Microsoft Graph for authorization doesn't add risk. ++## Use broker authentication on mobile devices ++On mobile devices, an authentication broker like Microsoft Authenticator improves resilience. The authentication broker uses a primary refresh token (PRT) with claims about the user and device. The device uses the PRT to acquire authentication tokens to access other applications. When a PRT requests application access, Azure Active Directory (Azure AD) trusts its device and MFA claims. This increases resilience by reducing steps to authenticate the device. Users aren't challenged with multiple MFA prompts on the same device. ++See, [What is a Primary Refresh Token?](../devices/concept-primary-refresh-token.md) ++ ![Diagram of an app calling Microsoft identity platform, through a token cache and token store, and authentication broker on the device running the application.](media/resilience-client-app/authentication-broker.png) ++MSAL supports broker authentication.
Learn more: ++* [SSO through Authentication broker on iOS](../develop/single-sign-on-macos-ios.md#sso-through-authentication-broker-on-ios) +* [Enable cross-app SSO on Android using MSAL](../develop/msal-android-single-sign-on.md) ++## Continuous Access Evaluation ++Continuous Access Evaluation (CAE) increases application security and resilience with long-lived tokens. With CAE, an access token is revoked based on critical events and policy evaluation, rather than short token lifetimes. For some resource APIs, because risk and policy are evaluated in real time, CAE increases token lifetime up to 28 hours. MSAL refreshes long-lived tokens. ++Learn more: ++* [Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) +* [Securing applications with Continuous Access Evaluation](/security/zero-trust/develop/secure-with-cae) +* [Critical event evaluation](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) +* [Conditional Access policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) +* [How to use CAE enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) ++If you develop resource APIs, go to openid.net for [Shared Signals - A Secure Webhooks Framework](https://openid.net/wg/sse/). ++## Next steps ++* [How to use CAE enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) +* [Increase the resilience of authentication and authorization in daemon applications you develop](resilience-daemon-app.md) +* [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md) +* [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md) |
active-directory | Resilience Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-daemon-app.md | + + Title: Increase the resilience of authentication and authorization in daemon applications you develop +description: Learn to increase authentication and authorization resiliency in daemon applications using the Microsoft identity platform ++++++++ Last updated : 03/03/2023+++# Increase the resilience of authentication and authorization in daemon applications you develop ++Learn to use the Microsoft identity platform and Azure Active Directory (Azure AD) to increase the resilience of daemon applications. Find information about background processes, services, server-to-server apps, and applications without users. ++See [What is the Microsoft identity platform?](../develop/v2-overview.md) ++The following diagram illustrates a daemon application making a call to the Microsoft identity platform. ++ ![A daemon application making a call to Microsoft identity platform.](media/resilience-daemon-app/calling-microsoft-identity.png) ++## Managed identities for Azure resources ++If you're building daemon apps on Microsoft Azure, use managed identities for Azure resources, which handle secrets and credentials. The feature improves resilience by handling certificate expiry, rotation, or trust. ++See [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) ++Managed identities use long-lived access tokens and information from the Microsoft identity platform to acquire new tokens before tokens expire. Your app runs while acquiring new tokens. ++Managed identities use regional endpoints, which help prevent out-of-region failures by consolidating service dependencies. Regional endpoints help keep traffic in a geographical area. For example, if your Azure resource is in WestUS2, all traffic stays in WestUS2. ++## Microsoft Authentication Library ++If you develop daemon apps and don't use managed identities, use the Microsoft Authentication Library (MSAL) for authentication and authorization. MSAL eases the process of providing client credentials. For example, your application doesn't need to create and sign JSON web token assertions with certificate-based credentials. ++See [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) ++### Microsoft.Identity.Web for .NET developers ++If you develop daemon apps on ASP.NET Core, use the Microsoft.Identity.Web library to ease authorization. It includes distributed token cache strategies for distributed apps that run in multiple regions. ++Learn more: ++* [Microsoft Identity Web authentication library](../develop/microsoft-identity-web.md) +* [Distributed token cache](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#distributed-token-cache) ++## Cache and store tokens ++If you don't use MSAL for authentication and authorization, there are best practices for caching and storing tokens. MSAL implements and follows these best practices. ++An application acquires tokens from an identity provider (IdP) to authorize the application to call protected APIs. When your app receives tokens, the response with the tokens contains an `expires_in` property that tells the application how long to cache and reuse the token. Ensure applications use the `expires_in` property to determine token lifespan. Confirm applications don't attempt to decode an API access token. Using the cached token prevents unnecessary traffic between an app and the Microsoft identity platform. Users are signed in to your application for the token's lifetime.
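+
+For apps that don't use MSAL, the following minimal sketch (an illustration; the acquireToken delegate is a hypothetical stand-in for your own token request) caches an app-only token and reuses it until shortly before the expires_in window ends, which is what MSAL does for you automatically.
+
+```csharp
+// Minimal sketch: cache an app-only token and reuse it for most of its expires_in window.
+// The acquireToken delegate is a hypothetical stand-in for your token request logic.
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+
+public sealed class SimpleTokenCache
+{
+    private readonly Func<Task<(string Token, int ExpiresInSeconds)>> _acquireToken;
+    private readonly SemaphoreSlim _lock = new(1, 1);
+    private string? _token;
+    private DateTimeOffset _expiresOn = DateTimeOffset.MinValue;
+
+    public SimpleTokenCache(Func<Task<(string Token, int ExpiresInSeconds)>> acquireToken) =>
+        _acquireToken = acquireToken;
+
+    public async Task<string> GetTokenAsync()
+    {
+        // Reuse the cached token until a few minutes before it expires.
+        if (_token is not null && DateTimeOffset.UtcNow < _expiresOn - TimeSpan.FromMinutes(5))
+        {
+            return _token;
+        }
+
+        await _lock.WaitAsync();
+        try
+        {
+            if (_token is null || DateTimeOffset.UtcNow >= _expiresOn - TimeSpan.FromMinutes(5))
+            {
+                var (token, expiresIn) = await _acquireToken();
+                _token = token;
+                _expiresOn = DateTimeOffset.UtcNow.AddSeconds(expiresIn);
+            }
+            return _token;
+        }
+        finally
+        {
+            _lock.Release();
+        }
+    }
+}
+```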
++## HTTP 429 and 5xx error codes ++Use the following sections to learn about HTTP 429 and 5xx error codes. ++### HTTP 429 ++There are HTTP errors that affect resilience. If your application receives an HTTP 429 error code, Too Many Requests, the Microsoft identity platform is throttling your requests, which prevents your app from receiving tokens. Ensure your apps don't attempt to acquire a token until the time in the **Retry-After** response field expires. The 429 error often indicates the application doesn't cache and reuse tokens correctly. ++### HTTP 5xx ++If an application receives an HTTP 5xx error code, the app must not enter a fast retry loop. Ensure applications wait until the **Retry-After** field expires. If the response provides no Retry-After header, use an exponential back-off retry, with the first retry at least 5 seconds after the response. ++When a request times out, confirm that applications don't retry immediately. Use the previously cited exponential back-off retry. ++## Next steps ++* [Increase the resilience of authentication and authorization in client applications you develop](resilience-client-app.md) +* [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md) +* [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md) |
active-directory | Resilience In Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-in-credentials.md | + + Title: Build resilience with credential management in Azure Active Directory +description: A guide for architects + and IT administrators on building a resilient credential strategy. ++++++ Last updated : 11/16/2022+++++# Build resilience with credential management ++When a credential is presented to Azure Active Directory (Azure AD) in a token request, there are multiple dependencies that must be available for validation. The first authentication factor relies on Azure AD authentication and, in some cases, on on-premises infrastructure. For more information on hybrid authentication architectures, see [Build resilience in your hybrid infrastructure](resilience-in-hybrid.md). ++If you implement a second factor, the dependencies for the second factor are added to the dependencies for the first. For example, if your first factor is via PTA and your second factor is SMS, your dependencies are as follows. ++* Azure AD authentication services +* Azure AD Multi-Factor Authentication service +* On-premises infrastructure +* Phone carrier +* The user's device (not pictured) + +![Image of authentication methods and dependencies](./media/resilience-in-credentials/admin-resilience-credentials.png) ++Your credential strategy should consider the dependencies of each authentication type and provision methods that avoid a single point of failure. ++Because authentication methods have different dependencies, it's a good idea to enable users to register for as many second factor options as possible. Be sure to include second factors with different dependencies, if possible. For example, Voice call and SMS as second factors share the same dependencies, so having them as the only options doesn't mitigate risk. ++The most resilient credential strategy is to use passwordless authentication. Windows Hello for Business and FIDO 2.0 security keys have fewer dependencies than strong authentication with two separate factors. The Microsoft Authenticator app, Windows Hello for Business, and FIDO 2.0 security keys are the most secure. ++For second factors, the Microsoft Authenticator app or other authenticator apps using time-based one time passcode (TOTP) or OAuth hardware tokens have the fewest dependencies and are, therefore, more resilient. ++## How do multiple credentials help resilience? ++Provisioning multiple credential types gives users options that accommodate their preferences and environmental constraints. As a result, interactive authentication where users are prompted for Multi-factor authentication will be more resilient to specific dependencies being unavailable at the time of the request. You can [optimize reauthentication prompts for Multi-factor authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). ++In addition to individual user resiliency described above, enterprises should plan contingencies for large-scale disruptions such as operational errors that introduce a misconfiguration, a natural disaster, or an enterprise-wide resource outage to an on-premises federation service (especially when used for Multi-factor authentication). ++## How do I implement resilient credentials? 
++* Deploy [Passwordless credentials](../authentication/howto-authentication-passwordless-deployment.md) such as Windows Hello for Business, Phone Authentication, and FIDO2 security keys to reduce dependencies. +* Deploy the [Microsoft Authenticator App](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) as a second factor. +* Turn on [password hash synchronization](../hybrid/whatis-phs.md) for hybrid accounts that are synchronized from Windows Server Active Directory. This option can be enabled alongside federation services such as Active Directory Federation Services (AD FS) and provides a fallback in case the federation service fails. +* [Analyze usage of Multi-factor authentication methods](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/) to improve user experience. +* [Implement a resilient access control strategy](../authentication/concept-resilient-controls.md) ++## Next steps +### Resilience resources for administrators and architects + +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
active-directory | Resilience In Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-in-hybrid.md | + + Title: Build more resilient hybrid authentication in Azure Active Directory +description: A guide for architects and IT administrators on building a resilient hybrid infrastructure. ++++++ Last updated : 11/16/2022+++++# Build resilience in your hybrid architecture ++Hybrid authentication allows users to access cloud-based resources with their identities mastered on premises. A hybrid infrastructure includes both cloud and on-premises components. ++* Cloud components include Azure Active Directory (Azure AD), Azure resources and services, your organization's cloud-based apps, and SaaS applications. +* On-premises components include on-premises applications, resources like SQL databases, and an identity provider like Windows Server Active Directory. ++> [!IMPORTANT] +> As you plan for resilience in your hybrid infrastructure, it's key to minimize dependencies and single points of failure. ++Microsoft offers three mechanisms for hybrid authentication. The options are listed in order of resilience. We recommend that you implement password hash synchronization, if possible. ++* [Password hash synchronization](../hybrid/whatis-phs.md) (PHS) uses Azure AD Connect to sync the identity and a hash-of-the-hash of the password to Azure AD. It enables users to sign in to cloud-based resources with their password mastered on premises. PHS has on-premises dependencies only for synchronization, not for authentication. +* [Pass-through Authentication](../hybrid/how-to-connect-pta.md) (PTA) redirects users to Azure AD for sign-in. Then, the username and password are validated against Active Directory on premises through an agent that is deployed in the corporate network. PTA has an on-premises footprint of its Azure AD PTA agents that reside on servers on premises. +* [Federation](../hybrid/whatis-fed.md) customers deploy a federation service such as Active Directory Federation Services (AD FS). Then Azure AD validates the SAML assertion produced by the federation service. Federation has the highest dependency on on-premises infrastructure and, therefore, more failure points. ++You may be using one or more of these methods in your organization. For more information, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md). This article contains a decision tree that can help you decide on your methodology. ++## Password hash synchronization ++The simplest and most resilient hybrid authentication option for Azure AD is [Password Hash Synchronization](../hybrid/whatis-phs.md). It doesn't have any on-premises identity infrastructure dependency when processing authentication requests. After identities with password hashes are synchronized to Azure AD, users can authenticate to cloud resources with no dependency on the on-premises identity components. ++![Architecture diagram of PHS](./media/resilience-in-hybrid/admin-resilience-password-hash-sync.png) ++If you choose this authentication option, you won't experience disruption when on-premises identity components become unavailable. On-premises disruption can occur for many reasons, including hardware failure, power outages, natural disasters, and malware attacks. ++### How do I implement PHS?
++To implement PHS, see the following resources: ++* [Implement password hash synchronization with Azure AD Connect](../hybrid/how-to-connect-password-hash-synchronization.md) +* [Enable password hash synchronization](../hybrid/how-to-connect-password-hash-synchronization.md) ++If your requirements are such that you can't use PHS, use Pass-through Authentication. ++## Pass-through Authentication ++Pass-through Authentication has a dependency on authentication agents that reside on on-premises servers. A persistent connection, or service bus, is present between Azure AD and the on-premises PTA agents. The firewall, servers hosting the authentication agents, and the on-premises Windows Server Active Directory (or other identity provider) are all potential failure points. ++![Architecture diagram of PTA](./media/resilience-in-hybrid/admin-resilience-pass-through-authentication.png) ++### How do I implement PTA? ++To implement Pass-through Authentication, see the following resources. ++* [How Pass-through Authentication works](../hybrid/how-to-connect-pta-how-it-works.md) +* [Pass-through Authentication security deep dive](../hybrid/how-to-connect-pta-security-deep-dive.md) +* [Install Azure AD Pass-through Authentication](../hybrid/how-to-connect-pta-quick-start.md) ++* If you're using PTA, define a [highly available topology](../hybrid/how-to-connect-pta-quick-start.md). ++## Federation ++Federation involves the creation of a trust relationship between Azure AD and the federation service, which includes the exchange of endpoints, token signing certificates, and other metadata. When a request comes to Azure AD, it reads the configuration and redirects the user to the endpoints configured. At that point, the user interacts with the federation service, which issues a SAML assertion that is validated by Azure AD. ++The following diagram shows a topology of an enterprise AD FS deployment that includes redundant federation and web application proxy servers across multiple on-premises data centers. This configuration relies on enterprise networking infrastructure components like DNS, Network Load Balancing with geo-affinity capabilities, and firewalls. All on-premises components and connections are susceptible to failure. Visit the [AD FS Capacity Planning Documentation](/windows-server/identity/ad-fs/design/planning-for-ad-fs-server-capacity) for more information. ++> [!NOTE] +> Federation has the highest number of on-premises dependencies and, therefore, the most potential points of failure. While this diagram shows AD FS, other on-premises identity providers are subject to similar design considerations to achieve high availability, scalability, and failover. ++![Architecture diagram of federation](./media/resilience-in-hybrid/admin-resilience-federation.png) ++### How do I implement federation? ++If you're implementing a federated authentication strategy or want to make it more resilient, see the following resources.
++* [What is federated authentication](../hybrid/whatis-fed.md) +* [How federation works](../hybrid/how-to-connect-fed-whatis.md) +* [Azure AD federation compatibility list](../hybrid/how-to-connect-fed-compatibility.md) +* Follow the [AD FS capacity planning documentation](/windows-server/identity/ad-fs/design/planning-for-ad-fs-server-capacity) +* [Deploying AD FS in Azure IaaS](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs) +* [Enable PHS](../hybrid/tutorial-phs-backup.md) along with your federation ++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
active-directory | Resilience In Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-in-infrastructure.md | + + Title: Build resilience in your IAM infrastructure with Azure Active Directory +description: A guide for architects and IT administrators on building resilience to disruption of their IAM infrastructure. ++++++ Last updated : 11/16/2022+++++# Build resilience in your identity and access management infrastructure ++Azure Active Directory (Azure AD) is a global cloud identity and access management system that provides critical services such as authentication and authorization to your organization's resources. This article provides you with guidance to understand, contain, and mitigate the risk of disruption of authentication or authorization services for resources that rely on Azure AD. ++The document set is designed for: ++* Identity Architects +* Identity Service Owners +* Identity Operations teams ++Also see the documentation for [application developers](./resilience-app-development-overview.md) and for [Azure AD B2C systems](resilience-b2c.md). ++## What is resilience? ++In the context of your identity infrastructure, resilience is the ability to endure disruption to services like authentication and authorization, or failure of other components, with minimal or no effect on your business, users, and operations. The effect of disruption can be severe, and resilience requires diligent planning. ++## Why worry about disruption? ++Every call to the authentication system is subject to disruption if any component of the call fails. When authentication is disrupted because of underlying component failures, your users can't access their applications. Therefore, reducing the number of authentication calls and the number of dependencies in those calls is important to your resilience. Application developers can assert some control over how often tokens are requested. For example, work with your developers to ensure they're using Azure AD Managed Identities for their applications wherever possible. ++In a token-based authentication system like Azure AD, a user's application (client) must acquire a security token from the identity system before it can access an application or other resource. During the validity period, a client can present the same token multiple times to access the application. ++When the token presented to the application expires, the application rejects the token, and the client must acquire a new token from Azure AD. Acquiring a new token potentially requires user interaction, such as credential prompts or meeting other requirements of the authentication system. Reducing the frequency of authentication calls with longer-lived tokens decreases unnecessary interactions. However, you must balance token life with the risk created by fewer policy evaluations. For more information on managing token lifetimes, see this article on [optimizing reauthentication prompts](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). A minimal token-reuse sketch appears at the end of this article. ++## Ways to increase resilience +The following diagram shows six concrete ways you can increase resilience. Each method is explained in detail in the articles linked in the Next steps section of this article. 
+ +![Diagram showing overview of admin resilience](./media/resilience-in-infrastructure/admin-resilience-overview.png) ++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
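The token-reuse pattern described in the resilience-in-infrastructure article above (acquire a token once, then present it from the cache until it expires) can be sketched with the MSAL library for Python. This is a minimal, hedged sketch rather than the article's prescribed implementation: the client ID, tenant, and scope are placeholders, and a production app would persist the token cache instead of holding it in memory.

```python
# Minimal sketch: reuse a cached token and only fall back to interactive
# sign-in when no valid (or silently refreshable) token is available.
# The client ID, tenant, and scope below are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",          # placeholder
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)
scopes = ["User.Read"]

result = None
accounts = app.get_accounts()
if accounts:
    # Served from the local token cache when possible, so no extra
    # round trip to Azure AD and no user interaction is needed.
    result = app.acquire_token_silent(scopes, account=accounts[0])

if not result:
    # Only now does the user interact with the authentication system.
    result = app.acquire_token_interactive(scopes)

if "access_token" in result:
    print("Token acquired; expires in", result.get("expires_in"), "seconds")
else:
    print("Error:", result.get("error"), result.get("error_description"))
```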
active-directory | Resilience On Premises Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-on-premises-access.md | + + Title: Build resilience in application access with Application Proxy +description: A guide for architects and IT administrators on using Application Proxy for resilient access to on-premises applications ++++++ Last updated : 11/16/2022+++++# Build resilience in application access with Application Proxy ++Application Proxy is a feature of Azure Active Directory (Azure AD) that enables users to access on-premises web applications from a remote client. Application Proxy includes the Application Proxy service in the cloud and the Application Proxy connectors that run on an on-premises server. ++Users access on-premises resources through a URL published via Application Proxy. They're redirected to the Azure AD sign-in page. The Application Proxy service in Azure AD then sends a token to the Application Proxy connector in the corporate network, which passes the token to the on-premises Active Directory. The authenticated user can then access the on-premises resource. In the diagram below, [connectors](../app-proxy/application-proxy-connectors.md) are shown in a [connector group](../app-proxy/application-proxy-connector-groups.md). ++> [!IMPORTANT] +> When you publish your applications via Application Proxy, you must implement [capacity planning and appropriate redundancy for the Application Proxy connectors](../app-proxy/application-proxy-connectors.md#capacity-planning). ++![Architecture diagram of Application Proxy](./media/resilience-on-prem-access/admin-resilience-app-proxy.png) ++## How do I implement Application Proxy? ++To implement remote access with Azure AD Application Proxy, see the following resources. ++* [Planning an Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md) +* [High availability and load balancing best practices](../app-proxy/application-proxy-high-availability-load-balancing.md) +* [Configure proxy servers](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md) +* [Design a resilient access control strategy](../authentication/concept-resilient-controls.md) ++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
active-directory | Resilience Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-overview.md | + + Title: Resilience in identity and access management with Azure Active Directory +description: Learn how to build resilience into identity and access management. Resilience helps endure disruption to system components and recover with minimal effort. ++++++++ Last updated : 08/26/2022++++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Building resilience into identity and access management with Azure Active Directory ++Identity and access management (IAM) is a framework of processes, policies, and technologies. IAM facilitates the management of identities and what they access. It includes the many components supporting the authentication and authorization of user and other accounts in your system. ++IAM resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. Reducing dependencies, complexity, and single-points-of-failure, while ensuring comprehensive error handling, increases your resilience. ++Disruption can come from any component of your IAM systems. To build a resilient IAM system, assume disruptions will occur and plan for them. ++When planning the resilience of your IAM solution, consider the following elements: ++* Your applications that rely on your IAM system +* The public infrastructures your authentication calls use, including telecom companies, Internet service providers, and public key providers +* Your cloud and on-premises identity providers +* Other services that rely on your IAM, and the APIs that connect them +* Any other on-premises components in your system ++Whatever the source, recognizing and planning for the contingencies is important. However, adding other identity systems, and their resultant dependencies and complexity, may reduce your resilience rather than increase it. ++To build more resilience in your systems, review the following articles: ++* [Build resilience in your IAM infrastructure](resilience-in-infrastructure.md) +* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your Customer Identity and Access Management (CIAM) systems](resilience-b2c.md) |
active-directory | Resilience With Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-with-continuous-access-evaluation.md | + + Title: Build resilience by using Continuous Access Evaluation in Azure Active Directory +description: A guide for architects and IT administrators on using CAE ++++++ Last updated : 11/16/2022+++++# Build resilience by using Continuous Access Evaluation ++[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) allows Azure Active Directory (Azure AD) applications to subscribe to critical events that can then be evaluated and enforced. CAE includes evaluation of the following events: ++* User account deleted or disabled +* Password for user changed +* MFA enabled for user +* Administrator explicitly revokes a token +* Elevated user risk detected ++As a result, applications can reject unexpired tokens based on the events signaled by Azure AD, as depicted in the following diagram. ++![Conceptual diagram of CAE](./media/resilience-with-cae/admin-resilience-continuous-access-evaluation.png) ++## How does CAE help? ++The CAE mechanism allows Azure AD to issue longer-lived tokens while enabling applications to revoke access and force reauthentication only when needed. The net result of this pattern is fewer calls to acquire tokens, which means that the end-to-end flow is more resilient. ++To use CAE, both the service and the client must be CAE-capable. Microsoft 365 services such as Exchange Online, Teams, and SharePoint Online support CAE. On the client side, browser-based experiences that use these Office 365 services (such as Outlook Web App) and specific versions of Office 365 native clients are CAE-capable. More Microsoft cloud services will become CAE-capable. ++Microsoft is working with the industry to build [standards](https://openid.net/wg/sse/) that will allow third-party applications to use CAE capability. You can also develop applications that are CAE-capable. For more information about CAE-capable application development, see [How to build resilience in your application](resilience-app-development-overview.md). ++## How do I implement CAE? ++* [Update your code to use CAE-enabled APIs](../develop/app-resilience-continuous-access-evaluation.md) (a minimal client sketch appears at the end of this article). +* [Enable CAE](../conditional-access/concept-continuous-access-evaluation.md) in the Azure AD Security Configuration. +* Ensure that your organization is using [compatible versions](../conditional-access/concept-continuous-access-evaluation.md) of Microsoft Office native applications. +* [Optimize your reauthentication prompts](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). +++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience with device states](resilience-with-device-states.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
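As a rough illustration of the "update your code" step above, the following sketch uses the MSAL library for Python to declare the client CAE-capable and notes how a claims challenge from a resource would be honored. It's a hedged sketch, not the guidance from the linked article: the client ID, tenant, and scope are placeholders, and the claims-challenge handling in the trailing comment should be validated against the linked CAE developer documentation.

```python
# Minimal sketch: declare the client CAE-capable ("cp1") so Azure AD can issue
# longer-lived tokens that resources may still revoke on critical events.
# The client ID, tenant, and scope are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",          # placeholder
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
    client_capabilities=["cp1"],  # "cp1" = this client understands CAE claims challenges
)
scopes = ["User.Read"]

accounts = app.get_accounts()
result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None
if not result:
    result = app.acquire_token_interactive(scopes)

# If a CAE-enabled resource later rejects the (still unexpired) token, it
# returns 401 with a claims challenge in the WWW-Authenticate header. Pass
# the decoded claims value back to MSAL to reauthenticate and satisfy it:
# result = app.acquire_token_interactive(scopes, claims_challenge=claims_from_header)
```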
active-directory | Resilience With Device States | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-with-device-states.md | + + Title: Build resilience by using device states in Azure Active Directory +description: A guide for architects and IT administrators to building resilience by using device states ++++++ Last updated : 11/16/2022+++++# Build resilience with device states ++By enabling [device states](../devices/overview.md) with Azure Active Directory (Azure AD), administrators can author [Conditional Access policies](../conditional-access/overview.md) that control access to applications based on device state. Enabling device states satisfies strong authentication requirements for resource access, reduces multi-factor authentication (MFA) requests, and improves resiliency. ++The following flow chart presents ways to onboard devices in Azure AD that enable device states. You can use more than one in your organization. ++![flow chart for choosing device states](./media/resilience-with-device-states/admin-resilience-devices.png) ++When you use [device states](../devices/overview.md), in most cases users will experience single sign-on to resources through a [Primary Refresh Token](../devices/concept-primary-refresh-token.md) (PRT). The PRT contains claims about the user and the device. You can use these claims to get authentication tokens to access applications from the device. The PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device, providing users a resilient experience. For more information about how a PRT can get multi-factor authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md). ++## How do device states help? ++When a PRT requests access to an application, its device, session, and MFA claims are trusted by Azure AD. When administrators create policies that require either a device-based control or a multi-factor authentication control, then the policy requirement can be met through its device state without attempting MFA. Users won't see more MFA prompts on the same device. This increases resilience to a disruption of the Azure AD Multi-Factor Authentication service or dependencies such as local telecom providers. ++## How do I implement device states? ++* Enable [hybrid Azure AD Joined](../devices/hybrid-azuread-join-plan.md) and [Azure AD Join](../devices/device-join-plan.md) for company-owned Windows devices and require they be joined, if possible. If not possible, require they be registered. If there are older versions of Windows in your organization, upgrade those devices to use Windows 10. +* Standardize user browser access to use either [Microsoft Edge](/deployedge/microsoft-edge-security-identity) or Google Chrome with [supported](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) [extensions](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) that enable seamless SSO to web applications using the PRT. +* For personal or company-owned iOS and Android devices, deploy the [Microsoft Authenticator App](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc). 
In addition to MFA and password-less sign-in capabilities, the Microsoft Authenticator app enables single sign-on across native applications through [brokered authentication](../develop/msal-android-single-sign-on.md) with fewer authentication prompts for end users. +* For personal or company-owned iOS and Android devices, use [mobile application management](/mem/intune/apps/app-management) to securely access company resources with fewer authentication requests. +* For macOS devices, use the [Microsoft Enterprise SSO plug-in for Apple devices (preview)](../develop/apple-sso-plugin.md) to register the device and provide SSO across browser and native Azure AD applications. Then, based on your environment, follow the steps specific to Microsoft Intune or Jamf Pro. ++## Next steps ++### Resilience resources for administrators and architects + +* [Build resilience with credential management](resilience-in-credentials.md) +* [Build resilience by using Continuous Access Evaluation (CAE)](resilience-with-continuous-access-evaluation.md) +* [Build resilience in external user authentication](resilience-b2b-authentication.md) +* [Build resilience in your hybrid authentication](resilience-in-hybrid.md) +* [Build resilience in application access with Application Proxy](resilience-on-premises-access.md) ++### Resilience resources for developers ++* [Build IAM resilience in your applications](resilience-app-development-overview.md) +* [Build resilience in your CIAM systems](resilience-b2c.md) |
active-directory | Resilience With Monitoring Alerting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-with-monitoring-alerting.md | + + Title: Resilience through monitoring and analytics using Azure AD B2C +description: Resilience through monitoring and analytics using Azure AD B2C ++++++++ Last updated : 12/01/2022+++++# Resilience through monitoring and analytics ++Monitoring maximizes the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your infrastructure and applications. Alerts proactively notify you when issues are found with your service or applications. They allow you to identify and address issues before the end users of your service notice them. [Azure AD Log Analytics](https://azure.microsoft.com/services/monitor/?OCID=AID2100131_SEM_6d16332c03501fc9c1f46c94726d2264:G:s&ef_id=6d16332c03501fc9c1f46c94726d2264:G:s&msclkid=6d16332c03501fc9c1f46c94726d2264#features) helps you analyze and search the audit logs and sign-in logs, and build custom views. ++Watch this video to learn how to set up monitoring and reporting in Azure AD B2C using Azure Monitor. ++>[!Video https://www.youtube.com/embed/Mu9GQy-CbXI] ++## Monitor and get notified through alerts ++Monitoring your system and infrastructure is critical to ensure the overall health of your services. It starts with defining business metrics, such as new user arrivals, end-user authentication rates, and conversion, and configuring those indicators for monitoring. If you're planning for an upcoming surge because of promotion or holiday traffic, revise your estimates and the corresponding business-metric benchmarks specifically for the event. After the event, fall back to the previous benchmark. ++Similarly, to detect failures or performance disruptions, establish a good baseline and then define alerts so that you can respond to emerging issues promptly. ++![Image shows monitoring and analytics components](media/resilience-with-monitoring-alerting/monitoring-analytics-architecture.png) ++### How to implement monitoring and alerting ++- **Monitoring**: Use [Azure Monitor](../../active-directory-b2c/azure-monitor.md) to continuously monitor health against key Service Level Objectives (SLOs) and get notified whenever a critical change happens. Begin by identifying an Azure AD B2C policy or an application as a critical component of your business whose health needs to be monitored to maintain the SLO. Identify key indicators that align with your SLOs. +For example, track the following metrics, since a sudden drop in either will lead to a loss in business (a query sketch for these indicators appears at the end of this article). ++ - **Total requests**: The total number of requests sent to the Azure AD B2C policy. ++ - **Success rate (%)**: Successful requests/Total number of requests. ++ Access the [key indicators](../../active-directory-b2c/view-audit-logs.md) in [application insights](../../active-directory-b2c/analytics-with-application-insights.md) where Azure AD B2C policy-based logs, [audit logs](../../active-directory-b2c/analytics-with-application-insights.md), and sign-in logs are stored. ++ - **Visualizations**: Using Log Analytics, build dashboards to visually monitor the key indicators. ++ - **Current period**: Create temporal charts to show changes in the Total requests and Success rate (%) in the current period, for example, the current week. 
++ - **Previous period**: Create temporal charts to show changes in the Total requests and Success rate (%) over a previous period for reference purposes, for example, last week. ++- **Alerting**: Using Log Analytics, define [alerts](../../azure-monitor/alerts/alerts-log.md) that get triggered when there are sudden changes in the key indicators. These changes may negatively impact the SLOs. Alerts use various forms of notification methods including email, SMS, and webhooks. Start by defining a criterion that acts as the threshold against which an alert is triggered. For example: + - Alert against abrupt drop in Total requests: Trigger an alert when the total number of requests drops abruptly. For example, when there's a 25% drop in the total number of requests compared to the previous period, raise an alert. + - Alert against significant drop in Success rate (%): Trigger an alert when the success rate of the selected policy drops significantly. + - Upon receiving an alert, troubleshoot the issue using [Log Analytics](../reports-monitoring/howto-install-use-log-analytics-views.md), [Application Insights](../../active-directory-b2c/troubleshoot-with-application-insights.md), and the [VS Code extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for Azure AD B2C. After you resolve the issue and deploy an updated application or policy, continue to monitor the key indicators until they return to the normal range. ++- **Service alerts**: Use the [Azure AD B2C service level alerts](../../service-health/service-health-overview.md) to get notified of service issues, planned maintenance, health advisories, and security advisories. ++- **Reporting**: [By using Log Analytics](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md), build reports that help you gain insight into user behavior, technical challenges, and growth opportunities. + - **Health Dashboard**: Create [custom dashboards using the Azure Dashboard](../../azure-monitor/app/tutorial-app-dashboards.md) feature, which supports adding charts using Log Analytics queries. For example, identify patterns of successful and failed sign-ins, failure reasons, and telemetry about the devices used to make the requests. + - **Abandoned Azure AD B2C journeys**: Use the [workbook](https://github.com/azure-ad-b2c/siem#list-of-abandon-journeys) to track the list of abandoned Azure AD B2C journeys where the user started the sign-in or sign-up journey but never finished it. It provides details about the policy ID and a breakdown of the steps the user took before abandoning the journey. + - **Azure AD B2C monitoring workbooks**: Use the [monitoring workbooks](https://github.com/azure-ad-b2c/siem) that include the Azure AD B2C dashboard, Multi-factor authentication (MFA) operations, Conditional Access report, and Search logs by correlationId. This practice provides better insights into the health of your Azure AD B2C environment. + +## Next steps ++- [Resilience resources for Azure AD B2C developers](resilience-b2c.md) + - [Resilient end-user experience](resilient-end-user-experience.md) + - [Resilient interfaces with external processes](resilient-external-processes.md) + - [Resilience through developer best practices](resilience-b2c-developer-best-practices.md) +- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md) +- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) |
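To make the Total requests and Success rate (%) indicators above concrete, here's a minimal sketch that runs a Log Analytics query with the azure-monitor-query library for Python. It's only a sketch under assumptions: the workspace ID is a placeholder, and the SigninLogs table and ResultType column should be verified against the schema of your own workspace before you build alerts on the query.

```python
# Sketch: compute Total requests and Success rate (%) from sign-in logs routed
# to a Log Analytics workspace. Workspace ID is a placeholder; verify the
# table/column names (SigninLogs, ResultType) against your workspace schema.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
SigninLogs
| summarize TotalRequests = count(),
            SuccessRate = round(100.0 * countif(ResultType == "0") / count(), 2)
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",   # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        # Each row pairs with the table's column names (TotalRequests, SuccessRate).
        print(dict(zip(table.columns, row)))
```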
active-directory | Resilient End User Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-end-user-experience.md | + + Title: Resilient end-user experience using Azure AD B2C +description: Methods to build resilience in end-user experience using Azure AD B2C ++++++++ Last updated : 12/01/2022+++++# Resilient end-user experience ++The sign-up and sign-in end-user experience is made up of the following elements: ++- The interfaces the user interacts with – such as CSS, HTML, and JavaScript ++- The user flows and custom policies you create – such as sign-up, sign-in, and profile edit ++- The identity providers (IDPs) for your application – such as local account username/password, Outlook, Facebook, and Google ++![Image shows end-user experience components](media/resilient-end-user-experiences/end-user-experience-architecture.png) ++## Choose between user flow and custom policy ++To help you set up the most common identity tasks, Azure AD B2C provides built-in configurable [user flows](../../active-directory-b2c/user-flow-overview.md). You can also build your own [custom policies](../../active-directory-b2c/custom-policy-overview.md) that offer you maximum flexibility. However, we recommend using custom policies only to address complex scenarios. ++### How to decide between user flow and custom policy ++Choose built-in user flows if your business requirements can be met by them. Because Microsoft tests these identity user flows extensively, you can minimize the policy-level functional, performance, and scale testing that you need to perform. You still need to test your applications for functionality, performance, and scale. ++If you [choose custom policies](../../active-directory-b2c/user-flow-overview.md) because of your business requirements, make sure you perform policy-level functional, performance, and scale testing in addition to application-level testing. ++See the article that [compares user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies) to help you decide. ++## Choose multiple IDPs ++When using an [external identity provider](../../active-directory-b2c/add-identity-provider.md) such as Facebook, make sure to have a fallback plan in case the external provider becomes unavailable. ++### How to set up multiple IDPs ++As part of the external identity provider registration process, include a verified identity claim such as the user's mobile number or email address. Commit the verified claims to the underlying Azure AD B2C directory instance. If the external provider is unavailable, revert to the verified identity claim, and fall back to the phone number as an authentication method. Another option is to send the user a one-time passcode to allow the user to sign in. ++ Follow these steps to [build alternate authentication paths](https://github.com/azure-ad-b2c/samples/tree/master/policies/idps-filter): ++ 1. Configure your sign-up policy to allow sign-up with a local account and external IDPs. ++ 2. Configure a profile policy to allow users to [link the other identity to their account](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/tree/master/account-linking) after they sign in. ++ 3. Notify and allow users to [switch to an alternate IDP](../../active-directory-b2c/customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) during an outage. 
++## Availability of Multi-factor authentication ++When using a [phone service for Multi-factor authentication (MFA)](../../active-directory-b2c/phone-authentication-user-flows.md), make sure to consider an alternative service provider. The local telco or phone service provider may experience disruptions in its service. ++### How to choose an alternate MFA ++The Azure AD B2C service uses a built-in phone-based MFA provider to deliver time-based one-time passcodes (OTPs) as a voice call or text message to the user's pre-registered phone number. The following alternative methods are available for various scenarios: ++When you use **user flows**, there are two methods to build resilience: ++- **Change user flow configuration**: Upon detecting a disruption in the phone-based OTP delivery, change the OTP delivery method from phone-based to email-based and redeploy the user flow, leaving the applications unchanged. ++![Screenshot shows sign-in sign-up](media/resilient-end-user-experiences/create-sign-in.png) ++- **Change applications**: For each identity task such as sign-up and sign-in, define two sets of user flows. Configure the first set to use phone-based OTP and the second to use email-based OTP. Upon detecting a disruption in the phone-based OTP delivery, change and redeploy the applications to switch from the first set of user flows to the second, leaving the user flows unchanged. ++When you use **custom policies**, there are four methods to build resilience. The following list is in order of complexity, and you'll need to redeploy updated policies. ++- **Enable user selection of either phone-based OTP or email-based OTP**: Expose both options to the users and enable users to self-select one of the options. There's no need to make changes to the policies or applications. ++- **Dynamically switch between phone-based OTP and email-based OTP**: Collect both phone and email information at sign-up. Define the custom policy in advance to conditionally switch from phone-based to email-based OTP delivery during a phone disruption. There's no need to make changes to the policies or applications. ++- **Use an Authenticator app**: Update the custom policy to use an [Authenticator app](https://github.com/azure-ad-b2c/samples/tree/master/policies/custom-mfa-totp). If your normal MFA is either phone-based or email-based OTP, then redeploy your custom policies to switch to the Authenticator app. ++>[!Note] +>Users need to configure Authenticator app integration during sign-up. ++- **Use Security Questions**: If none of the above methods are applicable, implement Security Questions as a backup. Set up Security Questions for users during onboarding or profile edit and store the answers in a separate database outside the directory. This method doesn't meet the MFA requirement of "something you have" (for example, a phone), but it offers a secondary "something you know". ++## Use a content delivery network ++Content delivery networks (CDNs) perform better and cost less than blob stores for storage of custom user flow UI. The web page content is delivered faster from a geographically distributed network of highly available servers. ++Periodically test your CDN's availability and the performance of content distribution through end-to-end scenario and load testing. If you're planning for an upcoming surge because of promotion or holiday traffic, revise your estimates for load testing. 
+ +## Next steps ++- [Resilience resources for Azure AD B2C developers](resilience-b2c.md) + + - [Resilient interfaces with external processes](resilient-external-processes.md) + - [Resilience through developer best practices](resilience-b2c-developer-best-practices.md) + - [Resilience through monitoring and analytics](resilience-with-monitoring-alerting.md) +- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md) +- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) |
active-directory | Resilient External Processes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-external-processes.md | + + Title: Resilient interfaces with external processes using Azure AD B2C +description: Methods to build resilient interfaces with external processes ++++++++ Last updated : 12/01/2022+++++# Resilient interfaces with external processes ++This article provides guidance on how to plan for and implement the RESTful APIs in your user journeys and make your application more resilient to API failures. ++![Image shows interfaces with external process components](media/resilient-external-processes/external-processes-architecture.png) ++## Ensure correct placement of the APIs ++Identity experience framework (IEF) policies allow you to call an external system using a [RESTful API technical profile](../../active-directory-b2c/restful-technical-profile.md). External systems aren't controlled by the IEF runtime environment and are a potential failure point. ++### How to manage external systems using APIs ++- While calling an interface to access certain data, check whether the data is going to drive the authentication decision. Assess whether the information is essential to the core functionality of the application (for example, an e-commerce flow) or only needed for a secondary function such as administration. If the information isn't needed for authentication and is only required for secondary scenarios, then consider moving the call to the application logic. ++- If the data that is necessary for authentication is relatively static and small, and has no other business reason to be externalized from the directory, then consider keeping it in the directory. ++- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and cripple your application. For example, using CAPTCHA in your sign-in and sign-up flows can help. ++- Use [API connectors of the built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs either after federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale. ++- Azure AD RESTful API [technical profiles](../../active-directory-b2c/restful-technical-profile.md) don't provide any caching behavior. Instead, the RESTful API profile implements retry logic and a timeout that are built into the policy. ++- For APIs that need to write data, queue up such tasks and have them executed by a background worker. Services like [Azure queues](../../storage/queues/storage-queues-introduction.md) can be used. This practice lets the API return quickly and increases policy execution performance. ++## API error handling ++Because the APIs live outside the Azure AD B2C system, you need proper error handling within the technical profile. Make sure the end user is informed appropriately and the application can deal with failure gracefully. ++### How to gracefully handle API errors ++- An API could fail for various reasons; make your application resilient to such failures. 
[Return an HTTP 4XX error message](../../active-directory-b2c/restful-technical-profile.md#returning-validation-error-message) if the API is unable to complete the request. In the Azure AD B2C policy, try to gracefully handle the unavailability of the API and perhaps render a reduced experience. A minimal endpoint sketch that returns such an error appears at the end of this article. ++- [Handle transient errors gracefully](../../active-directory-b2c/restful-technical-profile.md#error-handling). The RESTful API profile allows you to configure error messages for various [circuit breakers](/azure/architecture/patterns/circuit-breaker). ++- Proactively monitor and, by using continuous integration/continuous delivery (CI/CD), rotate the API access credentials, such as passwords and certificates, used by the [technical profile engine](../../active-directory-b2c/restful-technical-profile.md). ++## API management - best practices ++When you deploy the REST APIs and configure the RESTful technical profile, following the recommended best practices helps you avoid common mistakes and oversights. ++### How to manage APIs ++- API Management (APIM) publishes, manages, and analyzes your APIs. APIM also handles authentication to provide secure access to backend services and microservices. Use an API gateway to scale out API deployments, caching, and load balancing. ++- We recommend getting the right token at the beginning of the user journey, instead of requesting it multiple times for each API, and that you [secure an Azure APIM API](../../active-directory-b2c/secure-api-management.md?tabs=app-reg-ga). ++## Next steps ++- [Resilience resources for Azure AD B2C developers](resilience-b2c.md) + - [Resilient end-user experience](resilient-end-user-experience.md) + - [Resilience through developer best practices](resilience-b2c-developer-best-practices.md) + - [Resilience through monitoring and analytics](resilience-with-monitoring-alerting.md) +- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md) +- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md) |
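As a rough companion to the 4XX error guidance above, here's a minimal Flask sketch of a sign-up API connector endpoint that either lets the user flow continue or returns a validation error. It's only a sketch under assumptions: the route, the contoso.com domain rule, and the exact response fields (version, action, status, userMessage) are illustrative and should be checked against the linked API connector and RESTful technical profile documentation.

```python
# Minimal sketch of a sign-up API connector endpoint. The JSON response shape
# (version/action/status/userMessage) follows the API connector contract as an
# assumption; verify field names against the linked documentation before use.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/validate-signup", methods=["POST"])
def validate_signup():
    payload = request.get_json(silent=True) or {}
    email = payload.get("email", "")

    # Illustrative business rule: only allow a known domain to sign up.
    if not email.endswith("@contoso.com"):
        # HTTP 4XX plus a user-friendly message the policy can surface.
        return jsonify({
            "version": "1.0.0",
            "action": "ValidationError",
            "status": 400,
            "userMessage": "Please sign up with your contoso.com email address.",
        }), 400

    # Let the user flow continue.
    return jsonify({"version": "1.0.0", "action": "Continue"}), 200

if __name__ == "__main__":
    app.run()
```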
active-directory | Road To The Cloud Establish | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-establish.md | + + Title: Road to the cloud - Establish a footprint for moving identity and access management from Active Directory to Azure AD +description: Establish an Azure AD footprint as part of planning your migration of IAM from Active Directory to Azure AD. +documentationCenter: '' +++++ Last updated : 07/27/2023++++# Establish an Azure AD footprint ++Before you migrate identity and access management (IAM) from Active Directory to Azure Active Directory (Azure AD), you need to set up Azure AD. ++## Required tasks ++If you're using Microsoft Office 365, Exchange Online, or Teams, then you're already using Azure AD. Your next step is to establish more Azure AD capabilities: ++* Establish hybrid identity synchronization between Active Directory and Azure AD by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md). ++* [Select authentication methods](../hybrid/choose-ad-authn.md). We strongly recommend password hash synchronization. ++* Secure your hybrid identity infrastructure by following [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md). ++## Optional tasks ++The following functions aren't specific or mandatory to move from Active Directory to Azure AD, but we recommend incorporating them into your environment. These items are also recommended in the [Zero Trust](/security/zero-trust/) guidance. ++### Deploy passwordless authentication ++In addition to the security benefits of [passwordless credentials](../authentication/concept-authentication-passwordless.md), passwordless authentication simplifies your environment because the management and registration experience is already native to the cloud. Azure AD provides passwordless credentials that align with various use cases. Use the information in this article to plan your deployment: [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md). ++After you roll out passwordless credentials to your users, consider reducing the use of password credentials. You can use the [reporting and insights dashboard](../authentication/howto-authentication-methods-activity.md) to continue to drive the use of passwordless credentials and reduce the use of passwords in Azure AD. ++>[!IMPORTANT] +>During your application discovery, you might find applications that have a dependency or assumptions around passwords. Users of these applications need to have access to their passwords until those applications are updated or migrated. ++### Configure hybrid Azure AD join for existing Windows clients ++You can configure hybrid Azure AD join for existing Active Directory-joined Windows clients to benefit from cloud-based security features such as [co-management](/mem/configmgr/comanage/overview), conditional access, and Windows Hello for Business. New devices should be Azure AD joined and not hybrid Azure AD joined. ++To learn more, check [Plan your hybrid Azure Active Directory join implementation](../devices/hybrid-azuread-join-plan.md). 
++## Next steps ++* [Introduction](road-to-the-cloud-introduction.md) +* [Cloud transformation posture](road-to-the-cloud-posture.md) +* [Implement a cloud-first approach](road-to-the-cloud-implement.md) +* [Transition to the cloud](road-to-the-cloud-migrate.md) |
active-directory | Road To The Cloud Implement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-implement.md | + + Title: Road to the cloud - Implement a cloud-first approach when moving identity and access management from Active Directory to Azure AD +description: Implement a cloud-first approach as part of planning your migration of IAM from Active Directory to Azure AD. +documentationCenter: '' +++++ Last updated : 07/27/2023++++# Implement a cloud-first approach ++This phase is mainly process and policy driven: stop, or limit as much as possible, adding new dependencies on Active Directory, and implement a cloud-first approach for new IT solution demands. ++It's key at this point to identify the internal processes that would lead to adding new dependencies on Active Directory. For example, most organizations would have a change management process that has to be followed before the implementation of new scenarios, features, and solutions. We strongly recommend making sure that these change approval processes are updated to: ++- Include a step to evaluate whether the proposed change would add new dependencies on Active Directory. +- Request the evaluation of Azure Active Directory (Azure AD) alternatives when possible. ++## Users and groups ++You can enrich user attributes in Azure AD to make more user attributes available to the scenarios that need them. Examples of common scenarios that require rich user attributes include: ++* App provisioning: The data source of app provisioning is Azure AD, and the necessary user attributes must be available there. ++* Application authorization: A token that Azure AD issues can include claims generated from user attributes so that applications can make authorization decisions based on the claims in the token. It can also contain attributes coming from external data sources through a [custom claims provider](../develop/custom-claims-provider-overview.md). ++* Group membership population and maintenance: Dynamic groups enable dynamic population of group membership based on user attributes, such as department information (see the sketch that follows). 
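To make the dynamic-group bullet concrete, here's a minimal Microsoft Graph sketch that creates a department-based dynamic security group. It's a hedged sketch, not part of the article's guidance: token acquisition is omitted, the caller is assumed to hold the Group.ReadWrite.All permission, and the group name and membership rule are placeholders.

```python
# Sketch: create a dynamic security group whose membership is driven by the
# "department" user attribute. Assumes an access token with Group.ReadWrite.All;
# the display name, mail nickname, and membership rule are placeholders.
import requests

access_token = "<token acquired via MSAL with Group.ReadWrite.All>"  # placeholder

group = {
    "displayName": "Sales department (dynamic)",
    "mailEnabled": False,
    "mailNickname": "sales-dynamic",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": 'user.department -eq "Sales"',
    "membershipRuleProcessingState": "On",
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json=group,
)
resp.raise_for_status()
print("Created dynamic group with ID", resp.json().get("id"))
```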
++These two links provide guidance on making schema changes: ++* [Understand the Azure AD schema and custom expressions](../cloud-sync/concept-attributes.md) ++* [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md) ++These links provide more information on this topic but aren't specific to changing the schema: ++* [Use Azure AD schema extension attributes in claims - Microsoft identity platform](../develop/active-directory-schema-extensions.md) ++* [What are custom security attributes in Azure AD (preview)?](../fundamentals/custom-security-attributes-overview.md) ++* [Customize Azure Active Directory attribute mappings in application provisioning](../app-provisioning/customize-application-attributes.md) ++* [Provide optional claims to Azure AD apps - Microsoft identity platform](../develop/active-directory-optional-claims.md) ++These links provide more information about groups: ++* [Create or edit a dynamic group and get status in Azure AD](../enterprise-users/groups-create-rule.md) ++* [Use self-service groups for user-initiated group management](../enterprise-users/groups-self-service-management.md) ++* [Attribute-based application provisioning with scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) or [What is Azure AD entitlement management?](../governance/entitlement-management-overview.md) (for application access) ++* [Compare groups](/microsoft-365/admin/create-groups/compare-groups) ++* [Restrict guest access permissions in Azure Active Directory](../enterprise-users/users-restrict-guest-permissions.md) ++You and your team might feel compelled to change your current employee provisioning to use cloud-only accounts at this stage. The effort is nontrivial but doesn't provide enough business value. We recommend that you plan this transition at a different phase of your transformation. ++## Devices ++Client workstations are traditionally joined to Active Directory and managed via Group Policy objects (GPOs) or device management solutions such as Microsoft Configuration Manager. Your teams will establish a new policy and process to prevent newly deployed workstations from being domain joined. Key points include: ++* Mandate [Azure AD join](../devices/concept-azure-ad-join.md) for new Windows client workstations to achieve "no more domain join." ++* Manage workstations from the cloud by using unified endpoint management (UEM) solutions such as [Intune](/mem/intune/fundamentals/what-is-intune). ++[Windows Autopilot](/mem/autopilot/windows-autopilot) can help you establish a streamlined onboarding and device provisioning, which can enforce these directives. ++[Windows Local Administrator Password Solution](../devices/howto-manage-local-admin-passwords.md) (LAPS) enables a cloud-first solution to manage the passwords of local administrator accounts. ++For more information, see [Learn more about cloud-native endpoints](/mem/cloud-native-endpoints-overview). ++## Applications ++Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can use Windows Integrated Authentication (Kerberos or NTLM), directory queries through LDAP, and server management through GPO or Microsoft Configuration Manager. ++The organization has a process to evaluate Azure AD alternatives when it's considering new services, apps, or infrastructure. Directives for a cloud-first approach to applications should be as follows. 
(New on-premises applications or legacy applications should be a rare exception when no modern alternative exists.) ++* Provide a recommendation to change the procurement policy and application development policy to require modern protocols (OIDC/OAuth2 and SAML) and authenticate by using Azure AD. New apps should also support [Azure AD app provisioning](../app-provisioning/what-is-hr-driven-provisioning.md) and have no dependency on LDAP queries. Exceptions require explicit review and approval. ++ > [!IMPORTANT] + > Depending on the anticipated demands of applications that require legacy protocols, you can choose to deploy [Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md) when more current alternatives won't work. ++* Provide a recommendation to create a policy to prioritize use of cloud-native alternatives. The policy should limit deployment of new application servers to the domain. Common cloud-native scenarios to replace Active Directory-joined servers include: ++ * File servers: ++ * SharePoint or OneDrive provides collaboration support across Microsoft 365 solutions and built-in governance, risk, security, and compliance. ++ * [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard SMB or NFS protocol. Customers can use native [Azure AD authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a domain controller. ++ * Azure AD works with third-party applications in the Microsoft [application gallery](/microsoft-365/enterprise/integrated-apps-and-azure-ads). ++ * Print servers: ++ * If your organization has a mandate to procure [Universal Print](/universal-print/)-compatible printers, see [Partner integrations](/universal-print/fundamentals/universal-print-partner-integrations). ++ * Bridge with the [Universal Print connector](/universal-print/fundamentals/universal-print-connector-overview) for incompatible printers. ++## Next steps ++* [Introduction](road-to-the-cloud-introduction.md) +* [Cloud transformation posture](road-to-the-cloud-posture.md) +* [Establish an Azure AD footprint](road-to-the-cloud-establish.md) +* [Transition to the cloud](road-to-the-cloud-migrate.md) |
active-directory | Road To The Cloud Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-introduction.md | + + Title: Road to the cloud - Introduction to moving identity and access management from AD to Azure AD +description: Learn how to plan a migration of IAM from Active Directory to Azure AD. +documentationCenter: '' +++++ Last updated : 07/27/2023++++# Road to the cloud: Introduction ++Some organizations set goals to remove Active Directory and their on-premises IT footprint. Others take advantage of some cloud-based capabilities to reduce the Active Directory footprint, but not to completely remove their on-premises environments. ++This content provides guidance to move: ++* *From* Active Directory and other non-cloud-based services, either on-premises or infrastructure as a service (IaaS), that provide identity management (IDM), identity and access management (IAM), and device management. ++* *To* Azure Active Directory (Azure AD) and other Microsoft cloud-native solutions for IDM, IAM, and device management. ++>[!NOTE] +> In this content, *Active Directory* refers to Windows Server Active Directory Domain Services. ++Transformation must be aligned with and achieve business objectives, including increased productivity, reduced costs and complexity, and improved security posture. To better understand the costs versus value of moving to the cloud, see [Forrester TEI for Microsoft Azure Active Directory](https://www.microsoft.com/security/business/forrester-tei-study) and [Cloud economics](https://azure.microsoft.com/overview/cloud-economics/). ++## Next steps ++* [Cloud transformation posture](road-to-the-cloud-posture.md) +* [Establish an Azure AD footprint](road-to-the-cloud-establish.md) +* [Implement a cloud-first approach](road-to-the-cloud-implement.md) +* [Transition to the cloud](road-to-the-cloud-migrate.md) |
active-directory | Road To The Cloud Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-migrate.md | + + Title: Road to the cloud - Move identity and access management from Active Directory to an Azure AD migration workstream +description: Learn to plan your migration workstream of IAM from Active Directory to Azure AD. +documentationCenter: '' +++++ Last updated : 07/27/2023++++# Transition to the cloud ++After you align your organization toward halting growth of the Active Directory footprint, you can focus on moving the existing on-premises workloads to Azure Active Directory (Azure AD). This article describes the various migration workstreams. You can execute the workstreams in this article based on your priorities and resources. ++A typical migration workstream has the following stages: ++* **Discover**: Find out what you currently have in your environment. ++* **Pilot**: Deploy new cloud capabilities to a small subset of users, applications, or devices, depending on the workstream. ++* **Scale out**: Expand the pilot to complete the transition of a capability to the cloud. ++* **Cut over (when applicable)**: Stop using the old on-premises workload. ++## Users and groups ++### Enable password self-service ++We recommend a [passwordless environment](../authentication/concept-authentication-passwordless.md). Until then, you can migrate password self-service workflows from on-premises systems to Azure AD to simplify your environment. Azure AD [self-service password reset (SSPR)](../authentication/concept-sspr-howitworks.md) gives users the ability to change or reset their password, with no administrator or help desk involvement. ++To enable self-service capabilities, choose the appropriate [authentication methods](../authentication/concept-authentication-methods.md) for your organization. After the authentication methods are updated, you can enable user self-service password capability for your Azure AD authentication environment. For deployment guidance, see [Deployment considerations for Azure Active Directory self-service password reset](../authentication/howto-sspr-deployment.md). ++Additional considerations include: ++* Deploy [Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of domain controllers with **Audit** mode to gather information about the impact of modern policies. +* Gradually enable [combined registration for SSPR and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md). For example, roll out by region, subsidiary, or department for all users. +* Go through a cycle of password change for all users to flush out weak passwords. After the cycle is complete, implement the policy expiration time. +* Switch the Password Protection configuration in the domain controllers that have the mode set to **Enforced**. For more information, see [Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md). ++>[!NOTE] +>* We recommend user communications and evangelizing for a smooth deployment. See [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768). +>* If you use Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) for users marked as risky. 
++### Move management of groups ++To transform groups and distribution lists: ++* For security groups, use your existing business logic that assigns users to security groups. Migrate the logic and capability to Azure AD and dynamic groups. ++* For self-managed group capabilities provided by Microsoft Identity Manager, replace the capability with self-service group management. ++* You can [convert distribution lists to Microsoft 365 groups](/microsoft-365/admin/manage/upgrade-distribution-lists) in Outlook. This approach is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups. ++* Upgrade your [distribution lists to Microsoft 365 groups in Outlook](https://support.microsoft.com/office/7fb3d880-593b-4909-aafa-950dd50ce188) and [decommission your on-premises Exchange server](/exchange/decommission-on-premises-exchange). ++### Move provisioning of users and groups to applications ++You can simplify your environment by removing application provisioning flows from on-premises identity management (IDM) systems such as Microsoft Identity Manager. Based on your application discovery, categorize your application based on the following characteristics: ++* Applications in your environment that have a provisioning integration with the [Azure AD application gallery](https://www.microsoft.com/security/business/identity-access-management/integrated-apps-azure-ad). ++* Applications that aren't in the gallery but support the SCIM 2.0 protocol. These applications are natively compatible with the Azure AD cloud provisioning service. ++* On-premises applications that have an ECMA connector available. These applications can be integrated with [Azure AD on-premises application provisioning](../app-provisioning/on-premises-application-provisioning-architecture.md). ++For more information, check [Plan an automatic user-provisioning deployment for Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md). ++### Move to cloud HR provisioning ++You can reduce your on-premises footprint by moving the HR provisioning workflows from on-premises IDM systems, such as Microsoft Identity Manager, to Azure AD. Two account types are available for Azure AD cloud HR provisioning: ++* For new employees who are exclusively using applications that use Azure AD, you can choose to provision *cloud-only accounts*. This provisioning helps you contain the footprint of Active Directory. ++* For new employees who need access to applications that have dependency on Active Directory, you can provision *hybrid accounts*. ++Azure AD cloud HR provisioning can also manage Active Directory accounts for existing employees. For more information, see [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) and [Plan the deployment project](../app-provisioning/plan-auto-user-provisioning.md). ++### Move lifecycle workflows ++Evaluate your existing joiner/mover/leaver workflows and processes for applicability and relevance to your Azure AD cloud environment. You can then simplify these workflows and [create new ones](../governance/create-lifecycle-workflow.md) using [lifecycle workflows](../governance/what-are-lifecycle-workflows.md). 
++### Move external identity management ++If your organization provisions accounts in Active Directory or other on-premises directories for external identities such as vendors, contractors, or consultants, you can simplify your environment by managing those third-party user objects natively in the cloud. Here are some possibilities: ++* For new external users, use [Azure AD External Identities](../external-identities/external-identities-overview.md), which avoids creating an Active Directory footprint for those users. ++* For existing Active Directory accounts that you provision for external identities, you can remove the overhead of managing local credentials (for example, passwords) by configuring them for business-to-business (B2B) collaboration. Follow the steps in [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md). ++* Use [Azure AD entitlement management](../governance/entitlement-management-overview.md) to grant access to applications and resources. Most companies have dedicated systems and workflows for this purpose that you can now move out of on-premises tools. ++* Use [access reviews](../governance/access-reviews-external-users.md) to remove access rights and/or external identities that are no longer needed. ++## Devices ++### Move non-Windows workstations ++You can integrate non-Windows workstations with Azure AD to enhance the user experience and to benefit from cloud-based security features such as Conditional Access. ++* For macOS: ++ * Register macOS devices in Azure AD and [enroll/manage them by using a mobile device management solution](/mem/intune/enrollment/macos-enroll). ++ * Deploy the [Microsoft Enterprise SSO (single sign-on) plug-in for Apple devices](../develop/apple-sso-plugin.md). ++ * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-endpoint-manager-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319). ++* For Linux, you can [sign in to a Linux virtual machine (VM) by using Azure Active Directory credentials](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md). ++### Replace other Windows versions for workstations ++If you have the following operating systems on workstations, consider upgrading to the latest versions to benefit from cloud-native management (Azure AD join and unified endpoint management): ++* Windows 7 or 8.x ++* Windows Server ++### VDI solution ++This project has two primary initiatives: ++* **New deployments**: Deploy a cloud-managed virtual desktop infrastructure (VDI) solution, such as Windows 365 or Azure Virtual Desktop, that doesn't require on-premises Active Directory. ++* **Existing deployments**: If your existing VDI deployment is dependent on Active Directory, use business objectives and goals to determine whether you maintain the solution or migrate it to Azure AD. ++For more information, see: ++* [Deploy Azure AD-joined VMs in Azure Virtual Desktop](../../virtual-desktop/deploy-azure-ad-joined-vm.md) ++* [Windows 365 planning guide](/windows-365/enterprise/planning-guide) ++## Applications ++To help maintain a secure environment, Azure AD supports modern authentication protocols. To transition application authentication from Active Directory to Azure AD, you must: ++* Determine which applications can migrate to Azure AD with no modification. ++* Determine which applications have an upgrade path that enables you to migrate with an upgrade. ++* Determine which applications require replacement or significant code changes to migrate. 
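++One way to begin that determination, sketched below with the Microsoft Graph PowerShell SDK, is to export the enterprise applications (service principals) already represented in the tenant as a baseline inventory. The selected properties and output path are illustrative assumptions; combine the export with the discovery sources described next.

```powershell
# Minimal sketch: export enterprise applications (service principals) as a baseline app inventory.
# Assumes the Microsoft Graph PowerShell SDK and a consented Application.Read.All permission.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgServicePrincipal -All |
    Select-Object DisplayName, AppId, ServicePrincipalType, PreferredSingleSignOnMode, AccountEnabled |
    Sort-Object DisplayName |
    Export-Csv -Path .\app-inventory.csv -NoTypeInformation
```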
++The outcome of your application discovery initiative is to create a prioritized list for migrating your application portfolio. The list contains applications that: ++* Require an upgrade or update to the software, and an upgrade path is available. ++* Require an upgrade or update to the software, but an upgrade path isn't available. ++By using the list, you can further evaluate the applications that don't have an existing upgrade path. Determine whether business value warrants updating the software or if it should be retired. If the software should be retired, decide whether you need a replacement. ++Based on the results, you might redesign aspects of your transformation from Active Directory to Azure AD. There are approaches that you can use to extend on-premises Active Directory to Azure infrastructure as a service (IaaS) (lift and shift) for applications with unsupported authentication protocols. We recommend that you set a policy that requires an exception to use this approach. ++### Application discovery ++After you've segmented your app portfolio, you can prioritize migration based on business value and business priority. You can use tools to create or refresh your app inventory. ++There are three main ways to categorize your apps: ++* **Modern authentication apps**: These applications use modern authentication protocols (such as OIDC, OAuth2, SAML, or WS-Federation) or use a federation service such as Active Directory Federation Services (AD FS). ++* **Web access management (WAM) tools**: These applications use headers, cookies, and similar techniques for SSO. These apps typically require a WAM identity provider, such as Symantec SiteMinder. ++* **Legacy apps**: These applications use legacy protocols such as Kerberos, LDAP, RADIUS, Remote Desktop, and NTLM (not recommended). ++Azure AD can be used with each type of application to provide levels of functionality that result in different migration strategies, complexity, and trade-offs. Some organizations have an application inventory that can be used as a discovery baseline. (It's common that this inventory isn't complete or updated.) ++To discover modern authentication apps: ++* If you're using AD FS, use the [AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md). ++* If you're using a different identity provider, use the logs and configuration. ++The following tools can help you discover applications that use LDAP: ++* [Event1644Reader](/troubleshoot/windows-server/identity/event1644reader-analyze-ldap-query-performance): Sample tool for collecting data on LDAP queries made to domain controllers by using field engineering logs. ++* [Microsoft Defender for Identity](/defender-for-identity/monitored-activities): Security solution that uses a sign-in operations monitoring capability. (Note that it captures binds by using LDAP, not Secure LDAP.) ++* [PSLDAPQueryLogging](https://github.com/RamblingCookieMonster/PSLDAPQueryLogging): GitHub tool for reporting on LDAP queries. ++### Migrate AD FS or other federation services ++When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and OpenID Connect) first. You can reconfigure these apps to authenticate with Azure AD either via a built-in connector from the Azure App Gallery or via registration in Azure AD. 
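++If an app will be reconfigured through direct registration rather than a gallery connector, the following sketch shows the basic steps with the Microsoft Graph PowerShell SDK: create the app registration and its service principal. The display name and redirect URI are illustrative assumptions, and SAML-based gallery apps follow a different, template-based flow.

```powershell
# Minimal sketch: register an application for OpenID Connect sign-in and create its service principal.
# Assumes the Microsoft Graph PowerShell SDK and a consented Application.ReadWrite.All permission.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$app = New-MgApplication -DisplayName "Contoso Intranet (migrated)" `
    -SignInAudience "AzureADMyOrg" `
    -Web @{ RedirectUris = @("https://intranet.contoso.com/signin-oidc") }

# The service principal is the tenant-local object used for user assignment and Conditional Access.
New-MgServicePrincipal -AppId $app.AppId
```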
++After you move SaaS applications that were federated to Azure AD, there are a few steps to decommission the on-premises federation system: ++* [Move application authentication to Azure Active Directory](../manage-apps/migrate-adfs-apps-to-azure.md) ++* [Migrate from Azure AD Multi-Factor Authentication Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md) ++* [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md) ++* [Move remote access to internal applications](#move-remote-access-to-internal-applications), if you're using Azure AD Application Proxy ++>[!IMPORTANT] +>If you're using other features, verify that those services are relocated before you decommission Active Directory Federation Services. ++### Move WAM authentication apps ++This project focuses on migrating SSO capability from WAM systems to Azure AD. To learn more, see [Migrate applications from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/resources/migrating-applications-from-symantec-siteminder-to-azure-active-directory/). ++### Define an application server management strategy ++In terms of infrastructure management, on-premises environments often use a combination of Group Policy objects (GPOs) and Microsoft Configuration Manager features to segment management duties. For example, duties can be segmented into security policy management, update management, configuration management, and monitoring. ++Active Directory is for on-premises IT environments, and Azure AD is for cloud-based IT environments. One-to-one parity of features isn't present here, so you can manage application servers in several ways. ++For example, Azure Arc helps bring many of the features that exist in Active Directory together into a single view when you use Azure AD for identity and access management (IAM). You can also use Azure Active Directory Domain Services (Azure AD DS) to domain-join servers in Azure AD, especially when you want those servers to use GPOs for specific business or technical reasons. ++Use the following table to determine what Azure-based tools you can use to replace the on-premises environment: ++| Management area | On-premises (Active Directory) feature | Equivalent Azure AD feature | +| - | - | -| +| Security policy management| GPO, Microsoft Configuration Manager| [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) | +| Update management| Microsoft Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](../../automation/update-management/overview.md) | +| Configuration management| GPO, Microsoft Configuration Manager| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) | +| Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) | ++Here's more information that you can use for application server management: ++* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) enables Azure features for non-Azure VMs. For example, you can use it to get Azure features for Windows Server when it's used on-premises or on Amazon Web Services, or [authenticate to Linux machines with SSH](/azure/azure-arc/servers/ssh-arc-overview?tabs=azure-cli). ++* [Manage and secure your Azure VM environment](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/). 
++* If you must wait to migrate or perform a partial migration, you can use GPOs with [Azure AD DS](https://azure.microsoft.com/services/active-directory-ds/). ++If you require management of application servers with Microsoft Configuration Manager, you can't achieve this requirement by using Azure AD DS. Microsoft Configuration Manager isn't supported to run in an Azure AD DS environment. Instead, you need to extend your on-premises Active Directory instance to a domain controller running on an Azure VM. Or, you need to deploy a new Active Directory instance to an Azure IaaS virtual network. ++### Define the migration strategy for legacy applications ++Legacy applications have dependencies on Active Directory like the following: ++* User authentication and authorization: Kerberos, NTLM, LDAP bind, ACLs. ++* Access to directory data: LDAP queries, schema extensions, read/write of directory objects. ++* Server management: As determined by the [server management strategy](#define-an-application-server-management-strategy). ++To reduce or eliminate those dependencies, you have three main approaches. ++#### Approach 1 ++In the most preferred approach, you undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly: ++1. Deploy Azure AD DS into an Azure virtual network and [extend the schema](/azure/active-directory-domain-services/concepts-custom-attributes) to incorporate additional attributes needed by the applications. ++2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to Azure AD DS. ++3. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner. ++4. As legacy apps retire through attrition, eventually decommission Azure AD DS running in the Azure virtual network. ++>[!NOTE] +>* Use Azure AD DS if the dependencies are aligned with [common deployment scenarios for Azure AD DS](../../active-directory-domain-services/scenarios.md). +>* To validate whether Azure AD DS is a good fit, you might use tools like [Service Map in Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [automatic dependency mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867). +>* Validate that your SQL Server instances can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide). ++#### Approach 2 ++If the first approach isn't possible and an application has a strong dependency on Active Directory, you can extend on-premises Active Directory to Azure IaaS. ++You can replatform to support modern serverless hosting, for example, platform as a service (PaaS). Or, you can update the code to support modern authentication. You can also enable the app to integrate with Azure AD directly. [Learn about Microsoft Authentication Library in the Microsoft identity platform](../develop/msal-overview.md). ++1. Connect an Azure virtual network to the on-premises network via virtual private network (VPN) or Azure ExpressRoute. ++2. 
Deploy new domain controllers for the on-premises Active Directory instance as virtual machines into the Azure virtual network. ++3. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined. ++4. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner. ++5. Eventually, decommission the on-premises Active Directory infrastructure and run Active Directory in the Azure virtual network entirely. ++6. As legacy apps retire through attrition, eventually decommission the Active Directory instance running in the Azure virtual network. ++#### Approach 3 ++If the first approach isn't possible and an application has a strong dependency on Active Directory, you can deploy a new Active Directory instance to Azure IaaS. Leave the applications as legacy applications for the foreseeable future, or sunset them when the opportunity arises. ++This approach enables you to decouple the app from the existing Active Directory instance to reduce surface area. We recommend that you consider it only as a last resort. ++1. Deploy a new Active Directory instance as virtual machines in an Azure virtual network. ++2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to the new Active Directory instance. ++3. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner. ++4. As legacy apps retire through attrition, eventually decommission the Active Directory instance running in the Azure virtual network. ++#### Comparison of strategies ++| Strategy | Azure AD DS | Extend Active Directory to IaaS | Independent Active Directory instance in IaaS | +| - | - | - | - | +| Decoupling from on-premises Active Directory| Yes| No| Yes | +| Allowing schema extensions| No| Yes| Yes | +| Full administrative control| No| Yes| Yes | +| Potential reconfiguration of apps required (for example, ACLs or authorization)| Yes| No| Yes | ++### Move VPN authentication ++This project focuses on moving your VPN authentication to Azure AD. It's important to know that different configurations are available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN gateway design](../../vpn-gateway/design.md). ++Here are key points about using Azure AD for VPN authentication: ++* Check whether your VPN providers support modern authentication. For example: ++ * [Tutorial: Azure Active Directory SSO integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md) ++ * [Tutorial: Azure Active Directory SSO integration with Palo Alto Networks GlobalProtect](../saas-apps/palo-alto-networks-globalprotect-tutorial.md) ++* For Windows 10 devices, consider integrating [Azure AD support into the built-in VPN client](/windows-server/remote/remote-access/vpn/ad-ca-vpn-connectivity-windows10). ++* After you evaluate this scenario, you can implement a solution to remove your dependency on on-premises infrastructure for VPN authentication. ++### Move remote access to internal applications ++To simplify your environment, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) or [secure hybrid access](../manage-apps/secure-hybrid-access.md) partners to provide remote access. This allows you to remove the dependency on on-premises reverse proxy solutions. 
++Enabling remote access to an application by using the preceding technologies is an interim step. You need to do more work to completely decouple the application from Active Directory. ++Azure AD DS allows you to migrate application servers to cloud IaaS and decouple from Active Directory, while using Azure AD Application Proxy to enable remote access. To learn more about this scenario, see [Deploy Azure AD Application Proxy for Azure Active Directory Domain Services](../../active-directory-domain-services/deploy-azure-app-proxy.md). ++## Next steps ++* [Introduction](road-to-the-cloud-introduction.md) +* [Cloud transformation posture](road-to-the-cloud-posture.md) +* [Establish an Azure AD footprint](road-to-the-cloud-establish.md) +* [Implement a cloud-first approach](road-to-the-cloud-implement.md) |
active-directory | Road To The Cloud Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-posture.md | + + Title: Road to the cloud - Determine cloud transformation posture when moving identity and access management from Active Directory to Azure AD +description: Determine your cloud transformation posture when planning your migration of IAM from Active Directory to Azure AD. +documentationCenter: '' +++++ Last updated : 07/27/2023++++# Cloud transformation posture ++Active Directory, Azure Active Directory (Azure AD), and other Microsoft tools are at the core of identity and access management (IAM). For example, Active Directory Domain Services (AD DS) and Microsoft Configuration Manager provide device management in Active Directory. In Azure AD, Intune provides the same capability. ++As part of most modernization, migration, or Zero Trust initiatives, organizations shift IAM activities from using on-premises or infrastructure-as-a-service (IaaS) solutions to using built-for-the-cloud solutions. For an IT environment that uses Microsoft products and services, Active Directory and Azure AD play a role. ++Many companies that migrate from Active Directory to Azure AD start with an environment that's similar to the following diagram. The diagram overlays three pillars: ++* **Applications**: Includes applications, resources, and their underlying domain-joined servers. ++* **Devices**: Focuses on domain-joined client devices. ++* **Users and Groups**: Represents human and workload identities and attributes for resource access and group membership for governance and policy creation. ++[![Architectural diagram that depicts the common technologies contained in the pillars of applications, devices, and users.](media/road-to-cloud-posture/road-to-the-cloud-start.png)](media/road-to-cloud-posture/road-to-the-cloud-start.png#lightbox) ++Microsoft has modeled five states of transformation that commonly align with the business goals of customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resources and culture. ++The five states have exit criteria to help you determine where your environment resides today. Some projects, such as application migration, span all five states. Other projects span a single state. ++The content then provides more detailed guidance that's organized to help with intentional changes to people, process, and technology. The guidance can help you: ++* Establish an Azure AD footprint. ++* Implement a cloud-first approach. ++* Start to migrate out of your Active Directory environment. ++Guidance is organized by user management, device management, and application management according to the preceding pillars. ++Organizations that are formed in Azure AD rather than in Active Directory don't have the legacy on-premises environment that more established organizations must contend with. For them, or for customers who are completely re-creating their IT environment in the cloud, becoming 100 percent cloud-centric can happen as the new IT environment is established. ++For customers who have an established on-premises IT capability, the transformation process introduces complexity that requires careful planning. Also, because Active Directory and Azure AD are separate products targeted at different IT environments, they don't have like-for-like features. For example, Azure AD doesn't have the notion of Active Directory domain and forest trusts. 
++## Five states of transformation ++In enterprise-sized organizations, IAM transformation, or even transformation from Active Directory to Azure AD, is typically a multi-year effort with multiple states. You analyze your environment to determine your current state, and then set a goal for your next state. Your goal might remove the need for Active Directory entirely, or you might decide not to migrate some capability to Azure AD and leave it in place. ++The states logically group initiatives into projects toward completing a transformation. During the state transitions, you put interim solutions in place. The interim solutions enable the IT environment to support IAM operations in both Active Directory and Azure AD. The interim solutions must also enable the two environments to interoperate. ++The following diagram shows the five states: ++[![Diagram that shows five network architectures: cloud attached, hybrid, cloud first, Active Directory minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png)](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox) ++>[!NOTE] +> The states in this diagram represent a logical progression of cloud transformation. Your ability to move from one state to the next depends on the functionality that you've implemented and the capabilities within that functionality to move to the cloud. ++### State 1: Cloud attached ++In the cloud-attached state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools. The tenant is fully operational. ++Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state, operational costs might be higher because there's an on-premises environment and a cloud environment to maintain and make interactive. People must have expertise in both environments to support their users and the organization. ++In this state: ++* Devices are joined to Active Directory and managed through Group Policy or on-premises device management tools. +* Users are managed in Active Directory, provisioned via on-premises identity management (IDM) systems, and synchronized to Azure AD through Azure AD Connect. +* Apps are authenticated to Active Directory and to federation servers like Active Directory Federation Services (AD FS) through a web access management (WAM) tool, Microsoft 365, or other tools such as SiteMinder and Oracle Access Manager. ++### State 2: Hybrid ++In the hybrid state, organizations start to enhance their on-premises environment through cloud capabilities. The solutions can be planned to reduce complexity, increase security posture, and reduce the footprint of the on-premises environment. ++During the transition and while operating in this state, organizations grow the skills and expertise for using Azure AD for IAM solutions. Because user accounts and device attachments are relatively easy and a common part of day-to-day IT operations, most organizations have used this approach. ++In this state: ++* Windows clients are hybrid Azure AD joined. ++* Non-Microsoft platforms based on software as a service (SaaS) start being integrated with Azure AD. Examples are Salesforce and ServiceNow. ++* Legacy apps are authenticating to Azure AD via Application Proxy or partner solutions that offer secure hybrid access. ++* Self-service password reset (SSPR) and password protection for users are enabled. 
++* Some legacy apps are authenticated in the cloud through Azure AD DS and Application Proxy. ++### State 3: Cloud first ++In the cloud-first state, the teams across the organization build a track record of success and start planning to move more challenging workloads to Azure AD. Organizations typically spend the most time in this state of transformation. As complexity, the number of workloads, and the use of Active Directory grow over time, an organization needs to increase its effort and its number of initiatives to shift to the cloud. ++In this state: ++* New Windows clients are joined to Azure AD and are managed through Intune. +* ECMA connectors are used to provision users and groups for on-premises apps. +* All apps that previously used an AD DS-integrated federated identity provider, such as AD FS, are updated to use Azure AD for authentication. If you used password-based authentication through that identity provider for Azure AD, it's migrated to password hash synchronization. +* Plans to shift file and print services to Azure AD are being developed. +* Azure AD provides a business-to-business (B2B) collaboration capability. +* New groups are created and managed in Azure AD. ++### State 4: Active Directory minimized ++Azure AD provides most IAM capability, whereas edge cases and exceptions continue to use on-premises Active Directory. A state of minimizing Active Directory is more difficult to achieve, especially for larger organizations that have significant on-premises technical debt. ++Azure AD continues to evolve as your organization's transformation matures, bringing new features and tools that you can use. Organizations are required to deprecate capabilities or build new capabilities to provide replacement. ++In this state: ++* New users provisioned through the HR provisioning capability are created directly in Azure AD. ++* A plan to move apps that depend on Active Directory and are part of the vision for the future-state Azure AD environment is being executed. A plan to replace services that won't move (file, print, or fax services) is in place. ++* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, or Universal Print. Azure SQL Managed Instance replaces SQL Server. ++### State 5: 100% cloud ++In the 100%-cloud state, Azure AD and other Azure tools provide all IAM capability. This state is the long-term aspiration for many organizations. ++In this state: ++* No on-premises IAM footprint is required. ++* All devices are managed in Azure AD and cloud solutions such as Intune. ++* The user identity lifecycle is managed through Azure AD. ++* All users and groups are cloud native. ++* Network services that rely on Active Directory are relocated. ++## Transformation analogy ++The transformation between the states is similar to moving locations: ++1. **Establish a new location**: You purchase your destination and establish connectivity between the current location and the new location. These activities enable you to maintain your productivity and ability to operate. For more information, see [Establish an Azure AD footprint](road-to-the-cloud-establish.md). The results transition you to state 2. ++1. **Limit new items in the old location**: You stop investing in the old location and set a policy to stage new items in the new location. For more information, see [Implement a cloud-first approach](road-to-the-cloud-implement.md). These activities set the foundation to migrate at scale and reach state 3. ++1. 
**Move existing items to the new location**: You move items from the old location to the new location. You assess the business value of the items to determine if you move them as is, upgrade them, replace them, or deprecate them. For more information, see [Transition to the cloud](road-to-the-cloud-migrate.md). ++ These activities enable you to complete state 3 and reach states 4 and 5. Based on your business objectives, you decide what end state you want to target. ++Transformation to the cloud isn't only the identity team's responsibility. The organization needs coordination across teams to define policies that include people and process change, along with technology. Using a coordinated approach helps ensure consistent progress and reduces the risk of regressing to on-premises solutions. Involve teams that manage: ++* Devices/endpoints +* Networks +* Security/risk +* Application owners +* Human resources +* Collaboration +* Procurement +* Operations ++### High-level journey ++As organizations start a migration of IAM to Azure AD, they must determine the prioritization of efforts based on their specific needs. Operational staff and support staff must be trained to perform their jobs in the new environment. The following chart shows the high-level journey for migration from Active Directory to Azure AD: +++* **Establish an Azure AD footprint**: Initialize your new Azure AD tenant to support the vision for your end-state deployment. Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [helps protect your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey. ++* **Implement a cloud-first approach**: Establish a policy that all new devices, apps, and services should be cloud-first. New applications and services that use legacy protocols (for example, NTLM, Kerberos, or LDAP) should be by exception only. ++* **Transition to the cloud**: Shift the management and integration of users, apps, and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD. ++The transformation changes how users accomplish tasks and how support teams provide user support. The organization should design and implement initiatives or projects in a way that minimizes the impact on user productivity. ++As part of the transformation, the organization introduces self-service IAM capabilities. Some parts of the workforce more easily adapt to the self-service user environment that's prevalent in cloud-based businesses. ++Aging applications might need to be updated or replaced to operate well in cloud-based IT environments. Application updates or replacements can be costly and time-consuming. The planning and other stages must also take the age and capability of the organization's applications into account. ++## Next steps ++* [Introduction](road-to-the-cloud-introduction.md) +* [Establish an Azure AD footprint](road-to-the-cloud-establish.md) +* [Implement a cloud-first approach](road-to-the-cloud-implement.md) +* [Transition to the cloud](road-to-the-cloud-migrate.md) |
active-directory | Secure Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-best-practices.md | + + Title: Best practices to secure with Azure Active Directory +description: Best practices we recommend you follow to secure your isolated environments in Azure Active Directory. +++++++ Last updated : 7/5/2022+++++++# Best practices for all isolation architectures ++The following are design considerations for all isolation configurations. Throughout this content, there are many links. We link to content, rather than duplicate it here, so you'll always have access to the most up-to-date information. ++For general guidance on how to configure Azure Active Directory (Azure AD) tenants (isolated or not), refer to the [Azure AD feature deployment guide](../fundamentals/active-directory-deployment-checklist-p2.md). ++>[!NOTE] +>For all isolated tenants, we suggest you use clear and differentiated branding to help avoid the human error of working in the wrong tenant. ++## Isolation security principles ++When designing isolated environments, it's important to consider the following principles: ++* **Use only modern authentication** - Applications deployed in isolated environments must use claims-based modern authentication (for example, SAML, OAuth2, and OpenID Connect) to use capabilities such as federation, Azure AD B2B collaboration, delegation, and the consent framework. This way, legacy applications that have a dependency on legacy authentication methods such as NT LAN Manager (NTLM) won't carry forward in isolated environments. ++* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, use [passwordless authentication](../authentication/concept-authentication-passwordless.md) such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key.md) (see the Conditional Access sketch after this list). ++* **Deploy secure workstations** - [Secure workstations](/security/compass/privileged-access-devices) provide the mechanism to ensure that the platform and the identity that platform represents is properly attested and secured against exploitation. Two other approaches to consider are: ++ * Use Windows 365 Cloud PCs (Cloud PC) with the Microsoft Graph API. ++ * Use [Conditional Access](../conditional-access/concept-condition-filters-for-devices.md) and filter for devices as a condition. ++* **Eliminate legacy trust mechanisms** - Isolated directories and services shouldn't establish trust relationships with other environments through legacy mechanisms such as Active Directory trusts. All trusts between environments should be established with modern constructs such as federation and claims-based identity. ++* **Isolate services** - Minimize the attack surface by protecting underlying identities and service infrastructure from exposure. Enable access only through modern authentication for services and secure remote access (also protected by modern authentication) for the infrastructure. ++* **Directory-level role assignments** - Avoid or reduce the number of directory-level role assignments (for example, User Administrator at directory scope instead of administrative unit scope) and service-specific directory roles with control plane actions (for example, Knowledge Admin with permissions to manage security group memberships). 
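++As a starting point for the strong-authentication principle, the following is a minimal sketch that creates a report-only Conditional Access policy requiring multifactor authentication for all users in the isolated tenant, using the Microsoft Graph PowerShell SDK. The name, scope, and state are illustrative assumptions; exclude your emergency access accounts and review the report-only results before enforcing.

```powershell
# Minimal sketch: report-only Conditional Access policy that requires MFA for all users.
# Assumes the Microsoft Graph PowerShell SDK and a consented Policy.ReadWrite.ConditionalAccess permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName   = "Isolated tenant - require MFA (report-only)"
    state         = "enabledForReportingButNotEnforced"   # switch to "enabled" after review
    conditions    = @{
        users        = @{ includeUsers = @("All") }        # add excludeUsers for break-glass accounts
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```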
++In addition to the guidance in the [Azure Active Directory general operations guide](../fundamentals/ops-guide-ops.md), we also recommend the following considerations for isolated environments. ++## Human identity provisioning ++### Privileged Accounts ++Provision accounts in the isolated environment for administrative personnel and IT teams who operate the environment. This enables you to add stronger security policies such as device-based access control for [secure workstations](/security/compass/privileged-access-deployment). As discussed in previous sections, nonproduction environments can potentially utilize Azure AD B2B collaboration to onboard privileged accounts to the non-production tenants using the same posture and security controls designed for privileged access in their production environment. ++Cloud-only accounts are the simplest way to provision human identities in an Azure AD tenant, and they're a good fit for greenfield environments. However, if there's an existing on-premises infrastructure that corresponds to the isolated environment (for example, a pre-production or management Active Directory forest), you could consider synchronizing identities from there. This holds especially true if the on-premises infrastructure described herein is used for IaaS solutions that require server access to manage the solution data plane. For more information on this scenario, see [Protecting Microsoft 365 from on-premises attacks](../fundamentals/protect-m365-from-on-premises-attacks.md). Synchronizing from isolated on-premises environments might also be needed if there are specific regulatory compliance requirements such as smart-card only authentication. ++>[!NOTE] +>There are no technical controls to do identity proofing for Azure AD B2B accounts. External identities provisioned with Azure AD B2B are bootstrapped with a single factor. The mitigation is for the organization to have a process to proof the required identities prior to a B2B invitation being issued, and regular access reviews of external identities to manage the lifecycle. Consider enabling a Conditional Access policy to control the MFA registration. ++### Outsourcing high-risk roles ++To mitigate insider threats, it's possible to outsource access to the Global Administrator and Privileged Role Administrator roles to a managed service provider by using Azure AD B2B collaboration, or by delegating access through a CSP partner or Azure Lighthouse. This access can be controlled by in-house staff via approval flows in Azure AD Privileged Identity Management (PIM). This approach can greatly reduce insider threats and is one way to meet compliance demands for customers that have concerns. ++## Nonhuman identity provisioning ++### Emergency access accounts ++Provision [emergency access accounts](../roles/security-emergency-access.md) for "break glass" scenarios where normal administrative accounts can't be used in the event you're accidentally locked out of your Azure AD organization. For on-premises environments using federation systems such as Active Directory Federation Services (AD FS) for authentication, maintain alternate cloud-only credentials for your global administrators to ensure service delivery during an on-premises infrastructure outage. ++### Azure managed identities ++Use [Azure managed identities](../managed-identities-azure-resources/overview.md) for Azure resources that require a service identity. 
Check the [list of services that support managed identities](../managed-identities-azure-resources/managed-identities-status.md) when designing your Azure solutions. ++If managed identities aren't supported or aren't possible, consider [provisioning service principal objects](../develop/app-objects-and-service-principals.md). ++### Hybrid service accounts ++Some hybrid solutions might require access to both on-premises and cloud resources. An example of a use case would be an Identity Governance solution that uses a service account on premises for access to AD DS and requires access to Azure AD. ++On-premises service accounts typically don't have the ability to sign in interactively, which means that in cloud scenarios they can't fulfill strong credential requirements such as multi-factor authentication (MFA). In this scenario, don't use a service account that has been synced from on-premises; instead, use a managed identity or a service principal. For a service principal (SP), use a certificate as a credential, or [protect the SP with Conditional Access](../conditional-access/workload-identity.md). ++If technical constraints make this impossible and the same account must be used for both on-premises and cloud, then implement compensating controls such as Conditional Access to lock down the hybrid account to come from a specific network location. ++## Resource assignment ++An enterprise solution may be composed of multiple Azure resources, and its access should be managed and governed as a logical unit of assignment: a resource group. In that scenario, Azure AD security groups can be created and associated with the proper permissions and role assignment across all solution resources, so that adding or removing users from those groups results in allowing or denying access to the entire solution. ++We recommend you use security groups to grant access to Microsoft services that rely on licensing to provide access (for example, Dynamics 365, Power BI). ++Azure AD cloud-native groups can be natively governed from the cloud when combined with [Azure AD access reviews](../governance/access-reviews-overview.md) and [Azure AD entitlement management](../governance/access-reviews-overview.md). Organizations that already have on-premises group governance tools can continue to use those tools and rely on identity synchronization with Azure AD Connect to reflect group membership changes. ++Azure AD also supports direct user assignment to third-party SaaS services (for example, Salesforce, ServiceNow) for single sign-on and identity provisioning. Direct assignments to resources can be natively governed from the cloud when combined with [Azure AD access reviews](../governance/access-reviews-overview.md) and [Azure AD entitlement management](../fundamentals/ops-guide-ops.md). Direct assignment might be a good fit for end-user-facing assignment. ++Some scenarios might require granting access to on-premises resources through on-premises Active Directory security groups. For those cases, consider the synchronization cycle to Azure AD when designing process SLAs. ++## Authentication management ++This section describes the checks to perform and actions to take for credential management and access policies based on your organization's security posture. 
++### Credential management ++#### Strong credentials ++All human identities (local accounts and external identities provisioned through B2B collaboration) in the isolated environment must be provisioned with strong authentication credentials such as multi-factor authentication or a FIDO key. Environments with an underlying on-premises infrastructure with strong authentication such as smart card authentication can continue using smart card authentication in the cloud. ++#### Passwordless credentials ++A [passwordless solution](../authentication/concept-authentication-passwordless.md) is the most convenient and secure method of authentication. Passwordless credentials such as [FIDO security keys](../authentication/howto-authentication-passwordless-security-key.md) and [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) are recommended for human identities with privileged roles. ++#### Password protection ++If the environment is synchronized from an on-premises Active Directory forest, you should deploy [Azure AD password protection](../authentication/concept-password-ban-bad-on-premises.md) to eliminate weak passwords in your organization. [Azure AD smart lockout](../authentication/howto-password-smart-lockout.md) should also be used in hybrid or cloud-only environments to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. ++#### Self-service password management ++Password change and reset requests are one of the biggest sources of help desk call volume and cost. In addition to cost, changing the password as a tool to mitigate a user risk is a fundamental step in improving the security posture of your organization. At a minimum, deploy [Self-Service Password Management](../authentication/howto-sspr-deployment.md) for human and test accounts with passwords to deflect help desk calls. ++#### External identities passwords ++By using Azure AD B2B collaboration, an [invitation and redemption process](../external-identities/what-is-b2b.md) lets external users such as partners, developers, and subcontractors use their own credentials to access your company's resources. This reduces the need to introduce more passwords into the isolated tenants. ++>[!Note] +>Some applications, infrastructure, or workflows might require a local credential. Evaluate this on a case-by-case basis. ++#### Service principal credentials ++For scenarios where service principals are needed, use certificate credentials for service principals or [Conditional Access for workload identities](../conditional-access/workload-identity.md). If necessary, use client secrets as an exception to organizational policy. ++In both cases, Azure Key Vault can be used with Azure managed identities, so that the runtime environment (for example, an Azure function) can retrieve the credential from the key vault. ++See this example of how to [create a service principal with a self-signed certificate](../develop/howto-authenticate-service-principal-powershell.md) for certificate-based service principal authentication. ++### Access policies ++In the following sections are recommendations for Azure solutions. 
For general guidance on Conditional Access policies for individual environments, see the [CA Best practices](../conditional-access/overview.md), [Azure AD Operations Guide](../fundamentals/ops-guide-auth.md), and [Conditional Access for Zero Trust](/azure/architecture/guide/security/conditional-access-zero-trust): ++* Define [Conditional Access policies](../conditional-access/workload-identity.md) for the [Microsoft Azure Management](../authentication/howto-password-smart-lockout.md) cloud app to enforce identity security posture when accessing Azure Resource Manager. This should include MFA controls and device-based controls to enable access only through secure workstations (more on this in the Privileged Roles section under Identity Governance). Additionally, use [Conditional Access to filter for devices](../conditional-access/concept-condition-filters-for-devices.md). ++* All applications onboarded to isolated environments must have explicit Conditional Access policies applied as part of the onboarding process. ++* Define Conditional Access policies for [security information registration](../conditional-access/howto-conditional-access-policy-registration.md) that reflect a secure root-of-trust process on-premises (for example, for workstations in physical locations, identifiable by IP addresses, that employees must visit in person for verification). ++* Consider managing Conditional Access policies at scale with automation using the [MS Graph CA API](../conditional-access/howto-conditional-access-apis.md). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants. ++* Consider using Conditional Access to restrict workload identities. Create a policy to limit or better control access based on location or other relevant circumstances. ++### Authentication Challenges ++* External identities provisioned with Azure AD B2B might need to reprovision multi-factor authentication (MFA) credentials in the resource tenant. This might be necessary if a cross-tenant access policy hasn't been set up with the resource tenant. This means that onboarding to the system is bootstrapped with a single factor. With this approach, the risk mitigation is for the organization to have a process to proof the user and credential risk profile prior to a B2B invitation being issued. Additionally, apply Conditional Access to the registration process as described previously. ++* Use [External identities cross-tenant access settings](../external-identities/cross-tenant-access-overview.md) to manage how your organization collaborates with other Azure AD organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](../external-identities/cross-tenant-access-settings-b2b-direct-connect.md). ++* For specific device configuration and control, you can use device filters in Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md). This enables you to restrict access to Azure management tools from a designated secure admin workstation (SAW). Other approaches you can take include using [Azure Virtual Desktop](../../virtual-desktop/environment-setup.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview). ++* Billing management applications such as Azure EA portal or MCA billing accounts aren't represented as cloud applications for Conditional Access targeting. 
As a compensating control, define separate administration accounts and target Conditional Access policies to those accounts using an "All Apps" condition. ++## Identity Governance ++### Privileged roles ++Below are some identity governance principles to consider across all the tenant configurations for isolation. ++* **No standing access** - No human identities should have standing access to perform privileged operations in isolated environments. Azure Role-based access control (RBAC) integrates with [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM). PIM provides just-in-time activation determined by security gates such as Multi-Factor Authentication, approval workflow, and limited duration. ++* **Number of admins** - Organizations should define the minimum and maximum numbers of humans holding a privileged role to mitigate business continuity risks. With too few privileged role holders, there might not be enough time-zone coverage. Mitigate security risks by having as few administrators as possible, following the least-privilege principle. ++* **Limit privileged access** - Create cloud-only, role-assignable groups for high-privilege or sensitive roles. This offers protection of the assigned users and the group object. ++* **Least privileged access** - Identities should only be granted the permissions needed to perform the privileged operations per their role in the organization. ++ * Azure RBAC [custom roles](../../role-based-access-control/custom-roles.md) allow you to design least-privileged roles based on organizational needs. We recommend that custom role definitions are authored or reviewed by specialized security teams to mitigate risks of unintended excessive privileges. Authoring of custom roles can be audited through [Azure Policy](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json). ++ * To mitigate accidental use of roles that aren't meant for wider use in the organization, use Azure Policy to define explicitly which role definitions can be used to assign access. Learn more from this [GitHub Sample](https://github.com/Azure/azure-policy/tree/master/samples/Authorization/allowed-role-definitions). ++* **Privileged access from secure workstations** - All privileged access should occur from secure, locked-down devices. Separating these sensitive tasks and accounts from daily-use workstations and devices protects privileged accounts from phishing attacks, application and OS vulnerabilities, various impersonation attacks, and credential theft attacks such as keystroke logging, [Pass-the-Hash](https://aka.ms/AzureADSecuredAzure/27a), and Pass-The-Ticket. ++Some approaches you can use for [using secure devices as part of your privileged access story](/security/compass/privileged-access-devices) include using Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md), using [Azure Virtual Desktop](../../virtual-desktop/environment-setup.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview), or creating Azure-managed workstations or privileged access workstations. ++* **Privileged role process guardrails** - Organizations must define processes and technical guardrails to ensure that privileged operations can be executed whenever needed while complying with regulatory requirements. 
Examples of guardrails criteria include: ++ * Qualification of humans with privileged roles (for example, full-time employee/vendor, clearance level, citizenship) ++ * Explicit incompatibility of roles (also known as separation of duties). For example, teams that hold Azure AD directory roles shouldn't also be responsible for managing Azure Resource Manager privileged roles. ++ * Whether direct user or group assignments are preferred for which roles. ++### Resource access ++* **Attestation** - Identities that hold privileged roles should be reviewed periodically to keep membership current and justified. [Azure AD Access Reviews](../governance/access-reviews-overview.md) integrate with Azure RBAC roles, group memberships and Azure AD B2B external identities. ++* **Lifecycle** - Privileged operations might require access to multiple resources such as line-of-business applications, SaaS applications, and Azure resource groups and subscriptions. [Azure AD Entitlement Management](../governance/entitlement-management-overview.md) allows you to define access packages that represent a set of resources assigned to users as a unit, and to establish a validity period, approval workflows, and so on. ++### Governance challenges ++* The Azure Enterprise Agreement (Azure EA) portal doesn't integrate with Azure RBAC or Conditional Access. The mitigation for this is to use dedicated administration accounts that can be targeted with policies and additional monitoring. ++* The Azure EA portal doesn't provide an audit log. To mitigate this, consider an automated, governed process to provision subscriptions with the considerations described above, use dedicated EA accounts, and audit the authentication logs. ++* [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) roles don't integrate natively with PIM. To mitigate this, use dedicated MCA accounts and monitor usage of these accounts. ++* Monitoring IAM assignments outside Azure AD PIM isn't automated through Azure Policy. The mitigation is to not grant Subscription Owner or User Access Administrator roles to engineering teams. Instead, create groups assigned to least-privileged roles such as Contributor, and delegate the management of those groups to engineering teams. ++* Privileged roles in Azure AD B2C tenants aren't integrated with Azure AD PIM. The mitigation is to create dedicated accounts in the organization's Azure AD tenant, onboard them in the Azure AD B2C tenant, and apply Conditional Access policies to these dedicated administration accounts. ++* Azure AD B2C tenant privileged roles aren't integrated with Azure AD Access Reviews. The mitigation is to create dedicated accounts in the organization's Azure AD tenant, add these accounts to a group, and perform regular access reviews on this group. ++* There are no technical controls to subordinate the creation of tenants to an organization. However, the activity is recorded in the Audit log. The onboarding to the billing plane is a compensating control at the gate. This needs to be complemented with monitoring and alerts. ++* There's no out-of-the-box product to implement the subscription provisioning workflow recommended above. Organizations need to implement their own workflow. ++## Tenant and subscription lifecycle management ++### Tenant lifecycle ++* We recommend implementing a process to request a new corporate Azure AD tenant. The process should account for: ++ * Business justification to create it. 
Creating a new Azure AD tenant will increase complexity significantly, so it's key to ascertain whether a new tenant is necessary. ++ * The Azure cloud in which it should be created (for example, Commercial, Government, etc.). ++ * Whether the tenant is production or nonproduction ++ * Directory data residency requirements ++ * Global Administrators who will manage it ++ * Training and understanding of common security requirements. ++* Upon approval, the Azure AD tenant will be created, configured with necessary baseline controls, and onboarded in the billing plane, monitoring, etc. ++* Regular review of the Azure AD tenants in the billing plane needs to be implemented to detect and discover tenant creation outside the governed process. Refer to the *Inventory and Visibility* section of this document for further details. ++* Azure AD B2C tenant creation can be controlled using Azure Policy. The policy executes when an Azure subscription is associated with the B2C tenant (a prerequisite for billing). Customers can limit the creation of Azure AD B2C tenants to specific management groups. ++### Subscription lifecycle ++Below are some considerations when designing a governed subscription lifecycle process: ++* Define a taxonomy of applications and solutions that require Azure resources. All teams requesting subscriptions should supply their "product identifier" when requesting subscriptions. This information taxonomy will determine: ++ * Azure AD tenant to provision the subscription ++ * Azure EA account to use for subscription creation ++ * Naming convention ++ * Management group assignment ++ * Other aspects such as tagging, cross-charging, product-view usage, etc. ++* Don't allow ad-hoc subscription creation through the portals or by other means. Instead, consider managing [subscriptions programmatically using Azure Resource Manager](../../cost-management-billing/manage/programmatically-create-subscription.md) and pulling consumption and billing reports [programmatically](/rest/api/consumption/). This can help limit subscription provisioning to authorized users and enforce your policy and taxonomy goals. Guidance on following [AZOps principles](https://github.com/azure/azops/wiki/introduction) can be used to help create a practical solution. ++* When a subscription is provisioned, create Azure AD cloud groups to hold standard Azure Resource Manager roles needed by application teams, such as Contributor, Reader, and approved custom roles. This enables you to manage Azure RBAC role assignments with governed privileged access at scale. ++ 1. Configure the groups to become eligible for Azure RBAC roles using Azure AD PIM with the corresponding controls such as activation policy, access reviews, approvers, etc. ++ 1. Then [delegate the management of the groups](../enterprise-users/groups-self-service-management.md) to solution owners. ++ 1. As a guardrail, don't assign product owners to User Access Administrator or Owner roles to avoid inadvertent direct assignment of roles outside Azure AD PIM, or potentially changing the subscription to a different tenant altogether. ++ 1. For customers who choose to enable cross-tenant subscription management in non-production tenants through Azure Lighthouse, make sure that the same access policies from the production privileged account (for example, privileged access only from [secured workstations](/security/compass/privileged-access-deployment)) are enforced when authenticating to manage subscriptions. 
++* If your organization has pre-approved reference architectures, the subscription provisioning can be integrated with resource deployment tools such as [Azure Blueprints](../../governance/blueprints/overview.md) or [Terraform](https://www.terraform.io). ++* Given the tenant affinity to Azure Subscriptions, subscription provisioning should be aware of multiple identities for the same human actor (employee, partner, vendor, etc.) across multiple tenants and assign access accordingly. ++### Azure AD B2C tenants ++* In an Azure AD B2C tenant, the built-in roles don't support PIM. To increase security, we recommend using Azure AD B2B collaboration to onboard the engineering teams managing Customer Identity Access Management (CIAM) from your Azure tenant, and assign them to Azure AD B2C privileged roles. ++* Following the emergency access guidelines for Azure AD above, consider creating equivalent [emergency access accounts](../roles/security-emergency-access.md) in addition to the external administrators described above. ++* We recommend the logical ownership of the underlying Azure AD subscription of the B2C tenant aligns with the CIAM engineering teams, in the same way that the rest of Azure subscriptions are used for the B2C solutions. ++## Operations ++The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), the [Microsoft cloud security benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](./ops-guide-ops.md) for detailed guidance to operate individual environments. ++### Cross-environment roles and responsibilities ++**Enterprise-wide SecOps architecture** - Members of operations and security teams from all environments in the organization should jointly define the following: ++* Principles to define when environments need to be created, consolidated, or deprecated. ++* Principles to define management group hierarchy on each environment. ++* Billing plane (EA portal / MCA) security posture, operational posture, and delegation approach. ++* Tenant creation process. ++* Enterprise application taxonomy. ++* Azure subscription provisioning process. ++* Isolation and administration autonomy boundaries and risk assessment across teams and environments. ++* Common baseline configuration and security controls (technical and compensating) and operational baselines to be used in all environments. ++* Common standard operational procedures and tooling that spans multiple environments (for example, monitoring, provisioning). ++* Agreed upon delegation of roles across multiple environments. ++* Segregation of duty across environments. ++* Common supply chain management for privileged workstations. ++* Naming conventions. ++* Cross-environment correlation mechanisms. ++**Tenant creation** - A specific team should own creating the tenant following standardized procedures defined by enterprise-wide SecOps architecture. This includes: ++* Underlying license provisioning (for example, Microsoft 365). ++* Onboarding to corporate billing plan (for example, Azure EA or MCA). ++* Creation of Azure management group hierarchy. ++* Configuration of management policies for various perimeters including identity, data protection, Azure, etc. ++* Deployment of security stack per agreed upon cybersecurity architecture, including diagnostic settings, SIEM onboarding, CASB onboarding, PIM onboarding, etc. 
++* Configuration of Azure AD roles based on agreed upon delegation. ++* Configuration and distribution of initial privileged workstations. ++* Provisioning emergency access accounts. ++* Configuration of identity provisioning stack. ++**Cross-environment tooling architecture** - Some tools such as identity provisioning and source control pipelines might need to work across multiple environments. These tools should be considered critical to the infrastructure and must be architected, designed, implemented, and managed as such. As a result, architects from all environments should be involved whenever cross-environment tools need to be defined. ++### Inventory and visibility ++**Azure subscription discovery** - For each discovered tenant, an Azure AD global administrator can [elevate access](../../role-based-access-control/elevate-access-global-admin.md) to gain visibility of all subscriptions in the environment. This elevation will assign the global administrator the User Access Administrator built-in role at the root management group. ++>[!NOTE] +>This action is highly privileged and might give the admin access to subscriptions that hold extremely sensitive information if that data has not been properly isolated. ++**Enabling read access to discover resources** - Management groups enable RBAC assignment at scale across multiple subscriptions. Customers can grant a Reader role to a centralized IT team by configuring a role assignment in the root management group, which will propagate to all subscriptions in the environment. ++**Resource discovery** - After gaining resource Read access in the environment, [Azure Resource Graph](../../governance/resource-graph/overview.md) can be used to query resources in the environment. ++### Logging and monitoring ++**Central security log management** - Ingest logs from each environment in a [centralized way](/security/benchmark/azure/security-control-logging-monitoring), following consistent best practices across environments (for example, diagnostics settings, log retention, SIEM ingestion, etc.). [Azure Monitor](../../azure-monitor/overview.md) can be used to ingest logs from different sources such as endpoint devices, network, operating systems' security logs, etc. ++Detailed information on using automated or manual processes and tools to monitor logs as part of your security operations is available at [Azure Active Directory security operation guide](https://github.com/azure/azops/wiki/introduction). ++Some environments might have regulatory requirements that limit which data (if any) can leave a given environment. If centralized monitoring across environments isn't possible, teams should have operational procedures to correlate activities of identities across environments for auditing and forensics purposes such as cross-environment lateral movement attempts. It's recommended that the object unique identifiers human identities belonging to the same person is discoverable, potentially as part of the identity provisioning systems. ++The log strategy must include the following Azure AD logs for each tenant used in the organization: ++* Sign-in activity ++* Audit logs ++* Risk events ++Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](/graph/tutorial-riskdetection-api). 
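As a minimal sketch of that ingestion path, the Microsoft Graph PowerShell SDK can pull recent risk detections from each tenant so they can be forwarded to whatever central pipeline you use; the one-day window and high-risk filter below are illustrative assumptions.

```powershell
# Sketch: pull recent high-risk detections from the current tenant for central ingestion.
# Requires the Microsoft.Graph module and the IdentityRiskEvent.Read.All permission.
Connect-MgGraph -Scopes 'IdentityRiskEvent.Read.All'

$since = (Get-Date).ToUniversalTime().AddDays(-1).ToString('yyyy-MM-ddTHH:mm:ssZ')

Get-MgRiskDetection -Filter "detectedDateTime ge $since and riskLevel eq 'high'" -All |
    Select-Object UserPrincipalName, RiskEventType, RiskLevel, RiskState, DetectedDateTime
```

Run a script like this per tenant under the same scheduling and retention controls as the rest of your SIEM ingestion.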
++The following diagram shows the different data sources that need to be incorporated as part of the monitoring strategy: ++Azure AD B2C tenants can be [integrated with Azure Monitor](../../active-directory-b2c/azure-monitor.md). We recommend monitoring Azure AD B2C using the same criteria discussed above for Azure AD. ++Subscriptions that have enabled cross-tenant management with Azure Lighthouse can enable cross-tenant monitoring if the logs are collected by Azure Monitor. The corresponding Log Analytics workspaces can reside in the resource tenant and can be analyzed centrally in the managing tenant using Azure Monitor workbooks. To learn more, check [Monitor delegated resources at scale - Azure Lighthouse](../../lighthouse/how-to/monitor-at-scale.md). ++### Hybrid infrastructure OS security logs ++All hybrid identity infrastructure OS logs should be archived and carefully monitored as a Tier 0 system, given the surface area implications. This includes: ++* AD FS servers and Web Application Proxy ++* Azure AD Connect ++* Application Proxy agents ++* Password write-back agents ++* Password Protection Gateway machines ++* NPS servers that have the Azure AD Multi-Factor Authentication RADIUS extension ++[Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md) must be deployed to monitor identity synchronization and federation (when applicable) for all environments. ++**Log storage retention** - All environments should have a cohesive log storage retention strategy, design, and implementation to facilitate a consistent toolset (for example, SIEM systems such as Azure Sentinel), common queries, investigation, and forensics playbooks. Azure Policy can be used to set up diagnostic settings. ++**Monitoring and log reviewing** - The operational tasks around identity monitoring should be consistent and have owners in each environment. As described above, strive to consolidate these responsibilities across environments to the extent allowed by regulatory compliance and isolation requirements. ++The following scenarios must be explicitly monitored and investigated: ++* **Suspicious activity** - All [Azure AD risk events](../identity-protection/overview-identity-protection.md) should be monitored for suspicious activity. All tenants should define the network [named locations](../conditional-access/location-condition.md) to avoid noisy detections on location-based signals. [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Azure Security Center. It's recommended that any risk detection investigation includes all the environments in which the identity is provisioned (for example, if a human identity has an active risk detection in the corporate tenant, the team operating the customer-facing tenant should also investigate the activity of the corresponding account in that environment). ++* **User entity behavioral analytics (UEBA) alerts** - UEBA should be used to get insightful information based on anomaly detection. [Microsoft Defender for Cloud Apps](https://www.microsoft.com/security/business/siem-and-xdr/microsoft-defender-cloud-apps) provides [UEBA in the cloud](/defender-cloud-apps/tutorial-ueba). Customers can integrate [on-premises UEBA from Microsoft Defender for Identity](/defender-cloud-apps/mdi-integration). Microsoft Defender for Cloud Apps reads signals from Azure AD Identity Protection.
++* **Emergency access accounts activity** - Any access using [emergency access accounts](../fundamentals/security-operations-privileged-accounts.md) should be monitored and [alerts](../users-groups-roles/directory-emergency-access.md) created for investigations. This monitoring must include: ++ * Sign-ins ++ * Credential management ++ * Any updates on group memberships ++ * Application Assignments ++* **Billing management accounts** - Given the sensitivity of accounts with billing management roles in Azure EA or MCA, and their significant privilege, it's recommended to monitor and alert: ++ * Sign in attempts by accounts with billing roles. ++ * Any attempt to authenticate to applications other than the EA Portal. ++ * Any attempt to authenticate to applications other than Azure Resource Management if using dedicated accounts for MCA billing tasks. ++ * Assignment to Azure resources using dedicated accounts for MCA billing tasks. ++* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC. ++* **Classic role assignments** - Organizations should use the modern Azure RBAC role infrastructure instead of the classic roles. As a result, the following events should be monitored: ++ * Assignment to classic roles at the subscription level ++* **Tenant-wide configurations** - Any tenant-wide configuration service should generate alerts in the system. ++ * Updating Custom Domains ++ * Updating branding ++ * Azure AD B2B allow/block list ++ * Azure AD B2B allowed identity providers (SAML IDPs through direct federation or Social Logins) ++ * Conditional Access Policies changes ++* **Application and service principal objects** ++ * New Applications / Service principals that might require Conditional Access policies ++ * Application Consent activity ++* **Management group activity** - The following Identity Aspects of management groups should be monitored: ++ * RBAC role assignments at the MG ++ * Azure Policies applied at the MG ++ * Subscriptions moved between MGs ++ * Any changes to security policies to the Root MG ++* **Custom roles** ++ * Updates of the custom role definitions ++ * New custom roles created ++* **Custom governance rules** - If your organizations established any separation of duties rules (for example, a holder of a Global Administrator tenant GA can't be owner/contributor of subscriptions), create alerts or configure periodic reviews to detect violations. ++**Other monitoring considerations** - Azure subscriptions that contain resources used for Log Management should be considered as critical infrastructure (Tier 0) and locked down to the Security Operations team of the corresponding environment. Consider using tools such as Azure Policy to enforce additional controls to these subscriptions. ++### Operational tools ++**Cross-environment** tooling design considerations: ++* Whenever possible, operational tools that will be used across multiple tenants should be designed to run as an Azure AD multi-tenant application to avoid redeployment of multiple instances on each tenant and avoid operational inefficiencies. 
The implementation should include authorization logic to ensure that isolation between users and data is preserved. ++* Add alerts and detections to monitor any cross-environment automation (for example, identity provisioning) and threshold limits for fail-safes. For example, you may want an alert if deprovisioning of user accounts reaches a specific level, as it may indicate a bug or operational error that could have broad impact. ++* Any automation that orchestrates cross-environment tasks should be operated as a highly privileged system. This system should be homed in the highest-security environment and pull from outside sources if data from other environments is required. Data validation and thresholds need to be applied to maintain system integrity. A common cross-environment task is identity lifecycle management to remove identities from all environments for a terminated employee. ++**IT service management tools** - Organizations using IT Service Management (ITSM) systems such as ServiceNow should configure [Azure AD PIM role activation settings](../privileged-identity-management/pim-how-to-change-default-settings.md) to require a ticket number as part of the activation process. ++Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Service Management Connector](../../azure-monitor/alerts/itsmc-overview.md). ++**Operational practices** - Minimize the operational activities that require human identities to have direct access to the environment. Instead, model them as Azure Pipelines that execute common operations (for example, add capacity to a PaaS solution, run diagnostics, etc.) and limit direct access to the Azure Resource Manager interfaces to "break glass" scenarios. ++### Operations challenges ++* Monitoring of service principal activity is limited for some scenarios. ++* Azure AD PIM alerts don't have an API. The mitigation is to have a regular review of those PIM alerts. ++* The Azure EA portal doesn't provide monitoring capabilities. The mitigation is to have dedicated administration accounts and monitor the account activity. ++* MCA doesn't provide audit logs for billing tasks. The mitigation is to have dedicated administration accounts and monitor the account activity. ++* Some Azure services needed to operate the environment must be redeployed and reconfigured across environments because they can't be multi-tenant or multi-cloud. ++* There's no full API coverage across Microsoft Online Services to fully achieve infrastructure as code. The mitigation is to use APIs as much as possible and use portals for the remainder. This [open-source initiative](https://microsoft365dsc.com/) can help you determine an approach that might work for your environment. ++* There's no programmatic capability to discover resource tenants that have delegated subscription access to identities in a managing tenant. For example, if a security group in the contoso.com tenant is enabled to manage subscriptions in the fabrikam.com tenant, administrators in contoso.com don't have an API to discover that this delegation took place. ++* Specific account activity monitoring (for example, break-glass accounts, billing management accounts) isn't provided out of the box. The mitigation is for customers to create their own alert rules. ++* Tenant-wide configuration monitoring isn't provided out of the box. The mitigation is for customers to create their own alert rules.
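For the last two gaps, a scheduled script or automation runbook can approximate an alert rule. The sketch below assumes the Microsoft Graph PowerShell SDK and that the audit activity names shown (for example, "Update conditional access policy") match what your tenant emits; it flags recent Conditional Access policy changes.

```powershell
# Sketch: flag recent Conditional Access policy changes from the Azure AD audit log.
# Requires the Microsoft.Graph module and the AuditLog.Read.All permission.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

$since      = (Get-Date).ToUniversalTime().AddHours(-24).ToString('yyyy-MM-ddTHH:mm:ssZ')
$activities = 'Add conditional access policy',
              'Update conditional access policy',
              'Delete conditional access policy'   # assumed activity display names

$changes = Get-MgAuditLogDirectoryAudit -Filter "activityDateTime ge $since" -All |
    Where-Object { $_.ActivityDisplayName -in $activities }

foreach ($change in $changes) {
    # Replace Write-Warning with a call into your alerting pipeline (ITSM ticket, Teams webhook, etc.).
    Write-Warning "$($change.ActivityDateTime) $($change.ActivityDisplayName) by $($change.InitiatedBy.User.UserPrincipalName)"
}
```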
++## Next steps ++* [Introduction to delegated administration and isolated environments](secure-introduction.md) ++* [Azure AD fundamentals](../fundamentals/secure-fundamentals.md) ++* [Azure resource management fundamentals](secure-resource-management.md) ++* [Resource isolation in a single tenant](secure-single-tenant.md) ++* [Resource isolation with multiple tenants](secure-multiple-tenants.md) |
active-directory | Secure External Access Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-external-access-resources.md | + + Title: Plan an Azure Active Directory B2B collaboration deployment +description: A guide for architects and IT administrators on securing and governing external access to internal resources +++++++ Last updated : 4/28/2023+++++++# Plan an Azure Active Directory B2B collaboration deployment ++Secure collaboration with your external partners ensures they have correct access to internal resources, and for the expected duration. Learn about governance practices to reduce security risks, meet compliance goals, and ensure accurate access. ++## Governance benefits ++Governed collaboration improves clarity of ownership of access, reduces exposure of sensitive resources, and enables you to attest to access policy. ++* Manage external organizations, and their users who access resources +* Ensure access is correct, reviewed, and time bound +* Empower business owners to manage collaboration with delegation ++## Collaboration methods ++Traditionally, organizations use one of two methods to collaborate: ++* Create locally managed credentials for external users, or +* Establish federations with partner identity providers (IdP) ++Both methods have drawbacks. For more information, see the following table. ++| Area of concern | Local credentials | Federation | +|-||| +| Security | - Access continues after external user terminates<br> - UserType is Member by default, which grants too much default access | - No user-level visibility <br> - Unknown partner security posture| +| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might use consumer email| +| Complexity | Partner users manage more credentials | Complexity grows with each new partner, and increased for partners| ++Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security. 
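To illustrate the mechanics (not a full deployment plan), a single guest invitation issued through the Microsoft Graph PowerShell SDK might look like the following sketch; the addresses and redirect URL are placeholders.

```powershell
# Sketch: invite one external user as a guest in the resource tenant.
# Requires the Microsoft.Graph module and the User.Invite.All permission.
Connect-MgGraph -Scopes 'User.Invite.All'

$invitation = New-MgInvitation -InvitedUserEmailAddress 'pat@fabrikam.com' `
                               -InvitedUserDisplayName 'Pat (Fabrikam)' `
                               -InviteRedirectUrl 'https://myapps.microsoft.com' `
                               -SendInvitationMessage:$true

# The guest user object created in the directory; govern it like any other identity.
$invitation.InvitedUser.Id
```

The resulting guest object can then be placed under the governance controls described in this article, such as access reviews, entitlement management, and Conditional Access.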
++## Azure AD B2B benefits ++- If the home identity is disabled or deleted, external users can't access resources +- User home IdP handles authentication and credential management +- Resource tenant controls guest-user access and authorization +- Collaborate with users who have an email address, but no infrastructure +- IT departments don't connect out-of-band to set up access or federation +- Guest user access is protected by the same security processes as internal users +- Clear end-user experience with no extra credentials required +- Users collaborate with partners without IT department involvement +- Guest default permissions in the Azure AD directory are limited or highly restricted ++## Next steps ++* [Determine your security posture for external access](1-secure-access-posture.md) +* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) +* [Create a security plan for external access](3-secure-access-plan.md) +* [Securing external access with groups](4-secure-access-groups.md) +* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md) +* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md) +* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) +* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md) +* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) +* [Convert local guest accounts](10-secure-local-guest.md) +* [Onboard external users to line-of-business applications](11-onboard-external-user.md) + |
active-directory | Secure Fundamentals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-fundamentals.md | + + Title: Fundamentals of securing with Azure Active Directory +description: Fundamentals of securing your tenants in Azure Active Directory. +++++++ Last updated : 7/5/2022+++++++# Azure Active Directory fundamentals ++Azure Active Directory (Azure AD) provides an identity and access boundary for Azure resources and trusting applications. Most environment-separation requirements can be fulfilled with delegated administration in a single Azure AD tenant. This configuration reduces management overhead of your systems. However, some specific cases, for example complete resource and identity isolation, require multiple tenants. ++You must determine your environment separation architecture based on your needs. Areas to consider include: ++* **Resource separation**. If a resource can change directory objects such as user objects, and the change would interfere with other resources, the resource may need to be isolated in a multi-tenant architecture. ++* **Configuration separation**. Tenant-wide configurations affect all resources. The effect of some tenant-wide configurations can be scoped with conditional access (CA) policies and other methods. If you have a need for different tenant configurations that can't be scoped with CA policies, you may need a multi-tenant architecture. ++* **Administrative separation**. You can delegate the administration of management groups, subscriptions, resource groups, resources, and some policies within a single tenant. A Global Administrator always has access to everything within the tenant. If you need to ensure that the environment doesn't share administrators with another environment, you need a multi-tenant architecture. ++To stay secure, you must follow best practices for identity provisioning, authentication management, identity governance, lifecycle management, and operations consistently across all tenants. ++## Terminology ++This list of terms is commonly associated with Azure AD and relevant to this content: ++**Azure AD tenant**. A dedicated and trusted instance of Azure AD that is automatically created when your organization signs up for a Microsoft cloud service subscription. Examples of subscriptions include Microsoft Azure, Microsoft Intune, or Microsoft 365. An Azure AD tenant generally represents a single organization or security boundary. The Azure AD tenant includes the users, groups, devices, and applications used to perform identity and access management (IAM) for tenant resources. ++**Environment**. In the context of this content, an environment is a collection of Azure subscriptions, Azure resources, and applications that are associated with one or more Azure AD tenets. The Azure AD tenant provides the identity control plane to govern access to these resources. ++**Production environment**. In the context of this content, a production environment is the live environment with the infrastructure and services that end users directly interact with. For example, a corporate or customer-facing environment. ++**Non-production environment**. In the context of this content, a nonproduction environment refers to an environment used for: ++* Development ++* Testing ++* Lab purposes ++Non-production environments are commonly referred to as sandbox environments. ++**Identity**. An identity is a directory object that can be authenticated and authorized for access to a resource. 
Identity objects exist for human identities and non-human identities. Non-human entities include: ++* Application objects ++* Workload identities (formerly described as service principles) ++* Managed identities ++* Devices ++**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). In this content, we refer to these types of identity as **external identities**. ++**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](../../marketplace/manage-aad-apps.md). ++* **Application object**. An Azure AD application is defined by its application object. The object resides in the Azure AD tenant where the application registered. The tenant is known as the application's "home" tenant. ++ * **Single-tenant** applications are created to only authorize identities coming from the "home" tenant. ++ * **Multi-tenant** applications allow identities from any Azure AD tenant to authenticate. ++* **Service principal object**. Although there are [exceptions](../../marketplace/manage-aad-apps.md), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object is referenced by multiple service principals across directories. ++**Service principal objects** are also directory identities that can perform tasks independently from human intervention. The service principal defines the access policy and permissions for a user or application in the Azure AD tenant. This mechanism enables core features such as authentication of the user or application during sign-in and authorization during resource access. ++Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](../develop/howto-create-service-principal-portal.md) whenever possible. ++* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) ++* **Device identity**: A device identity verifies the device in the authentication flow has undergone a process to attest the device is legitimate and meets the technical requirements. Once the device has successfully completed this process, the associated identity can be used to further control access to an organization's resources. 
With Azure AD, devices can authenticate with a certificate. ++Some legacy scenarios required a human identity to be used in *non-human* scenarios. For example, when service accounts being used in on-premises applications such as scripts or batch jobs require access to Azure AD. This pattern isn't recommended and we recommend you use [certificates](../authentication/concept-certificate-based-authentication-technical-deep-dive.md). However, if you do use a human identity with password for authentication, protect your Azure AD accounts with [Azure Active Directory Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). ++**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](../hybrid/index.yml). ++**Directory objects**. An Azure AD tenant contains the following common objects: ++* **User objects** represent human identities and non-human identities for services that currently don't support service principals. User objects contain attributes that have the required information about the user including personal details, group memberships, devices, and roles assigned to the user. ++* **Device objects** represent devices that are associated with an Azure AD tenant. Device objects contain attributes that have the required information about the device. This includes the operating system, associated user, compliance state, and the nature of the association with the Azure AD tenant. This association can take multiple forms depending on the nature of the interaction and trust level of the device. ++ * **Hybrid Domain Joined**. Devices that are owned by the organization and [joined](../devices/concept-hybrid-join.md) to both the on-premises Active Directory and Azure AD. Typically a device purchased and managed by an organization and managed by System Center Configuration Manager. ++ * **Azure AD Domain Joined**. Devices that are owned by the organization and joined to the organization's Azure AD tenant. Typically a device purchased and managed by an organization that is joined to Azure AD and managed by a service such as [Microsoft Intune](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/microsoft-intune). ++ * **Azure AD Registered**. Devices not owned by the organization, for example, a personal device, used to access company resources. Organizations may require the device be enrolled via [Mobile Device Management (MDM)](https://www.microsoft.com/itshowcase/mobile-device-management-at-microsoft), or enforced through [Mobile Application Management (MAM)](/office365/enterprise/office-365-client-support-mobile-application-management) without enrollment to access resources. This capability can be provided by a service such as Microsoft Intune. ++* **Group objects** contain objects for the purposes of assigning resource access, applying controls, or configuration. Group objects contain attributes that have the required information about the group including the name, description, group members, group owners, and the group type. 
Groups in Azure AD take multiple forms based on an organization's requirements and can be mastered in Azure AD or synchronized from on-premises Active Directory Domain Services (AD DS). ++ * **Assigned groups**. In Assigned groups, users are added to or removed from the group manually, synchronized from on-premises AD DS, or updated as part of an automated scripted workflow. An assigned group can be synchronized from on-premises AD DS or can be homed in Azure AD. ++ * **Dynamic membership groups**. In Dynamic groups, users are assigned to the group automatically based on defined attributes. This allows group membership to be dynamically updated based on data held within the user objects. A dynamic group can only be homed in Azure AD. ++**Microsoft Account (MSA)**. You can create Azure subscriptions and tenants using Microsoft Accounts (MSA). A Microsoft Account is a personal account (as opposed to an organizational account) and is commonly used by developers and for trial scenarios. When used, the personal account is always made a guest in an Azure AD tenant. ++## Azure AD functional areas ++These are the functional areas provided by Azure AD that are relevant to isolated environments. To learn more about the capabilities of Azure AD, see [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md). ++### Authentication ++**Authentication**. Azure AD provides support for authentication protocols compliant with open standards such as Open ID Connect, OAuth and SAML. Azure AD also provides capabilities to allow organizations to federate existing on-premises identity providers such as Active Directory Federation Services (AD FS) to authenticate access to Azure AD integrated applications. ++Azure AD provides industry-leading strong authentication options that organizations can use to secure access to resources. Azure Active Directory Multi-Factor Authentication, device authentication and password-less capabilities allow organizations to deploy strong authentication options that suit their workforce's requirements. ++**Single sign-on (SSO)**. With single sign-on, users sign in once with one account to access all resources that trust the directory such as domain-joined devices, company resources, software as a service (SaaS) applications, and all Azure AD integrated applications. For more information, see [single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md). ++### Authorization ++**Resource access assignment**. Azure AD provides and secures access to resources. Assigning access to a resource in Azure AD can be done in two ways: ++* **User assignment**: The user is directly assigned access to the resource and the appropriate role or permission is assigned to the user. ++* **Group assignment**: A group containing one or more users is assigned to the resource and the appropriate role or permission is assigned to the group ++**Application access policies**. Azure AD provides capabilities to further control and secure access to your organization's applications. ++**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](../conditional-access/index.yml). ++**Azure AD Identity Protection**. 
This feature enables organizations to automate the detection and remediation of identity-based risks, investigate risks, and export risk detection data to third-party utilities for further analysis. For more information, see [overview on Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). ++### Administration ++**Identity management**. Azure AD provides tools to manage the lifecycle of user, group, and device identities. [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) enables organizations to extend current, on-premises identity management solution to the cloud. Azure AD Connect manages the provisioning, de-provisioning, and updates to these identities in Azure AD. ++Azure AD also provides a portal and the Microsoft Graph API to allow organizations to manage identities or integrate Azure AD identity management into existing workflows or automation. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api). ++**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It also is used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system that is the level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](../devices/index.yml). ++**Configuration management**. Azure AD has service elements that need to be configured and managed to ensure the service is configured to an organization's requirements. These elements include domain management, SSO configuration, and application management to name but a few. Azure AD provides a portal and the Microsoft Graph API to allow organizations to manage these elements or integrate into existing processes. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api). ++### Governance ++**Identity lifecycle**. Azure AD provides capabilities to create, retrieve, delete, and update identities in the directory, including external identities. Azure AD also [provides services to automate the identity lifecycle](../app-provisioning/how-provisioning-works.md) to ensure it's maintained in line with your organization's needs. For example, using Access Reviews to remove external users who haven't signed in for a specified period. ++**Reporting and analytics**. An important aspect of identity governance is visibility into user actions. Azure AD provides insights into your environment's security and usage patterns. These insights include detailed information on: ++* What your users access ++* Where they access it from ++* The devices they use ++* Applications used to access ++Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](../reports-monitoring/index.yml). ++**Auditing**. Auditing provides traceability through logs for all changes done by specific features within Azure AD. Examples of activities found in audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles, and policies. 
Reporting in Azure AD enables you to audit sign-in activities, risky sign-ins, and users flagged for risk. For more information, see [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md). ++**Access certification**. Access certification is the process to prove that a user is entitled to have access to a resource at a point in time. Azure AD Access Reviews continually review the memberships of groups or applications and provide insight to determine whether access is required or should be removed. This enables organizations to effectively manage group memberships, access to enterprise applications, and role assignments to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews?](../governance/access-reviews-overview.md) ++**Privileged access**. [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions to Azure resources. It's used to protect privileged accounts by lowering the exposure time of privileges and increasing visibility into their use through reports and alerts. ++### Self-service management ++**Credential registration**. Azure AD provides capabilities to manage all aspects of user identity lifecycle and self-service capabilities to reduce the workload of an organization's helpdesk. ++**Group management**. Azure AD provides capabilities that enable users to request membership in a group for resource access and to create groups that can be used for securing resources or collaboration. These capabilities can be controlled by the organization so that appropriate controls are put in place. ++### Consumer Identity and Access Management (IAM) ++**Azure AD B2C**. Azure AD B2C is a service that can be enabled in an Azure subscription to provide identities to consumers for your organization's customer-facing applications. This is a separate island of identity and these users don't appear in the organization's Azure AD tenant. Azure AD B2C is managed by administrators in the tenant associated with the Azure subscription. ++## Next steps ++* [Introduction to delegated administration and isolated environments](secure-introduction.md) ++* [Azure resource management fundamentals](secure-resource-management.md) ++* [Resource isolation in a single tenant](secure-single-tenant.md) ++* [Resource isolation with multiple tenants](secure-multiple-tenants.md) ++* [Best practices](secure-best-practices.md) |
active-directory | Secure Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-introduction.md | + + Title: Delegated administration to secure with Azure Active Directory +description: Introduction to delegated administration and isolated environments in Azure Active Directory. +++++++ Last updated : 7/5/2022+++++++# Introduction to delegated administration and isolated environments ++An Azure Active Directory (Azure AD) single-tenant architecture with delegated administration is often adequate for separating environments. As detailed in other sections of this article, Microsoft provides many tools to do this. However, there may be times when your organization requires a degree of isolation beyond what can be achieved in a single tenant. ++Before discussing specific architectures, it's important to understand: ++* How a typical single tenant works. ++* How administrative units in Azure AD work. ++* The relationships between Azure resources and Azure AD tenants. ++* Common requirements driving isolation. ++## Azure AD tenant as a security boundary ++An Azure AD tenant provides identity and access management (IAM) capabilities to applications and resources used by the organization. ++An identity is a directory object that can be authenticated and authorized for access to a resource. Identity objects exist for human identities and non-human identities. To differentiate between human and non-human identities, human identities are referred to as identities and non-human identities are referred to as workload identities. Non-human entities include application objects, service principals, managed identities, and devices. The terminology is inconsistent across the industry, but generally a workload identity is something you need for your software entity to authenticate with some system. ++To distinguish between human and non-human identities, different terms are emerging across the IT industry to distinguish between the two: ++* **Identity** - Identity started by describing the Active Directory (AD) and Azure AD object used by humans to authenticate. In this series of articles, identity refers to objects that represent humans. ++* **Workload identity** - In Azure Active Directory (Azure AD), workload identities are applications, service principals, and managed identities. The workload identity is used to authenticate and access other services and resources. ++For more information on workload identities, see [What are workload identities](../develop/workload-identities-overview.md). ++The Azure AD tenant is an identity security boundary that is under the control of global administrators. Within this security boundary, administration of subscriptions, management groups, and resource groups can be delegated to segment administrative control of Azure resources. While not directly interacting, these groupings are dependent on tenant-wide configurations of policies and settings. And those settings and configurations are under the control of the Azure AD Global Administrators. ++Azure AD is used to grant objects representing identities access to applications and Azure resources. In that sense both Azure resources and applications trusting Azure AD are resources that can be managed with Azure AD. In the following diagram, The Azure AD tenant boundary shows the Azure AD identity objects and the configuration tools. Below the directory are the resources that use the identity objects for identity and access management. 
Following best practices, the environment is set up with a test environment to test the proper operation of IAM. ++![Diagram that shows shows Azure AD tenant boundary.](media/secure-introduction/tenant-boundary.png) ++### Access to apps that use Azure AD ++Identities can be granted access to many types of applications. Examples include: ++* Microsoft productivity services such as Exchange Online, Microsoft Teams, and SharePoint Online ++* Microsoft IT services such as Azure Sentinel, Microsoft Intune, and Microsoft 365 Defender ATP ++* Microsoft Developer tools such as Azure DevOps and Microsoft Graph API ++* SaaS solutions such as Salesforce and ServiceNow ++* On-premises applications integrated with hybrid access capabilities such as Azure AD Application Proxy ++* Custom in-house developed applications ++Applications that use Azure AD require directory objects to be configured and managed in the trusted Azure AD tenant. Examples of directory objects include application registrations, service principals, groups, and [schema attribute extensions](/graph/extensibility-overview). ++### Access to Azure resources ++Users, groups, and service principal objects (workload identities) in the Azure AD tenant are granted roles by using [Azure Role Based Access Control](../../role-based-access-control/overview.md) (RBAC) and [Azure attribute-based access control](../../role-based-access-control/conditions-overview.md) (ABAC). ++* Azure RBAC enables you to provide access based on role as determined by security principal, role definition, and scope. ++* Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A role assignment condition is another check that you can optionally add to your role assignment to provide more fine-grained access control. ++Azure resources, resource groups, subscriptions, and management groups are accessed through using these assigned RBAC roles. For example, the following diagram shows distribution of administrative capability in Azure AD using role-based access control. ++![Diagram that shows Azure AD role hierarchy.](media/secure-introduction/role-hierarchy.png) ++Azure resources that [support Managed Identities](../managed-identities-azure-resources/overview.md) allow resources to authenticate, be granted access to, and be assigned roles to other resources within the Azure AD tenant boundary. ++Applications using Azure AD for sign-in may also use Azure resources such as compute or storage as part of its implementation. For example, a custom application that runs in Azure and trusts Azure AD for authentication has directory objects and Azure resources. ++Lastly, all Azure resources in the Azure AD tenant affect tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md). ++### Access to Directory Objects ++As outlined in the previous diagram, identities, resources, and their relationships are represented in an Azure AD tenant as directory objects. Examples of directory objects include users, groups, service principals, and app registrations. ++Having a set of directory objects in the Azure AD tenant boundary engenders the following Capabilities: ++* Visibility. Identities can discover or enumerate resources, users, groups, access usage reporting and audit logs based on their permissions. For example, a member of the directory can discover users in the directory per Azure AD [default user permissions](../fundamentals/users-default-permissions.md). 
++* Applications can affect objects. Applications can manipulate directory objects through Microsoft Graph as part of their business logic. Typical examples include reading/setting user attributes, updating user's calendar, sending emails on behalf of the user, etc. Consent is necessary to allow applications to affect the tenant. Administrators can consent for all users. For more information, see [Permissions and consent in the Microsoft identity platform](../develop/v2-admin-consent.md). ++>[!NOTE] +>Use caution when using application permissions. For example, with Exchange Online, you should [scope application permissions to specific mailboxes and permissions](/graph/auth-limit-mailbox-access). ++* Throttling and service limits. Runtime behavior of a resource might trigger [throttling](/graph/throttling) in order to prevent overuse or service degradation. Throttling can occur at the application, tenant, or entire service level. Most commonly it occurs when an application has a large number of requests within or across tenants. Similarly, there are [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md) that might affect the runtime behavior of applications. ++## Administrative units for role management ++Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only: ++* Users ++* Groups ++* Devices ++In the following diagram, administrative units are used to segment the Azure AD tenant further based on the business or organizational structure. This is useful when different business units or groups have dedicated IT support staff. The administrative units can be used to provide privileged permissions that are limited to a designated administrative unit. ++![Diagram that shows Azure AD Administrative units.](media/secure-introduction/administrative-units.png) ++For more information on administrative units, see [Administrative units in Azure Active Directory](../roles/administrative-units.md). ++### Common reasons for resource isolation ++Sometimes a group of resources should be isolated from other resources for security or other reasons, such as the resources have unique access requirements. This is a good use case for using administrative units. You must determine which users and security principals should have resource access and in what roles. Reasons to isolate resources might include: ++* Developer teams need the flexibility to safely iterate during the software development lifecycle of apps. But the development and testing of apps that write to Azure AD can potentially affect the Azure AD tenant through write operations. Some examples of this include: ++ * New applications that may change Office 365 content such as SharePoint sites, OneDrive, MS Teams, etc. ++ * Custom applications that can change data of users with MS Graph or similar APIs at scale (for example, applications that are granted Directory.ReadWrite.All) ++ * DevOps scripts that update large sets of objects as part of a deployment lifecycle. 
++ * Developers of Azure AD integrated apps need the ability to create user objects for testing, and those user objects shouldn't have access to production resources. ++* Nonproduction Azure resources and applications that may affect other resources. For example, a new beta version of a SaaS application may need to be isolated from the production instance of the application and production user objects ++* Secret resources that should be shielded from discovery, enumeration, or takeover from existing administrators for regulatory or business critical reasons. ++## Configuration in a tenant ++Configuration settings in Azure AD can affect any resource in the Azure AD tenant through targeted, or tenant-wide management actions. Examples of tenant-wide settings include: ++* **External identities**: Global administrators for the tenant identify and control the external identities that can be provisioned in the tenant. ++ * Whether to allow external identities in the tenant. ++ * From which domain(s) external identities can be added. ++ * Whether users can invite users from other tenants. ++* **Named Locations**: Global administrators can create named locations, which can then be used to ++ * Block sign-ins from specific locations. ++ * Trigger conditional access policies such as MFA. ++ * Bypass security requirements ++>[!NOTE] +>Using [Named Locations](../conditional-access/location-condition.md) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles. +Allowed authentication methods: Global administrators set the authentication methods allowed for the tenant. ++* **Self-service options**. Global Administrators set self-service options such as self-service-password reset and create Microsoft 365 groups at the tenant level. ++The implementation of some tenant-wide configurations can be scoped as long as they don't get overridden by global administration policies. For example: ++* If the tenant is configured to allow external identities, a resource administrator can still exclude those identities from accessing a resource. ++* If the tenant is configured to allow personal device registration, a resource administrator can exclude those devices from accessing specific resources. ++* If named locations are configured, a resource administrator can configure policies either allowing or excluding access from those locations. ++### Common reasons for configuration isolation ++Configurations, controlled by Global Administrators, affect resources. While some tenant-wide configuration can be scoped with policies to not apply or partially apply to a specific resource, others can't. If a resource has configuration needs that are unique, isolate it in a separate tenant. Examples of configuration isolation scenarios include: ++* Resources having requirements that conflict with existing tenant-wide security or collaboration postures. (for example allowed authentication types, device management policies, ability to self-service, identity proofing for external identities, etc.). ++* Compliance requirements that scope certification to the entire environment, including all resources and the Azure AD tenant itself, especially when those requirements conflict with or must exclude other organizational resources. ++* External user access requirements that conflict with production or sensitive resource policies. 
++* Organizations that span multiple countries/regions, and companies hosted in a single Azure AD Tenant. For example, what settings and licenses are used in different countries/regions, or business subsidiaries. ++## Administration in a tenant ++Identities with privileged roles in the Azure AD tenant have the visibility and permissions to execute the configuration tasks described in the previous sections. Administration includes both the administration of identity objects such as users, groups, and devices, and the scoped implementation of tenant-wide configurations for authentication, authorization, etc. ++### Administration of directory objects ++Administrators manage how identity objects can access resources, and under what circumstances. They also can disable, delete, or modify directory objects based on their privileges. Identity objects include: ++* **Organizational identities**, such as the following, are represented by user objects: ++ * Administrators ++ * Organizational users ++ * Organizational developers ++ * Service Accounts ++ * Test users ++* **External identities** represent users from outside the organization such as: ++ * Partners, suppliers, or vendors that are provisioned with accounts local to the organization environment ++ * Partners, suppliers, or vendors that are provisioned via Azure B2B collaboration ++* **Groups** are represented by objects such as: ++ * Security groups ++ * [Microsoft 365 groups](/microsoft-365/community/all-about-groups) ++ * Dynamic Groups ++ * Administrative Units ++* **Devices** are represented by objects such as: ++ * Hybrid Azure AD joined devices (On-premises computers synchronized from on-premises Active Directory) ++ * Azure AD joined devices ++ * Azure AD registered mobile devices used by employees to access their workplace applications. ++ * Azure AD registered down-level devices (legacy). For example, Windows 2012 R2. ++* **Workload Identities** + * Managed identities ++ * Service principals ++ * Applications ++In a hybrid environment, identities are typically synchronized from the on-premises Active Directory environment using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). ++### Administration of identity services ++Administrators with appropriate permissions can also manage how tenant-wide policies are implemented at the level of resource groups, security groups, or applications. When considering administration of resources, keep the following in mind. Each can be a reason to keep resources together, or to isolate them. ++* A **Global Administrator** can take control of any Azure subscription linked to the Tenant. ++* An **identity assigned an Authentication Administrator role** can require nonadministrators to reregister for MFA or FIDO authentication. ++* A **Conditional Access (CA) Administrator** can create CA policies that require users signing-in to specific apps to do so only from organization-owned devices. They can also scope configurations. For example, even if external identities are allowed in the tenant, they can exclude those identities from accessing a resource. ++* A **Cloud Application Administrator** can consent to application permissions on behalf of all users. ++### Common reasons for administrative isolation ++Who should have the ability to administer the environment and its resources? There are times when administrators of one environment must not have access to another environment. 
Examples include: ++* Separation of tenant-wide administrative responsibilities to further mitigate the risk of security and operational errors affecting critical resources. ++* Regulations that constrain who can administer the environment based on conditions such as citizenship, residency, or clearance level, when those conditions can't be met with existing staff. ++## Security and operational considerations ++Given the interdependence between an Azure AD tenant and its resources, it's critical to understand the security and operational risks of compromise or error. If you're operating in a federated environment with synchronized accounts, an on-premises compromise can lead to an Azure AD compromise. ++* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, provided the identity granting access has sufficient privileges. While the effect of compromised non-privileged identities is largely contained, compromised administrators can have broad implications. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate the risk of identity compromise or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Azure AD Administrator Roles](../roles/delegate-by-task.md). Similarly, ensure that you create CA policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy). ++* **Federated environment compromise** - If you operate in a federated environment with synchronized accounts, a compromise of the on-premises identity infrastructure can propagate to the Azure AD tenant and the resources that trust it. ++* **Trusting resource compromise** - Human identities aren't the only security consideration. Any compromised component of the Azure AD tenant can affect trusting resources based on its level of permissions at the tenant and resource level. The effect of a compromised component of an Azure AD trusting resource is determined by the privileges of the resource; resources that are deeply integrated with the directory to perform write operations can have a profound impact on the entire tenant. Following [guidance for zero trust](/azure/architecture/guide/security/conditional-access-zero-trust) can help limit the impact of compromise. ++* **Application development** - Early stages of the development lifecycle for applications with write privileges to Azure AD, where bugs can unintentionally write changes to Azure AD objects, present a risk. Follow [Microsoft Identity platform best practices](../develop/identity-platform-integration-checklist.md) during development to mitigate these risks. ++* **Operational error** - A security incident can occur not only because of bad actors, but also because of an operational error by tenant administrators or resource owners. These risks exist in any architecture. Mitigate them with separation of duties, tiered administration, principles of least privilege, and adherence to best practices before trying to mitigate by using a separate tenant. ++Incorporating zero-trust principles into your Azure AD design strategy can help guide your design to mitigate these considerations. For more information, visit [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust). 
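To support the least-privilege guidance above, it helps to periodically review who holds privileged Azure AD roles. The following Azure CLI sketch is illustrative only; it assumes the signed-in account can call Microsoft Graph, and the role object ID is a placeholder:

```azurecli
# List the directory roles that are currently activated in the tenant.
az rest --method GET --url "https://graph.microsoft.com/v1.0/directoryRoles" \
  --query "value[].{id:id, role:displayName}" --output table

# List the members of one role (replace <role-object-id> with an id from the previous call).
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/directoryRoles/<role-object-id>/members" \
  --query "value[].{name:displayName, upn:userPrincipalName}" --output table
```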
++## Next steps ++* [Azure AD fundamentals](../fundamentals/secure-fundamentals.md) ++* [Azure resource management fundamentals](secure-resource-management.md) ++* [Resource isolation in a single tenant](secure-single-tenant.md) ++* [Resource isolation with multiple tenants](secure-multiple-tenants.md) ++* [Best practices](secure-best-practices.md) |
active-directory | Secure Multiple Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-multiple-tenants.md | + + Title: Resource isolation with multiple tenants to secure with Azure Active Directory +description: Introduction to resource isolation with multiple tenants in Azure Active Directory. +++++++ Last updated : 7/5/2022+++++++# Resource isolation with multiple tenants ++There are specific scenarios when delegating administration in a single tenant boundary doesn't meet your needs. This section describes requirements that may drive you to create a multi-tenant architecture. Multi-tenant organizations might span two or more Azure AD tenants, which can result in unique cross-tenant collaboration and management requirements. Multi-tenant architectures increase management overhead and complexity and should be used with caution. We recommend using a single tenant if your needs can be met with that architecture. For more detailed information, see [Multi-tenant user management](multi-tenant-user-management-introduction.md). ++A separate tenant creates a new boundary, and therefore decoupled management of Azure AD directory roles, directory objects, conditional access policies, Azure resource groups, Azure management groups, and other controls as described in previous sections. ++A separate tenant is useful for an organization's IT department to validate tenant-wide changes in Microsoft services such as Intune, Azure AD Connect, or a hybrid authentication configuration while protecting an organization's users and resources. This includes testing service configurations that might have tenant-wide effects and can't be scoped to a subset of users in the production tenant. ++Deploying a non-production environment in a separate tenant might be necessary during development of custom applications that can change the data of production user objects through Microsoft Graph or similar APIs (for example, applications that are granted Directory.ReadWrite.All or similarly broad scopes). ++>[!Note] +>Azure AD Connect can be configured to support synchronization to multiple tenants, which might be useful when deploying a non-production environment in a separate tenant. For more information, see [Azure AD Connect: Supported topologies](../hybrid/plan-connect-topologies.md). ++## Outcomes ++In addition to the outcomes achieved with a single tenant architecture as described previously, organizations can fully decouple the resource and tenant interactions: ++### Resource separation ++* **Visibility** - Resources in a separate tenant can't be discovered or enumerated by users and administrators in other tenants. Similarly, usage reports and audit logs are contained within the new tenant boundary. This separation of visibility allows organizations to manage resources needed for confidential projects. ++* **Object footprint** - Applications that write to Azure AD and/or other Microsoft Online services through Microsoft Graph or other management interfaces can operate in a separate object space. This enables development teams to perform tests during the software development lifecycle without affecting other tenants. ++* **Quotas** - Consumption of tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) is separated from that of the other tenants. 
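As a concrete illustration of working inside a separate object space, the following Azure CLI sketch signs in to a hypothetical sandbox tenant and creates a test user there. The tenant name, user principal name, and password are placeholders, not values from this article:

```azurecli
# Sign in to the sandbox tenant explicitly so changes can't land in the corporate tenant by mistake.
az login --tenant contososandbox.onmicrosoft.com

# Confirm the active tenant before making any changes.
az account show --query tenantId --output tsv

# Create a test user that exists only in the sandbox tenant.
az ad user create \
  --display-name "Test User 01" \
  --user-principal-name testuser01@contososandbox.onmicrosoft.com \
  --password "<temporary-password>"
```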
++### Configuration separation ++A new tenant provides a separate set of tenant-wide settings that can accommodate resources and trusting applications that have requirements that need different configurations at the tenant level. Additionally, a new tenant provides a new set of Microsoft Online services such as Office 365. ++### Administrative separation ++A new tenant boundary involves a separate set of Azure AD directory roles, which enables you to configure different sets of administrators. ++## Common usage ++The following diagram illustrates a common usage for resource isolation in multiple tenants: a pre-production or "sandbox" environment that requires more separation than can be achieved with delegated administration in a single tenant. ++ ![Diagram that shows common usage scenario.](media/secure-multiple-tenants/multiple-tenant-common-scenario.png) ++Contoso is an organization that augmented their corporate tenant architecture with a pre-production tenant called ContosoSandbox.com. The sandbox tenant is used to support ongoing development of enterprise solutions that write to Azure AD and Microsoft 365 using Microsoft Graph. These solutions are deployed in the corporate tenant. ++The sandbox tenant is brought online to prevent those applications under development from impacting production systems either directly or indirectly, by consuming tenant resources and affecting quotas, or throttling. ++Developers require access to the sandbox tenant during the development lifecycle, ideally with self-service access requiring additional permissions that are restricted in the production environment. Examples of these additional permissions might include creating, deleting, and updating user accounts, registering applications, provisioning and deprovisioning Azure resources, and changes to policies or overall configuration of the environment. ++In this example, Contoso uses [Azure AD B2B Collaboration](../external-identities/what-is-b2b.md) to provision users from the corporate tenant to enable users that can manage and access resources in applications in the sandbox tenant without managing multiple credentials. This capability is primarily oriented to cross-organization collaboration scenarios. However, enterprises with multiple tenants like Contoso can use this capability to avoid additional credential lifecycle administration and user experience complexities. ++Use [External Identities cross-tenant access](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) settings to manage how you collaborate with other Azure AD organizations through B2B collaboration. These settings determine both the level of inbound access users in external Azure AD organizations have to your resources, and the level of outbound access your users have to external organizations. They also let you trust multifactor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations. For details and planning considerations, see [Cross-tenant access in Azure AD External Identities](../external-identities/cross-tenant-access-overview.md). ++Another approach could have been to utilize the capabilities of Azure AD Connect to sync the same on-premises Azure AD credentials to multiple tenants, keeping the same password but differentiating on the users UPN domain. ++## Multi-tenant resource isolation ++With a new tenant, you have a separate set of administrators. 
Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions are managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Microsoft Intune. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business. ++This approach allows users to continue using their corporate credentials while achieving the benefits of separation. ++Azure AD B2B collaboration in sandbox tenants should be configured to allow only identities from the corporate environment to be onboarded using Azure B2B [allow/deny lists](../external-identities/allow-deny-list.md). For the tenants that you do want to allow for B2B collaboration, consider using External Identities cross-tenant access settings for cross-tenant multifactor authentication and device trust. ++>[!IMPORTANT] +>Multi-tenant architectures with external identity access enabled provide only resource isolation; they don't enable identity isolation. Resource isolation using Azure AD B2B collaboration and Azure Lighthouse doesn't mitigate risks related to identities. ++If the sandbox environment shares identities with the corporate environment, the following scenarios are applicable to the sandbox tenant: ++* A malicious actor that compromises a user, a device, or hybrid infrastructure in the corporate tenant, and is invited into the sandbox tenant, might gain access to the sandbox tenant's apps and resources. ++* An operational error (for example, user account deletion or credential revocation) in the corporate tenant might affect an invited user's access to the sandbox tenant. ++You must perform a risk analysis and potentially consider identity isolation through multiple tenants for business-critical resources that require a highly defensive approach. Azure Privileged Identity Management can help mitigate some of the risks by imposing extra security for accessing business-critical tenants and resources. ++### Directory objects ++The tenant you use to isolate resources may contain the same types of objects, Azure resources, and trusting applications as your primary tenant. You may need to provision the following object types: ++**Users and groups**: Identities needed by solution engineering teams, such as: ++* Sandbox environment administrators. ++* Technical owners of applications. ++* Line-of-business application developers. ++* Test end-user accounts. ++These identities might be provisioned for: ++* Employees who come with their corporate account through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). ++* Employees who need local accounts for administration, emergency administrative access, or other technical reasons. ++Customers who have or require non-production Active Directory on-premises can also synchronize their on-premises identities to the sandbox tenant if needed by the underlying resources and applications. 
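One way to onboard corporate identities into the sandbox tenant is the Microsoft Graph invitations API. The sketch below is a hypothetical example run while signed in to the sandbox tenant; the email address and redirect URL are placeholders:

```azurecli
# Invite a corporate user into the sandbox tenant as an Azure AD B2B guest.
az rest --method POST --url "https://graph.microsoft.com/v1.0/invitations" \
  --headers "Content-Type=application/json" \
  --body '{
    "invitedUserEmailAddress": "developer@contoso.com",
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "sendInvitationMessage": true
  }'
```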
++**Devices**: The non-production tenant contains only the devices that are needed in the solution engineering cycle: ++* Administration workstations ++* Non-production computers and mobile devices needed for development, testing, and documentation ++### Applications ++**Azure AD integrated applications**: Application objects and service principals for: ++* Test instances of the applications that are deployed in production (for example, applications that write to Azure AD and Microsoft online services). ++* Infrastructure services to manage and maintain the non-production tenant, potentially a subset of the solutions available in the corporate tenant. ++**Microsoft Online services**: ++* Typically, the team that owns the Microsoft Online Services in production should be the one owning the non-production instance of those services. ++* Administrators of non-production test environments shouldn't be provisioning Microsoft Online Services unless those services are specifically being tested. This avoids inappropriate use of Microsoft services, for example setting up production SharePoint sites in a test environment. ++* Similarly, provisioning of Microsoft Online services that can be initiated by end users (also known as ad-hoc subscriptions) should be locked down. For more information, see [What is self-service sign-up for Azure Active Directory?](../enterprise-users/directory-self-service-signup.md). ++* Generally, all non-essential license features should be disabled for the tenant using group-based licensing. This should be done by the same team that manages licenses in the production tenant, to avoid misconfiguration by developers who might not know the effect of enabling licensed features. ++### Azure resources ++Any Azure resources needed by trusting applications may also be deployed, for example, databases, virtual machines, containers, and Azure functions. For your sandbox environment, you must weigh the cost savings of using less-expensive SKUs for products and services against the reduced security features that those SKUs provide. ++The RBAC model for access control should still be employed in a non-production environment in case changes are replicated to production after tests have concluded. Failure to do so allows security flaws in the non-production environment to propagate to your production tenant. ++## Resource and identity isolation with multiple tenants ++### Isolation outcomes ++There are limited situations where resource isolation alone can't meet your requirements. You can isolate both resources and identities in a multi-tenant architecture by disabling all cross-tenant collaboration capabilities and effectively building a separate identity boundary. This approach is a defense against operational errors and compromise of user identities, devices, or hybrid infrastructure in corporate tenants. ++### Isolation common usage ++A separate identity boundary is typically used for business-critical applications and resources such as customer-facing services. In this scenario, Fabrikam has decided to create a separate tenant for their customer-facing SaaS product to avoid the risk of employee identity compromise affecting their SaaS customers. The following diagram illustrates this architecture: ++The FabrikamSaaS tenant contains the environments used for applications that are offered to customers as part of Fabrikam's business model. 
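Following the RBAC guidance above, access in a non-production or customer-facing tenant can be scoped at the resource-group level rather than at the whole subscription. A minimal Azure CLI sketch with hypothetical names:

```azurecli
# Create a resource group for a single workload in the isolated subscription.
az group create --name rg-sandbox-app1 --location eastus

# Grant a development group Contributor rights on that resource group only.
az role assignment create \
  --assignee "<group-object-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-sandbox-app1"
```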
++### Isolation of directory objects ++The directory objects in FabrikamSaas are as follows: ++Users and groups: Identities needed by solution IT teams, customer support staff, or other necessary personnel are created within the SaaS tenant. To preserve isolation, only local accounts are used, and Azure AD B2B collaboration isn't enabled. ++Azure AD B2C directory objects: If the tenant environments are accessed by customers, it may contain an Azure AD B2C tenant and its associated identity objects. Subscriptions that hold these directories are good candidates for an isolated consumer-facing environment. ++Devices: This tenant contains a reduced number of devices; only those that are needed to run customer-facing solutions: ++* Secure administration workstations. ++* Support personnel workstations (this can include engineers who are "on call" as described above). ++### Isolation of applications ++**Azure AD integrated applications**: Application objects and service principals for: ++* Production applications (for example, multi-tenant application definitions). ++* Infrastructure services to manage and maintain the customer-facing environment. ++**Azure Resources**: Hosts the IaaS, PaaS and SaaS resources of the customer-facing production instances. ++## Next steps ++* [Introduction to delegated administration and isolated environments](secure-introduction.md) ++* [Azure AD fundamentals](../fundamentals/secure-fundamentals.md) ++* [Azure resource management fundamentals](secure-resource-management.md) ++* [Resource isolation in a single tenant](secure-single-tenant.md) ++* [Best practices](secure-best-practices.md) |
active-directory | Secure Resource Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-resource-management.md | + + Title: Resource management fundamentals in Azure Active Directory +description: Introduction to resource management in Azure Active Directory. +++++++ Last updated : 3/23/2023++++++# Azure resource management fundamentals ++It's important to understand the structure and terms that are specific to Azure resources. The following image shows an example of the four levels of scope that are provided by Azure: ++![Diagram that shows Azure resource management model.](media/secure-resource-management/resource-management-terminology.png) ++## Terminology ++The following are some of the terms you should be familiar with: ++**Resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. ++**Resource group** - A container that holds related resources for an Azure solution such as a collection of virtual machines, associated VNets, and load balancers that require management by specific teams. The [resource group](../../azure-resource-manager/management/overview.md) includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. Resource groups can also be used to help with life-cycle management by deleting all resources that have the same lifespan at one time. This approach also provides a security benefit by leaving no fragments that might be exploited. ++**Subscription** - From an organizational hierarchy perspective, a subscription is a billing and management container of resources and resource groups. An Azure subscription has a trust relationship with Azure AD. A subscription trusts Azure AD to authenticate users, services, and devices. ++>[!Note] +>A subscription may trust only one Azure AD tenant. However, each tenant may trust multiple subscriptions and subscriptions can be moved between tenants. ++**Management group** - [Azure management groups](../../governance/management-groups/overview.md) provide a hierarchical method of applying policies and compliance at different scopes above subscriptions. It can be at the tenant root management group (highest scope) or at lower levels in the hierarchy. You organize subscriptions into containers called "management groups" and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Note that policy definitions can be applied to a management group or subscription. ++**Resource provider** - A service that supplies Azure resources. For example, a common [resource provider](../../azure-resource-manager/management/resource-providers-and-types.md) is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider. ++**Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, tenant, or management group. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../../azure-resource-manager/templates/overview.md). Additionally, the [Bicep language](../../azure-resource-manager/bicep/overview.md) can be used instead of JSON. 
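To illustrate the template-based deployment described above, the following Azure CLI sketch deploys a hypothetical Bicep file to a resource group; the file name and parameter are assumptions, not artifacts from this article:

```azurecli
# Create the target resource group.
az group create --name rg-example --location westeurope

# Deploy a Bicep (or JSON) template to that resource group; the same template can be redeployed repeatedly.
az deployment group create \
  --resource-group rg-example \
  --template-file main.bicep \
  --parameters environmentName=dev
```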
++## Azure Resource Management Model ++Each Azure subscription is associated with controls used by [Azure Resource Manager](../../azure-resource-manager/management/overview.md) (ARM). Resource Manager is the deployment and management service for Azure. It has a trust relationship with Azure AD for identity management for organizations, and with the Microsoft Account (MSA) for individuals. Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features like access control, locks, and tags to secure and organize your resources after deployment. ++>[!NOTE] +>Prior to ARM, there was another deployment model named Azure Service Manager (ASM) or "classic". To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md). Managing environments with the ASM model is out of scope of this content. ++Azure Resource Manager is the front-end service, which hosts the REST APIs used by PowerShell, the Azure portal, or other clients to manage resources. When a client makes a request to manage a specific resource, Resource Manager proxies the request to the resource provider to complete the request. For example, if a client makes a request to manage a virtual machine resource, Resource Manager proxies the request to the Microsoft.Compute resource provider. Resource Manager requires the client to specify an identifier for both the subscription and the resource group to manage the virtual machine resource. ++Before any resource management request can be executed by Resource Manager, a set of controls is checked. ++* **Valid user check** - The user requesting to manage the resource must have an account in the Azure AD tenant associated with the subscription of the managed resource. ++* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](../../role-based-access-control/overview.md). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. ++* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine. ++The following diagram summarizes the resource model we just described. ++![Diagram that shows Azure resource management with ARM and Azure AD.](media/secure-resource-management/resource-model.png) ++**Azure Lighthouse** - [Azure Lighthouse](../../lighthouse/overview.md) enables resource management across tenants. Organizations can delegate roles at the subscription or resource group level to identities in another tenant. ++Subscriptions that enable [delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md) with Azure Lighthouse have attributes that indicate the tenant IDs that can manage subscriptions or resource groups, and a mapping between built-in RBAC roles in the resource tenant and identities in the service provider tenant. At runtime, Azure Resource Manager consumes these attributes to authorize tokens coming from the service provider tenant. 
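As a sketch of the "Azure policy check" described above, the built-in "Allowed virtual machine size SKUs" definition can be assigned at a scope to restrict which VM sizes users may deploy. The lookup by display name and the parameter name are assumptions to verify in your environment:

```azurecli
# Find the built-in policy definition that restricts VM sizes.
definition=$(az policy definition list \
  --query "[?displayName=='Allowed virtual machine size SKUs'].name | [0]" --output tsv)

# Assign it to a resource group so only the listed SKUs can be deployed there.
az policy assignment create \
  --name "restrict-vm-skus" \
  --policy "$definition" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-example" \
  --params '{"listOfAllowedSKUs": {"value": ["Standard_B2s", "Standard_D2s_v3"]}}'
```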
++It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies. ++**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business. ++## Azure resource management with Azure AD ++Now that you have a better understanding of the resource management model in Azure, let's briefly examine some of the capabilities of Azure AD that can provide identity and access management for Azure resources. ++### Billing ++Billing is important to resource management because some billing roles interact with or can manage resources. Billing works differently depending on the type of agreement that you have with Microsoft. ++#### Azure Enterprise Agreements ++Azure Enterprise Agreement (Azure EA) customers are onboarded to the Azure EA Portal upon execution of their commercial contract with Microsoft. Upon onboarding, an identity is associated to a "root" Enterprise Administrator billing role. The portal provides a hierarchy of management functions: ++* Departments help you segment costs into logical groupings and enable you to set a budget or quota at the department level. ++* Accounts are used to further segment departments. You can use accounts to manage subscriptions and to access reports. +The EA portal can authorize Microsoft Accounts (MSA) or Azure AD accounts (identified in the portal as "Work or School Accounts"). Identities with the role of "Account Owner" in the EA portal can create Azure subscriptions. ++#### Enterprise billing and Azure AD tenants ++When an Account Owner creates an Azure subscription within an enterprise agreement, the identity and access management of the subscription is configured as follows: ++* The Azure subscription is associated with the same Azure AD tenant of the Account Owner. ++* The account owner who created the subscription will be assigned the Service Administrator and Account Administrator roles. (The Azure EA Portal assigns Azure Service Manager (ASM) or "classic" roles to manage subscriptions. To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md).) ++An enterprise agreement can be configured to support multiple tenants by setting the authentication type of "Work or school account cross-tenant" in the Azure EA Portal. Given the above, organizations can set multiple accounts for each tenant, and multiple subscriptions for each account, as shown in the diagram below. ++![Diagram that shows Enterprise Agreement billing structure.](media/secure-resource-management/billing-tenant-relationship.png) ++It's important to note that the default configuration described above grants the Azure EA Account Owner privileges to manage the resources in any subscriptions they created. For subscriptions holding production workloads, consider decoupling billing and resource management by changing the service administrator of the subscription right after creation. 
++ To further decouple and prevent the account owner from regaining service administrator access to the subscription, the subscription's tenant can be [changed](../fundamentals/active-directory-how-subscriptions-associated-directory.md) after creation. If the account owner doesn't have a user object in the Azure AD tenant the subscription is moved to, they can't regain the service owner role. ++To learn more, visit [Azure roles, Azure AD roles, and classic subscription administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md). ++### Microsoft Customer Agreement ++Customers enrolled with a [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) have a different billing management system with its own roles. ++A [billing account](../../cost-management-billing/manage/understand-mca-roles.md) for the Microsoft Customer Agreement contains one or more [billing profiles](../../cost-management-billing/manage/understand-mca-roles.md) that allow managing invoices and payment methods. Each billing profile contains one or more [invoice sections](../../cost-management-billing/manage/understand-mca-roles.md) to organize costs on the billing profile's invoice. ++In a Microsoft Customer Agreement, billing roles come from a single Azure AD tenant. To provision subscriptions for multiple tenants, the subscriptions must be initially created in the same Azure AD Tenant as the MCA, and then changed. In the diagram below, the subscriptions for the Corporate IT pre-production environment were moved to the ContosoSandbox tenant after creation. ++ ![Diagram that shows MCA billing structure.](media/secure-resource-management/microsoft-customer-agreement.png) ++## RBAC and role assignments in Azure ++In the Azure AD Fundamentals section, you learned Azure RBAC is the authorization system that provides fine-grained access management to Azure resources, and includes many [built-in roles](../../role-based-access-control/built-in-roles.md). You can create [custom roles](../../role-based-access-control/custom-roles.md), and assign roles at different scopes. Permissions are enforced by assigning RBAC roles to objects requesting access to Azure resources. ++Azure AD roles operate on concepts like [Azure role-based access control](../../role-based-access-control/overview.md). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC uses Azure Resource Management to control access to Azure resources such as virtual machines or storage, and Azure AD roles control access to Azure AD, applications, and Microsoft services such as Office 365. ++Both Azure AD roles and Azure RBAC roles integrate with Azure AD Privileged Identity Management to enable just-in-time activation policies such as approval workflow and MFA. ++## ABAC and role assignments in Azure ++[Attribute-based access control (ABAC)](../../role-based-access-control/conditions-overview.md) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure. ++Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. 
A role assignment condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You can't explicitly deny access to specific resources using conditions. ++## Conditional Access ++Azure AD [Conditional Access](../../role-based-access-control/conditional-access-azure-management.md) (CA) can be used to manage access to Azure management endpoints. CA policies can be applied to the Microsoft Azure Management cloud app to protect the Azure resource management endpoints such as: ++* Azure Resource Manager Provider (services) ++* Azure Resource Manager APIs ++* Azure PowerShell ++* Azure CLI ++* Azure portal ++![Diagram that shows the Conditional Access policy.](media/secure-resource-management/conditional-access.jpeg) ++For example, an administrator may configure a Conditional Access policy that allows a user to sign in to the Azure portal only from approved locations, and also requires either multifactor authentication (MFA) or a hybrid Azure AD joined device. ++## Azure Managed Identities ++A common challenge when building cloud applications is how to manage the credentials in your code for authenticating to cloud services. Keeping the credentials secure is an important task. Ideally, the credentials never appear on developer workstations and aren't checked into source control. [Managed identities for Azure resources](../managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication without any credentials in your code. ++There are two types of managed identities: ++* A system-assigned managed identity is enabled directly on an Azure resource. When the resource is enabled, Azure creates an identity for the resource in the associated subscription's trusted Azure AD tenant. After the identity is created, the credentials are provisioned onto the resource. The lifecycle of a system-assigned identity is directly tied to the Azure resource. If the resource is deleted, Azure automatically cleans up the credentials and the identity in Azure AD. ++* A user-assigned managed identity is created as a standalone Azure resource. Azure creates an identity in the Azure AD tenant that's trusted by the subscription with which the resource is associated. After the identity is created, the identity can be assigned to one or more Azure resources. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure resources to which it's assigned. ++Internally, managed identities are service principals of a special type that can only be used by specific Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed. Note that authorization of Graph API permissions can only be done through PowerShell, so not all features of managed identities are accessible via the portal UI. ++## Azure Active Directory Domain Services ++Azure Active Directory Domain Services (Azure AD DS) provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. 
Supported servers are moved from an on-premises AD DS forest and joined to an Azure AD DS managed domain and continue to use legacy protocols for authentication (for example, Kerberos authentication). ++## Azure AD B2C directories and Azure ++An Azure AD B2C tenant is linked to an Azure subscription for billing and communication purposes. Azure AD B2C tenants have a self-contained role structure in the directory, which is independent from the Azure RBAC privileged roles of the Azure subscription. ++When the Azure AD B2C tenant is initially provisioned, the user creating the B2C tenant must have contributor or owner permissions in the subscription. Upon creation, that user becomes the first Azure AD B2C tenant global administrator and they can later create other accounts and assign them to directory roles. ++It's important to note that the owners and contributors of the linked Azure AD subscription can remove the link between the subscription and the directory, which will affect the ongoing billing of the Azure AD B2C usage. ++## Identity considerations for IaaS solutions in Azure ++This scenario covers identity isolation requirements that organizations have for Infrastructure-as-a-Service (IaaS) workloads. ++There are three key options regarding isolation management of IaaS workloads: ++* Virtual machines joined to stand-alone Active Directory Domain Services (AD DS) ++* Azure Active Directory Domain Services (Azure AD DS) joined virtual machines ++* Sign-in to virtual machines in Azure using Azure AD authentication ++A key concept to address with the first two options is that there are two identity realms that are involved in these scenarios. ++* When you sign in to an Azure Windows Server VM via remote desktop protocol (RDP), you're generally logging on to the server using your domain credentials, which performs a Kerberos authentication against an on-premises AD DS domain controller or Azure AD DS. Alternatively, if the server isn't domain-joined then a local account can be used to sign in to the virtual machines. ++* When you sign in to the Azure portal to create or manage a VM, you're authenticating against Azure AD (potentially using the same credentials if you've synchronized the correct accounts), and this could result in an authentication against your domain controllers should you be using Active Directory Federation Services (AD FS) or PassThrough Authentication. ++### Virtual machines joined to standalone Active Directory Domain Services ++AD DS is the Windows Server based directory service that organizations have largely adopted for on-premises identity services. AD DS can be deployed when a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest. ++![Diagram that shows AD DS virtual machine management](media/secure-resource-management/vm-to-standalone-domain-controller.jpeg) ++The following considerations need to be made in this scenario: ++AD DS domain controllers: a minimum of two AD DS domain controllers must be deployed to ensure that authentication services are highly available and performant. For more information, see [AD DS Design and Planning](/windows-server/identity/ad-ds/plan/ad-ds-design-and-planning). 
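A minimal Azure CLI sketch of standing up the isolated network and the two domain-controller VMs described above. Resource names, the address space, and the image alias are placeholders; promoting the servers to domain controllers still happens inside the guest OS:

```azurecli
# Resource group and isolated virtual network for the new AD DS forest.
az group create --name rg-adds --location westeurope
az network vnet create --resource-group rg-adds --name vnet-adds \
  --address-prefix 10.10.0.0/16 --subnet-name snet-adds --subnet-prefixes 10.10.1.0/24

# Availability set so the two domain controllers don't share a fault domain.
az vm availability-set create --resource-group rg-adds --name avset-adds-dc

# First of the two domain controller VMs (repeat for the second, for example vm-adds-dc2).
az vm create --resource-group rg-adds --name vm-adds-dc1 \
  --image Win2022Datacenter --availability-set avset-adds-dc \
  --vnet-name vnet-adds --subnet snet-adds \
  --admin-username azureadmin --admin-password "<strong-password>"
```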
++**AD DS Design and Planning** - A new AD DS forest must be created with the following services configured correctly: ++* **AD DS Domain Name Services (DNS)** - AD DS DNS must be configured for the relevant zones within AD DS to ensure that name resolution operates correctly for servers and applications. ++* **AD DS Sites and Services** - These services must be configured to ensure that applications have low latency and performant access to domain controllers. The relevant virtual networks, subnets, and data center locations that servers are located in should be configured in sites and services. ++* **AD DS FSMOs** - The Flexible Single Master Operation (FSMO) roles that are required should be reviewed and assigned to the appropriate AD DS domain controllers. ++* **AD DS Domain Join** - All servers (excluding "jumpboxes") that require AD DS for authentication, configuration and management need to be joined to the isolated forest. ++* **AD DS Group Policy (GPO)** - AD DS GPOs must be configured to ensure that the configuration meets the security requirements, and that the configuration is standardized across the forest and domain-joined machines. ++* **AD DS Organizational Units (OU)** - AD DS OUs must be defined to ensure grouping of AD DS resources into logical management and configuration silos for purposes of administration and application of configuration. ++* **Role-based access control** - RBAC must be defined for administration and access to resources joined to this forest. This includes: ++ * **AD DS Groups** - Groups must be created to apply appropriate permissions for users to AD DS resources. ++ * **Administration accounts** - As mentioned at the start of this section there are two administration accounts required to manage this solution. ++ * An AD DS administration account with the least privileged access required to perform the administration required in AD DS and domain-joined servers. ++ * An Azure AD administration account for Azure portal access to connect, manage, and configure virtual machines, VNets, network security groups and other required Azure resources. ++ * **AD DS user accounts** - Relevant user accounts need to be provisioned and added to correct groups to allow user access to applications hosted by this solution. ++**Virtual networks (VNets)** - Configuration guidance ++* **AD DS domain controller IP address** - The domain controllers shouldn't be configured with static IP addresses within the operating system. The IP addresses should be reserved on the Azure VNet to ensure they always stay the same and DC should be configured to use DHCP. ++* **VNet DNS Server** - DNS servers must be configured on VNets that are part of this isolated solution to point to the domain controllers. This is required to ensure that applications and servers can resolve the required AD DS services or other services joined to the AD DS forest. ++* **Network security groups (NSGs)** - The domain controllers should be located on their own VNet or subnet with NSGs defined to only allow access to domain controllers from required servers (for example, domain-joined machines or jumpboxes). Jumpboxes should be added to an application security group (ASG) to simplify NSG creation and administration. ++**Challenges**: The list below highlights key challenges with using this option for identity isolation: ++* An additional AD DS Forest to administer, manage and monitor resulting in more work for the IT team to perform. 
++* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO) or System Center Configuration Manager (SCCM) to manage these servers. ++* Additional credentials for users to remember and use to access resources. ++>[!IMPORTANT] +>For this isolated model, it is assumed that there is no connectivity to or from the domain controllers from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which the AD DS domain controllers can be managed and administered. ++### Azure Active Directory Domain Services joined virtual machines ++When a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest, then an Azure AD Domain Services (Azure AD DS) managed domain can be deployed. Azure AD DS is a service that provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. This provides an isolated domain without the technical complexities of building and managing your own AD DS. The following considerations need to be made. ++![Diagram that shows Azure AD DS virtual machine management.](media/secure-resource-management/vm-to-domain-services.png) ++**Azure AD DS managed domain** - Only one Azure AD DS managed domain can be deployed per Azure AD tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Azure AD DS authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Azure AD DS joined servers are located and are linked to the hub using Azure network gateways or VNet peering. ++**Managed domain location** - A location must be set when deploying an Azure AD DS managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you: ++* Consider a location that is geographically closed to the servers and applications that require Azure AD DS services. ++* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../reliability/availability-zones-service-support.md). ++**Object provisioning** - Azure AD DS synchronizes identities from the Azure AD that is associated with the subscription that Azure AD DS is deployed into. It's also worth noting that if the associated Azure AD has synchronization set up with Azure AD Connect (user forest scenario) then the life cycle of these identities can also be reflected in Azure AD DS. This service has two modes that can be used for provisioning user and group objects from Azure AD. ++* **All**: All users and groups are synchronized from Azure AD into Azure AD DS. ++* **Scoped**: Only users in scope of a group(s) are synchronized from Azure AD into Azure AD DS. ++When you first deploy Azure AD DS, an automatic one-way synchronization is configured to replicate the objects from Azure AD. This one-way synchronization continues to run in the background to keep the Azure AD DS managed domain up to date with any changes from Azure AD. No synchronization occurs from Azure AD DS back to Azure AD. 
For more information, see [How objects and credentials are synchronized in an Azure AD Domain Services managed domain](../../active-directory-domain-services/synchronization.md). ++It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), then the Azure AD DS managed domain will need to be deleted, recreated and configured. In addition, organizations should consider the use of "scoped" provisioning to reduce the identities to only those that need access to Azure AD DS resources as a good practice. ++**Group Policy Objects (GPO)** - To configure GPO in an Azure AD DS managed domain you must use Group Policy Management tools on a server that has been domain joined to the Azure AD DS managed domain. For more information, see [Administer Group Policy in an Azure AD Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md). ++**Secure LDAP** - Azure AD DS provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default and to enable secure LDAP a certificate needs to be uploaded, in addition, the NSG that secures the VNet that Azure AD DS is deployed on to must allow port 636 connectivity to the Azure AD DS managed domains. For more information, see [Configure secure LDAP for an Azure Active Directory Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md). ++**Administration** - To perform administration duties on Azure AD DS (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Azure AD DC Administrators group. Accounts that are members of this group can't directly sign-in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Azure AD DS managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Azure Active Directory Domain Services](../../active-directory-domain-services/administration-concepts.md). ++**Password hashes** - For authentication with Azure AD DS to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. To ensure authentication with Azure AD DS works as expected, the following prerequisites need to be performed. ++* **Users synchronized with Azure AD Connect (from AD DS)** - The legacy password hashes need to be synchronized from on-premises AD DS to Azure AD. ++* **Users created in Azure AD** - Need to reset their password for the correct hashes to be generated for usage with Azure AD DS. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md). ++**Network** - Azure AD DS is deployed on to an Azure VNet so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Azure AD Domain Services](../../active-directory-domain-services/network-considerations.md). ++* Azure AD DS must be deployed in its own subnet: Don't use an existing subnet or a gateway subnet. ++* **A network security group (NSG)** - is created during the deployment of an Azure AD DS managed domain. This network security group contains the required rules for correct service communication. 
Don't create or use an existing network security group with your own custom rules. ++* **Azure AD DS requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Azure AD DS from maintaining two domain controllers. ++* **VNet DNS Server** - As previously discussed for the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Azure AD DS managed domain have the correct DNS settings to resolve the Azure AD DS managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address, and these DNS entries need to be the IP addresses of the Azure AD DS managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md). ++**Challenges** - The following list highlights key challenges with using this option for identity isolation. ++* Some Azure AD DS configuration can only be administered from an Azure AD DS joined server. ++* Only one Azure AD DS managed domain can be deployed per Azure AD tenant. As we describe in this section, the hub and spoke model is recommended to provide Azure AD DS authentication to services on other VNets. ++* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO), or System Center Configuration Manager (SCCM) to manage these servers. ++For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the Azure AD DS managed domain from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which Azure AD DS can be managed and administered. ++### Sign in to virtual machines in Azure using Azure Active Directory authentication ++When a requirement exists to deploy IaaS workloads to Azure that require identity isolation, the final option is to use Azure AD for sign-in to the servers in this scenario. This provides the ability to make Azure AD the identity realm for authentication purposes, and identity isolation can be achieved by provisioning the servers into the relevant subscription, which is linked to the required Azure AD tenant. The following considerations need to be made. ++![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-resource-management/sign-into-vm.png) ++**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md). ++**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine. ++>[!NOTE] +>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. 
Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers. ++**Network Requirements**: These virtual machines need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md) for more information. ++**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure portal or via Azure Cloud Shell. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md). ++* **Virtual machine administrator logon**: Users with this role assigned to them can log into an Azure virtual machine with administrator privileges. ++* **Virtual machine user logon**: Users with this role assigned to them can log into an Azure virtual machine with regular user privileges. ++**Conditional Access**: A key benefit of using Azure AD for signing into Azure virtual machines is the ability to enforce Conditional Access as part of the sign-in process. Organizations can require conditions to be met before allowing access to the virtual machine, and can use multifactor authentication to provide strong authentication. For more information, see [Using Conditional Access](../devices/howto-vm-sign-in-azure-ad-windows.md). ++>[!NOTE] +>Remote connection to virtual machines joined to Azure AD is only allowed from Windows 10, Windows 11, and Cloud PCs that are Azure AD joined or hybrid Azure AD joined to the same directory as the virtual machine. ++**Challenges**: The list below highlights key challenges with using this option for identity isolation. ++* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](../../automation/update-management/overview.md) to manage patching and updates of these servers. ++* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms such as Windows Integrated Authentication across these servers or services. If this is a requirement for the organization, then it's recommended that you explore the standalone Active Directory Domain Services or the Azure Active Directory Domain Services scenarios described in this section. ++For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the virtual machines from the customer's corporate network. A jumpbox or management server should be created to allow a point from which these servers can be managed and administered. ++## Next steps ++* [Introduction to delegated administration and isolated environments](secure-introduction.md) ++* [Azure AD fundamentals](../fundamentals/secure-fundamentals.md) ++* [Resource isolation in a single tenant](secure-single-tenant.md) ++* [Resource isolation with multiple tenants](secure-multiple-tenants.md) ++* [Best practices](secure-best-practices.md) |
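To illustrate the Azure AD sign-in option described in the preceding section, the following Azure CLI sketch enables Azure AD login on an existing Linux VM and grants a user the built-in sign-in role. Resource names and the user are hypothetical:

```azurecli
# Install the Azure AD login extension on an existing Linux VM.
az vm extension set \
  --publisher Microsoft.Azure.ActiveDirectory \
  --name AADSSHLoginForLinux \
  --resource-group rg-example --vm-name vm-linux01

# Allow a specific user to sign in with standard user privileges.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine User Login" \
  --scope $(az vm show --resource-group rg-example --name vm-linux01 --query id --output tsv)

# Connect using Azure AD credentials (requires the 'ssh' CLI extension).
az ssh vm -g rg-example -n vm-linux01
```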
active-directory | Secure Service Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-service-accounts.md | + + Title: Introduction to securing Azure Active Directory service accounts +description: Explanation of the types of service accounts available in Azure Active Directory. +++++++ Last updated : 08/26/2022++++++# Securing cloud-based service accounts ++There are three types of service accounts native to Azure Active Directory: managed identities, service principals, and user-based service accounts. Service accounts are a special type of account that is intended to represent a non-human entity such as an application, API, or other service. These entities operate within the security context provided by the service account. ++## Types of Azure Active Directory service accounts ++For services hosted in Azure, we recommend using a managed identity if possible, and a service principal if not. Managed identities can't be used for services hosted outside of Azure. In that case, we recommend a service principal. If you can use a managed identity or a service principal, do so. We recommend that you not use an Azure Active Directory user account as a service account. See the following table for a summary. ++| Service hosting| Managed identity| Service principal| Azure user account | +| - | - | - | - | +| Service is hosted in Azure.| Yes. <br>Recommended if the service <br>supports a managed identity.| Yes.| Not recommended. | +| Service is not hosted in Azure.| No.| Yes. Recommended.| Not recommended. | +| Service is multi-tenant.| No.| Yes. Recommended.| No. | ++## Managed identities ++Managed identities are secure Azure Active Directory (Azure AD) identities created to provide identities for Azure resources. There are [two types of managed identities](../managed-identities-azure-resources/overview.md#managed-identity-types): + +* System-assigned managed identities can be assigned directly to an instance of a service. ++* User-assigned managed identities can be created as a standalone resource. ++For more information, see [Securing managed identities](service-accounts-managed-identities.md). For general information about managed identities, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) ++## Service principals ++If you can't use a managed identity to represent your application, use a service principal. Service principals can be used with both single tenant and multi-tenant applications. ++A service principal is the local representation of an application object in a single Azure AD tenant. It functions as the identity of the application instance, and defines who can access the application and what resources the application can access. A service principal is created in (local to) each tenant where the application is used and references the globally unique application object. The tenant secures the service principal's sign-in and access to resources. ++There are two mechanisms for authentication using service principals: client certificates and client secrets. Certificates are more secure: use client certificates if possible. Unlike client secrets, client certificates can't accidentally be embedded in code. ++For information on securing service principals, see [Securing service principals](service-accounts-principal.md). 
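A brief Azure CLI sketch of the two recommended options: a user-assigned managed identity for a service running in Azure, and a certificate-based service principal for a service hosted elsewhere. Names and scopes are placeholders:

```azurecli
# Option 1: user-assigned managed identity for an Azure-hosted service.
az identity create --resource-group rg-example --name id-app1

# Option 2: service principal with a certificate credential (preferred over a client secret)
# for a workload that runs outside Azure.
az ad sp create-for-rbac \
  --name "app1-automation" \
  --create-cert \
  --role "Reader" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-example"
```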
+ +## Next steps ++For more information on securing Azure service accounts, see: ++[Securing managed identities](service-accounts-managed-identities.md) ++[Securing service principals](service-accounts-principal.md) ++[Governing Azure service accounts](govern-service-accounts.md) |
active-directory | Secure Single Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-single-tenant.md | + + Title: Resource isolation in a single tenant to secure with Azure Active Directory +description: Introduction to resource isolation in a single tenant in Azure Active Directory. +++++++ Last updated : 7/5/2022+++++++# Resource isolation in a single tenant ++Many separation scenarios can be achieved within a single tenant. If possible, we recommend that you delegate administration to separate environments within a single tenant to provide the best productivity and collaboration experience for your organization. ++## Outcomes ++**Resource separation** - With Azure AD directory roles, security groups, conditional access policies, Azure resource groups, Azure management groups, administrative units (AUs), and other controls, you can restrict resource access to specific users, groups, and service principals. Resources can be managed by separate administrators, and have separate users, permissions, and access requirements. ++If a set of resources requires unique tenant-wide settings, or there's minimal risk tolerance for unauthorized access by tenant members, or critical impact could be caused by configuration changes, you must achieve isolation in multiple tenants. ++**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](../conditional-access/location-condition.md#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources. ++If a set of resources requires unique tenant-wide settings, or the tenant's settings must be administered by a different entity, you must achieve isolation with multiple tenants. ++**Administrative separation** - With Azure AD delegated administration, you can segregate the administration of resources such as applications and APIs, users and groups, resource groups, and conditional access policies. ++Global administrators can discover and obtain full access to any trusting resources. You can set up auditing and alerts to know when an administrator changes a resource if they're authenticated. ++You can also use administrative units (AU) in Azure AD to provide some level of administrative separation. Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support. ++![Diagram that shows administrative units.](media/secure-single-tenant/administrative-units.png) ++Administrative units can be used to separate [user, group, and device objects](../roles/administrative-units.md). Membership in those units can be managed by [dynamic membership rules](../roles/admin-units-members-dynamic.md). ++By using Privileged Identity Management (PIM), you can define who in your organization is the best person to approve requests for highly privileged roles, for example, admins requiring Global Administrator access to make tenant-wide changes. ++>[!NOTE] +>Using PIM requires an Azure AD P2 license per user.
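To make the administrative unit delegation described above more concrete, here's a hedged Microsoft Graph PowerShell sketch. The AU name and UPN are placeholders, and the scoped-role cmdlet and its parameter names in step 3 are assumptions for illustration; verify them against the current Graph PowerShell SDK before use.

```powershell
# Hedged sketch, assuming the Microsoft Graph PowerShell SDK.
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All", "RoleManagement.ReadWrite.Directory", "User.Read.All"

# 1. Create an administrative unit for one region (name is a placeholder).
$au = New-MgDirectoryAdministrativeUnit -DisplayName "EMEA Support" `
          -Description "Scoped helpdesk administration"

# 2. Look up the Helpdesk Administrator directory role (must already be
#    activated in the tenant).
$role = Get-MgDirectoryRole -Filter "displayName eq 'Helpdesk Administrator'"

# 3. Scope the role to the AU for one support specialist.
#    Cmdlet and parameter names below are an assumption; the underlying call
#    is POST /directory/administrativeUnits/{id}/scopedRoleMembers.
$specialist = Get-MgUser -UserId "emea.support@contoso.com"
New-MgDirectoryAdministrativeUnitScopedRoleMember -AdministrativeUnitId $au.Id `
    -RoleId $role.Id -RoleMemberInfo @{ Id = $specialist.Id }
```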
++If you must ensure that global administrators are unable to manage a specific resource, you must isolate that resource in a separate tenant with separate global administrators. This can be especially important for backups; see the [multi-user authorization guidance](../../backup/multi-user-authorization.md) for examples. ++## Common usage ++One of the most common uses for multiple environments in a single tenant is to segregate production from nonproduction resources. Within a single tenant, development teams and application owners can create and manage a separate environment with test apps, test users and groups, and test policies for those objects; similarly, they can create nonproduction instances of Azure resources and trusted apps. ++The following diagram illustrates the nonproduction environments and the production environment. ++![Diagram that shows Azure AD tenant boundary.](media/secure-single-tenant/tenant-boundary.png) ++In this diagram, there are nonproduction Azure resources and nonproduction instances of Azure AD integrated applications with equivalent nonproduction directory objects. In this example, the nonproduction resources in the directory are used for testing purposes. ++>[!NOTE] +>You cannot have more than one Microsoft 365 environment in a single Azure AD tenant. However, you can have multiple Dynamics 365 environments in a single Azure AD tenant. ++Another scenario for isolation within a single tenant could be separation between locations or subsidiaries, or implementation of tiered administration (according to the "[Enterprise Access Model](/security/compass/privileged-access-access-model)"). ++Azure RBAC role assignments allow scoped administration of Azure resources. Similarly, Azure AD allows granular management of Azure AD trusting applications through multiple capabilities such as conditional access, user and group filtering, administrative unit assignments, and application assignments. ++If you must ensure full isolation (including staging of organization-level configuration) of Microsoft 365 services, you need to choose [multiple tenant isolation](../../backup/multi-user-authorization.md). ++## Scoped management in a single tenant ++### Scoped management for Azure resources ++Azure RBAC allows you to design an administration model with granular scopes and surface area. Consider the management hierarchy in the following example: ++>[!NOTE] +>There are multiple ways to define the management hierarchy based on an organization's individual requirements, constraints, and goals. For more information, consult the Cloud Adoption Framework guidance on how to [Organize Azure Resources](/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources). ++![Diagram that shows resource isolation in a single tenant.](media/secure-single-tenant/resource-hierarchy.png) ++* **Management group** - You can assign roles to specific management groups so that they don't impact any other management groups. In the scenario above, the HR team can define an Azure Policy to audit the regions where resources are deployed across all HR subscriptions. ++* **Subscription** - You can assign roles to a specific subscription to prevent it from impacting any other subscriptions. In the example above, the HR team can assign the Reader role for the Benefits subscription, without reading any other HR subscription, or a subscription from any other team. ++* **Resource group** - You can assign roles to specific resource groups so that they don't impact any other resource groups.
In the example above, the Benefits engineering team can assign the Contributor role to the test lead so they can manage the test DB and the test web app, or add more resources. ++* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Azure Cosmos DB database, without interfering with the test web app or any production resource. ++For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md). ++This is a hierarchical structure, so the higher up in the hierarchy, the more scope, visibility, and impact there is on lower levels. Top-level scopes affect all Azure resources in the Azure AD tenant boundary. This also means that permissions can be applied at multiple levels. The risk this introduces is that assigning roles higher up the hierarchy could provide more access lower down the scope than intended. [Microsoft Entra](https://www.microsoft.com/security/business/identity-access/microsoft-entra-permissions-management) (formerly CloudKnox) is a Microsoft product that provides visibility and remediation to help reduce the risk. A few details are as follows: ++* The root management group defines Azure Policies and RBAC role assignments that will be applied to all subscriptions and resources. ++* Global Administrators can [elevate access](https://aka.ms/AzureADSecuredAzure/12a) to all subscriptions and management groups. ++Both top-level scopes should be strictly monitored. It's important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](../../security/fundamentals/network-best-practices.md). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy. ++Consider isolating sensitive or test resources according to the [Azure landing zone conceptual architecture](/azure/cloud-adoption-framework/ready/landing-zone/). For example, the Identity subscription should be assigned to a separate management group, and all subscriptions used for development could be separated into a "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach). ++### Scoped management for Azure AD trusting applications ++The pattern to scope management of Azure AD trusting applications is outlined in the following section. ++Azure AD supports configuring multiple instances of custom and SaaS apps, but not most Microsoft services, against the same directory with [independent user assignments](../manage-apps/assign-user-or-group-access-portal.md). The above example contains both a production and a test version of the travel app. You can deploy preproduction versions against the corporate tenant to achieve app-specific configuration and policy separation that enables workload owners to perform testing with their corporate credentials.
Nonproduction directory objects such as test users and test groups are associated to the nonproduction application with separate [ownership](https://aka.ms/AzureADSecuredAzure/14a) of those objects. ++There are tenant-wide aspects that affect all trusting applications in the Azure AD tenant boundary, including: ++* Global Administrators can manage all tenant-wide settings. ++* Other [directory roles](https://aka.ms/AzureADSecuredAzure/14b) such as User Administrator, Administrator, and Conditional Access Administrators can manage tenant-wide configuration within the scope of the role. ++Configuration settings such as allowed authentication methods, hybrid configurations, B2B collaboration allow-listing of domains, and named locations are tenant wide. ++>[!Note] +>Microsoft Graph API permissions and consent permissions cannot be scoped to a group or to members of Administrative Units. Those permissions are assigned at the directory level; only resource-specific consent allows scoping at the resource level (currently limited to [Microsoft Teams Chat permissions](/microsoftteams/platform/graph-api/rsc/resource-specific-consent)). ++>[!IMPORTANT] +>The lifecycle of Microsoft SaaS services such as Office 365, Microsoft Dynamics, and Microsoft Exchange is bound to the Azure AD tenant. As a result, multiple instances of these services necessarily require multiple Azure AD tenants. Check the documentation for individual services to learn more about specific management scoping capabilities. ++## Next steps ++* [Introduction to delegated administration and isolated environments](secure-introduction.md) ++* [Azure AD fundamentals](../fundamentals/secure-fundamentals.md) ++* [Azure resource management fundamentals](secure-resource-management.md) ++* [Resource isolation with multiple tenants](secure-multiple-tenants.md) ++* [Best practices](secure-best-practices.md) |
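Tying back to the scoped management example earlier in this article (management group, subscription, resource group, and individual resource scopes), a hedged Azure PowerShell sketch of such assignments might look like the following. The subscription ID, resource names, and UPNs are placeholders, not values from the article.

```powershell
Connect-AzAccount

# Subscription scope: Reader on the Benefits subscription only.
New-AzRoleAssignment -SignInName "hr.reader@contoso.com" `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"

# Resource group scope: Contributor for the test lead on the test resource group.
New-AzRoleAssignment -SignInName "test.lead@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "rg-benefits-test"

# Individual resource scope: data analyst can read only the test Cosmos DB account.
$cosmos = Get-AzResource -ResourceGroupName "rg-benefits-test" -Name "cosmos-benefits-test"
New-AzRoleAssignment -SignInName "data.analyst@contoso.com" `
    -RoleDefinitionName "Cosmos DB Account Reader Role" `
    -Scope $cosmos.ResourceId
```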
active-directory | Security Operations Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-applications.md | + + Title: Azure Active Directory security operations for applications +description: Learn how to monitor and alert on applications to identify security threats. +++++++ Last updated : 09/06/2022++++++# Azure Active Directory security operations guide for applications ++Applications have an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Because applications often run without human intervention, the attacks may be harder to detect. ++This article provides guidance to monitor and alert on application events. It's regularly updated to help ensure you: ++* Prevent malicious applications from getting unwarranted access to data ++* Prevent applications from being compromised by bad actors ++* Gather insights that enable you to build and configure new applications more securely ++If you're unfamiliar with how applications work in Azure Active Directory (Azure AD), see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md). ++> [!NOTE] +> If you have not yet reviewed the [Azure Active Directory security operations overview](security-operations-introduction.md), consider doing so now. ++## What to look for ++As you monitor your application logs for security incidents, review the following list to help differentiate normal activity from malicious activity. The following events might indicate security concerns. Each is covered in the article. ++* Any changes occurring outside normal business processes and schedules ++* Application credentials changes ++* Application permissions ++ * Service principal assigned to an Azure AD or an Azure role-based access control (RBAC) role ++ * Applications granted highly privileged permissions ++ * Azure Key Vault changes ++ * End user granting applications consent ++ * Stopped end-user consent based on level of risk ++* Application configuration changes ++ * Universal resource identifier (URI) changed or non-standard ++ * Changes to application owners ++ * Log-out URLs modified ++## Where to look ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault logs](../../key-vault/general/logging.md) ++From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools, which allow more automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. 
Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)** - automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. ++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM** - [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - detects risk on workload identities across sign-in behavior and offline indicators of compromise. ++Much of what you monitor and alert on is the effect of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. Use the workbook to view a summary, and identify the effects over a time period. You can use the workbook to investigate the sign-ins of a specific user. ++The remainder of this article describes what we recommend you monitor and alert on. It's organized by the type of threat. Where there are pre-built solutions, we link to them or provide samples after the table. Otherwise, you can build alerts using the preceding tools. ++## Application credentials ++Many applications use credentials to authenticate in Azure AD. Any other credentials added outside expected processes could be a malicious actor using those credentials. We recommend using X.509 certificates issued by trusted authorities or Managed Identities instead of client secrets. However, if you need to use client secrets, follow good hygiene practices to keep applications safe. Note that application and service principal updates are logged as two entries in the audit log. ++* Monitor applications to identify long credential expiration times. ++* Replace long-lived credentials with short-lived ones. Ensure credentials don't get committed in code repositories, and are stored securely.
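Before the monitoring table that follows (which links to a separate sample script), here's a hedged Microsoft Graph PowerShell sketch of one way to flag long-lived credentials; the 180-day threshold is an assumed policy value, not one from this article, and this is not the linked appCredAge sample.

```powershell
# Hedged sketch: list app credentials whose lifetime exceeds an assumed 180-day policy.
Connect-MgGraph -Scopes "Application.Read.All"

$maxLifetime = New-TimeSpan -Days 180

Get-MgApplication -All | ForEach-Object {
    $app = $_
    @($app.PasswordCredentials) + @($app.KeyCredentials) |
        Where-Object { $_ -and $_.StartDateTime -and $_.EndDateTime } |
        ForEach-Object {
            $lifetime = $_.EndDateTime - $_.StartDateTime
            if ($lifetime -gt $maxLifetime) {
                [pscustomobject]@{
                    App          = $app.DisplayName
                    KeyId        = $_.KeyId
                    Expires      = $_.EndDateTime
                    LifetimeDays = [int]$lifetime.TotalDays
                }
            }
        }
}
```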
++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| -|-|-|-|-| +| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| Start and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate longer-than-allowed lifetimes. See PowerShell script following this table. | ++ The following pre-built monitoring and alerts are available: ++* Microsoft Sentinel - [Alert when new app or service principal credentials added](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml) ++* Azure Monitor - [Azure AD workbook to help you assess Solorigate risk - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718) ++* Defender for Cloud Apps - [Defender for Cloud Apps anomaly detection alerts investigation guide](/cloud-app-security/investigate-anomaly-alerts) ++* PowerShell - [Sample PowerShell script to find credential lifetime](https://github.com/madansr7/appCredAge). ++## Application permissions ++Like an administrator account, applications can be assigned privileged roles. Apps can be assigned Azure AD roles, such as Global Administrator, or Azure RBAC roles such as Subscription Owner. Because they can run without a user, and as a background service, closely monitor when an application is granted a highly privileged role or permission. ++### Service principal assigned to a role ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++### Application granted highly privileged permissions ++Applications should follow the principle of least privilege. Investigate application permissions to ensure they're needed. You can create an [app consent grant report](https://aka.ms/getazureadpermissions) to help identify applications and highlight privileged permissions.
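As a complement to the consent grant report and the table below, a hedged Microsoft Graph PowerShell sketch like the following can enumerate application permissions (app roles) granted on the Microsoft Graph API so that broad grants stand out; the wildcard patterns at the end are illustrative, not an authoritative list of high-privilege permissions.

```powershell
# Hedged sketch: list app roles granted on Microsoft Graph and flag broad ones.
Connect-MgGraph -Scopes "Application.Read.All"

# Well-known application ID of the Microsoft Graph service principal.
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"

Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $graphSp.Id -All |
    ForEach-Object {
        $assignment = $_
        $role = $graphSp.AppRoles | Where-Object { $_.Id -eq $assignment.AppRoleId }
        [pscustomobject]@{
            App        = $assignment.PrincipalDisplayName
            Permission = $role.Value
        }
    } |
    Where-Object { $_.Permission -like "*.ReadWrite.All" -or $_.Permission -like "Mail.*" }
```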
++| What to monitor|Risk Level|Where| Filter/sub-filter| Notes| +|-|-|-|-|-| +| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/MailPermissionsAddedToApplication.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD.
|High| Azure AD Audit logs| ΓÇ£Add delegated permission grantΓÇ¥ <br>-or-<br>ΓÇ£Add app role assignment to service principalΓÇ¥, <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| ΓÇ£Add app role assignment to service principalΓÇ¥, <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| ΓÇ£Add delegated permission grantΓÇ¥, where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is ΓÇ£AllPrincipalsΓÇ¥.| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/cloud-app-security/investigate-risky-oauth). ++### Azure Key Vault ++Use Azure Key Vault to store your tenantΓÇÖs secrets. We recommend you pay attention to any changes to Key Vault configuration and activities. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli). See [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](../../key-vault/general/event-grid-overview.md) notification, if health is affected. In addition, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. 
[Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under ΓÇ£MonitoringΓÇ¥ selecting ΓÇ£LogsΓÇ¥. ++### End-user consent ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high profile or highly privileged accounts, app requests high-risk permissions, apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++The act of consenting to an application isn't malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md). ++For more information on consent operations, see the following resources: ++* [Managing consent to applications and evaluating consent requests in Azure Active Directory](../manage-apps/manage-consent-requests.md) ++* [Detect and Remediate Illicit Consent Grants - Office 365](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants) ++* [Incident response playbook - App consent grant investigation](/security/compass/incident-response-playbook-app-consent) ++### End user stopped due to risk-based consent ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for: high profile or highly privileged accounts, app requests high-risk permissions, or apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/End-userconsentstoppedduetorisk-basedconsent.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++## Application authentication flows ++There are several flows in the OAuth 2.0 protocol. The recommended flow for an application depends on the type of application being built. In some cases, there's a choice of flows available to the application. For this case, some authentication flows are recommended over others. Specifically, avoid resource owner password credentials (ROPC) because these require the user to expose their current password credentials to the application. The application then uses the credentials to authenticate the user against the identity provider. Most applications should use the auth code flow, or auth code flow with Proof Key for Code Exchange (PKCE), because this flow is recommended. ++The only scenario where ROPC is suggested is for automated application testing. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details. ++Device code flow is another OAuth 2.0 protocol flow for input-constrained devices and isn't used in all environments. 
When device code flow appears in the environment and isn't used in an input-constrained device scenario, more investigation is warranted; it could indicate a misconfigured application or potentially something malicious. ++Monitor application authentication using the following information: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow. This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +|Applications using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input-constrained devices, which may not be in all environments. If successful device code flows appear, without a need for them, investigate for validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| ++## Application configuration changes ++Monitor changes to application configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and log-out URL. ++### Dangling URI and Redirect URI changes ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++Alert when these changes are detected.
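In addition to the Sentinel templates referenced in the table, an ad-hoc review is possible with Microsoft Graph PowerShell. The following hedged sketch pulls recent `Update application` audit events so that AppAddress and redirect URI changes can be inspected; the activity display-name string and the seven-day window are assumptions to verify against your tenant's audit logs.

```powershell
# Hedged sketch: pull recent "Update application" audit events for review.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$since = (Get-Date).AddDays(-7).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")

Get-MgAuditLogDirectoryAudit -All `
    -Filter "activityDisplayName eq 'Update application' and activityDateTime ge $since" |
    Select-Object ActivityDateTime,
                  @{ n = 'Actor';  e = { $_.InitiatedBy.User.UserPrincipalName } },
                  @{ n = 'Target'; e = { $_.TargetResources[0].DisplayName } }
```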
++### AppID URI added, modified, or removed ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationIDURIChanged.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++Alert when these changes are detected outside approved change management procedures. ++### New owner ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationOwnership.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++### Log-out URL modified or removed ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +|-|-|-|-|-| +| Changes to log-out URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal| Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationLogoutURL.yaml) <br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| ++## Resources ++* GitHub Azure AD toolkit - [https://github.com/microsoft/AzureADToolkit](https://github.com/microsoft/AzureADToolkit) ++* Azure Key Vault security overview and security guidance - [Azure Key Vault security overview](../../key-vault/general/security-features.md) ++* Solorigate risk information and tools - [Azure AD workbook to help you assess Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718) ++* OAuth attack detection guidance - [Unusual addition of credentials to an OAuth app](/cloud-app-security/investigate-anomaly-alerts) ++* Azure AD monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md) ++## Next steps ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for devices](security-operations-devices.md) ++[Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations Consumer Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-consumer-accounts.md | + + Title: Azure Active Directory security operations for consumer accounts +description: Guidance to establish baselines and how to monitor and alert on potential security issues with consumer accounts. +++++++ Last updated : 02/28/2023++++++# Azure Active Directory security operations for consumer accounts ++Consumer identity activities are an important area for your organization to protect and monitor. This article is for Azure Active Directory B2C (Azure AD B2C) tenants and has guidance for monitoring consumer account activities. The activities are: ++* Consumer account +* Privileged account +* Application +* Infrastructure ++## Before you begin ++Before using the guidance in this article, we recommend you read, [Azure AD security operations guide](security-operations-introduction.md). ++## Define a baseline ++To discover anomalous behavior, define normal and expected behavior. Defining expected behavior for your organization helps you discover unexpected behavior. Use the definition to help reduce false positives, during monitoring and alerting. ++With expected behavior defined, perform baseline monitoring to validate expectations. Then, monitor logs for what falls outside tolerance. ++For accounts created outside normal processes, use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources. The following suggestions can help you define normal. ++### Consumer account creation ++Evaluate the following list: ++* Strategy and principles for tools and processes to create and manage consumer accounts + * For example, standard attributes and formats applied to consumer account attributes +* Approved sources for account creation. + * For example, onboarding custom policies, customer provisioning or migration tool +* Alert strategy for accounts created outside approved sources. + * Create a controlled list of organizations your organization collaborates with +* Strategy and alert parameters for accounts created, modified, or disabled by an unapproved consumer account administrator +* Monitoring and alert strategy for consumer accounts missing standard attributes, such as customer number, or not following organizational naming conventions +* Strategy, principles, and process for account deletion and retention ++## Where to look ++Use log files to investigate and monitor. See the following articles for more: ++* [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md) +* [Sign-in logs in Azure AD (preview)](../reports-monitoring/concept-all-sign-ins.md) +* [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md) ++### Audit logs and automation tools ++From the Azure portal, you can view Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. Use the Azure portal to integrate Azure AD logs with other tools to automate monitoring and alerting: ++* **Microsoft Sentinel** ΓÇô security analytics with security information and event management (SIEM) capabilities + * [What is Microsoft Sentinel?](../../sentinel/overview.md) +* **Sigma rules** - an open standard for writing rules and templates that automated management tools can use to parse log files. If there are Sigma templates for our recommended search criteria, we added a link to the Sigma repo. 
Microsoft doesn't write, test, or manage Sigma templates. The repo and templates are created, and collected, by the IT security community. + * [SigmaHQ/sigma](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) +* **Azure Monitor** - automated monitoring and alerting of various conditions. Create or use workbooks to combine data from different sources. + * [Azure Monitor overview](../../azure-monitor/overview.md) +* **Azure Event Hubs integrated with a SIEM** - integrate Azure AD logs with SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic with Azure Event Hubs + * [Azure Event Hubs-A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md) + * [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) +* **Microsoft Defender for Cloud Apps** - discover and manage apps, govern across apps and resources, and confirm cloud app compliance + * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps) +* **Identity Protection** - detect risk on workload identities across sign-in behavior and offline indicators of compromise + * [Securing workload identities with Identity Protection](../identity-protection/concept-workload-identity-risk.md) ++Use the remainder of the article for recommendations on what to monitor and alert on. Refer to the tables, organized by threat type. See links to pre-built solutions or samples following the table. Build alerts using the previously mentioned tools. ++## Consumer accounts ++| What to monitor | Risk level | Where | Filter / subfilter | Notes | +| - | - | - | - | - | +| Large number of account creations or deletions | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) = CPIM Service<br>-and-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) = CPIM Service | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. | +| Accounts created and deleted by non-approved users or processes| Medium | Azure AD Audit logs | Initiated by (actor) - USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>Initiated by (actor) != CPIM Service<br>and-or<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) != CPIM Service | If the actors are non-approved users, configure to send an alert. | +| Accounts assigned to a privileged role| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) == CPIM Service<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation. | +| Failed sign-in attempts| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.
| +| Smart lock-out events| Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application =="ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts. | +| Failed authentications from countries or regions you don't operate from| Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to provided city names. | +| Increased failed authentications of any type | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if failures increase by 10%, or greater. | +| Account disabled/blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This scenario could indicate someone trying to gain access to an account after they left an organization. The account is blocked, but it's important to log and alert this activity. | +| Measurable increase of successful sign-ins | Low | Azure AD Sign-ins log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if successful authentications increase by 10%, or greater. | ++## Privileged accounts ++| What to monitor | Risk level | Where | Filter / subfilter | Notes | +| - | - | - | - | - | +| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and monitor and adjust to suit your organizational behaviors. Limit false alerts. | +| Failure because of Conditional Access requirement | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker is trying to get into the account. | +| Interrupt | High, medium | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker has the account password, but can't pass the MFA challenge. | +| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, then monitor and adjust to suit your organizational behaviors. Limit false alerts. | +| Account disabled or blocked for sign-ins | low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | The event could indicate someone trying to gain account access after they've left the organization. Although the account is blocked, log and alert this activity. | +| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details<br> Result details = MFA denied, fraud code entered | Privileged user indicates they haven't instigated the MFA prompt, which could indicate an attacker has the account password. 
| +| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken, based on fraud report tenant-level settings | Privileged user indicated no instigation of the MFA prompt. The scenario can indicate an attacker has the account password. | +| Privileged account sign-ins outside of expected controls | High | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account> <br> Location = \<unapproved location> <br> IP address = \<unapproved IP><br>Device info = \<unapproved Browser, Operating System> | Monitor and alert entries you defined as unapproved. | +| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside expected times. Find the normal working pattern for each privileged account and alert if there are unplanned changes outside normal working times. Sign-ins outside normal working hours could indicate compromise or possible insider threat. | +| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query for privileged accounts. | +| Changes to authentication methods | High | Azure AD Audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to have continued access. | +| Identity Provider updated by non-approved actors | High | Azure AD Audit logs | Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to have continued access. | +| Identity Provider deleted by non-approved actors | Medium | Azure AD Access Reviews | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to have continued access. | ++## Applications ++| What to monitor | Risk level | Where | Filter / subfilter | Notes | +| - | - | - | - | - | +| Added credentials to applications | High | Azure AD Audit logs | Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application | Alert when credentials are: added outside normal business hours or workflows, types not used in your environment, or added to a non-SAML flow supporting service principal. | +| App assigned to an Azure role-based access control (RBAC) role, or Azure AD Role | High to medium | Azure AD Audit logs | Type: service principal<br>Activity: "Add member to role"<br>or<br>"Add eligible member to role"<br>-or-<br>"Add scoped member to role." |N/A| +| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.*) | High | Azure AD Audit logs |N/A | Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.*)
| +| Administrator granting application permissions (app roles), or highly privileged delegated permissions | High | Microsoft 365 portal | "Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) "Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions. | Alert when a global, application, or cloud application administrator consents to an application. Especially look for consent outside normal activity and change procedures. | +| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. | High | Azure AD Audit logs | "Add delegated permission grant"<br>-or-<br>"Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on) | Use the alert in the preceding row. | +| Highly privileged delegated permissions granted on behalf of all users | High | Azure AD Audit logs | "Add delegated permission grant"<br>where<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>DelegatedPermissionGrant.Scope includes high-privilege permissions<br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals". | Use the alert in the preceding row. | +| Applications that are using the ROPC authentication flow | Medium | Azure AD Sign-ins log | Status=Success<br>Authentication Protocol-ROPC | High level of trust is placed in this application because the credentials can be cached or stored. If possible, move to a more secure authentication flow. Use the process only in automated application testing, if ever. | +| Dangling URI | High | Azure AD Logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress | For example, look for dangling URIs pointing to a domain name that is gone, or one you don't own. | +| Redirect URI configuration changes | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress | Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are **not** unique to the application, URIs that point to a domain you don't control. | +| Changes to AppID URI | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for AppID URI modifications, such as adding, modifying, or removing the URI. | +| Changes to application ownership | Medium | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for instances of users added as application owners outside normal change management activities. | +| Changes to sign out URL | Low | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal | Look for modifications to a sign out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.
++## Infrastructure ++| What to monitor | Risk Level | Where | Filter / subfilter | Notes | +| - | - | - | - | - | +| New Conditional Access Policy created by non-approved actors | High | Azure AD Audit logs | Activity: Add conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert Conditional Access changes. Initiated by (actor): approved to make changes to Conditional Access? | +| Conditional Access Policy removed by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert Conditional Access changes. Initiated by (actor): approved to make changes to Conditional Access? | +| Conditional Access Policy updated by non-approved actors | High | Azure AD Audit logs | Activity: Update conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert Conditional Access changes. Initiated by (actor): approved to make changes to Conditional Access?<br>Review Modified Properties and compare old vs. new value | +| B2C custom policy created by non-approved actors | High | Azure AD Audit logs| Activity: Create custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert custom policy changes. Initiated by (actor): approved to make changes to custom policies? | +| B2C custom policy updated by non-approved actors | High | Azure AD Audit logs| Activity: Get custom policies<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert custom policy changes. Initiated by (actor): approved to make changes to custom policies? | +| B2C custom policy deleted by non-approved actors | Medium |Azure AD Audit logs | Activity: Delete custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert custom policy changes. Initiated by (actor): approved to make changes to custom policies? | +| User flow created by non-approved actors | High |Azure AD Audit logs | Activity: Create user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Initiated by (actor): approved to make changes to user flows? | +| User flow updated by non-approved actors | High | Azure AD Audit logs| Activity: Update user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Initiated by (actor): approved to make changes to user flows? | +| User flow deleted by non-approved actors | Medium | Azure AD Audit logs| Activity: Delete user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Initiated by (actor): approved to make changes to user flows? | +| API connectors created by non-approved actors | Medium | Azure AD Audit logs| Activity: Create API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert API connector changes. Initiated by (actor): approved to make changes to API connectors? | +| API connectors updated by non-approved actors | Medium | Azure AD Audit logs| Activity: Update API connector<br>Category: ResourceManagement<br>Target: User Principal Name: ResourceManagement | Monitor and alert API connector changes. Initiated by (actor): approved to make changes to API connectors? 
+++## Next steps ++To learn more, see the following security operations articles: ++* [Azure AD security operations guide](security-operations-introduction.md) +* [Azure AD security operations for user accounts](security-operations-user-accounts.md) +* [Security operations for privileged accounts in Azure AD](security-operations-privileged-accounts.md) +* [Azure AD security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) +* [Azure AD security operations guide for applications](security-operations-applications.md) +* [Azure AD security operations for devices](security-operations-devices.md) +* [Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-devices.md | + + Title: Azure Active Directory security operations for devices +description: Learn to establish baselines, and monitor and report on devices to identify potential security risks with devices. +++++++ Last updated : 09/06/2022++++++# Azure Active Directory security operations for devices ++Devices aren't commonly targeted in identity-based attacks, but *can* be used to satisfy and trick security controls, or to impersonate users. Devices can have one of four relationships with Azure AD: ++* Unregistered ++* [Azure Active Directory (Azure AD) registered](../devices/concept-azure-ad-register.md) ++* [Azure AD joined](../devices/concept-azure-ad-join.md) ++* [Hybrid Azure AD joined](../devices/concept-hybrid-join.md) ++Registered and joined devices are issued a [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md), which can be used as a primary authentication artifact, and in some cases as a multifactor authentication artifact. Attackers may try to register their own devices, use PRTs on legitimate devices to access business data, steal PRT-based tokens from legitimate user devices, or find misconfigurations in device-based controls in Azure Active Directory. With Hybrid Azure AD joined devices, the join process is initiated and controlled by administrators, reducing the available attack methods. ++For more information on device integration methods, see [Choose your integration methods](../devices/plan-device-deployment.md) in the article [Plan your Azure AD device deployment](../devices/plan-device-deployment.md). ++To reduce the risk of bad actors attacking your infrastructure through devices, monitor: ++* Device registration and Azure AD join ++* Non-compliant devices accessing applications ++* BitLocker key retrieval ++* Device administrator roles ++* Sign-ins to virtual machines ++## Where to look ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) ++From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions.
Can create or use workbooks to combine data from different sources. ++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. ++The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions, we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools. ++## Device registrations and joins outside policy ++Azure AD registered and Azure AD joined devices possess primary refresh tokens (PRTs), which are the equivalent of a single authentication factor. These devices can at times contain strong authentication claims. For more information on when PRTs contain strong authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md)? To keep bad actors from registering or joining devices, require multi-factor authentication (MFA) to register or join devices. Then monitor for any devices registered or joined without MFA. You'll also need to watch for changes to MFA settings and policies, and device compliance policies. ++ | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: Any device registered or joined without MFA<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: The toggle being set to off. There isn't an audit log entry. Schedule periodic checks.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Changes to Conditional Access policies requiring domain joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: Change to any policy requiring domain joined or compliant, changes to trusted locations, or accounts or devices added to MFA policy exceptions.
| ++You can create an alert that notifies appropriate administrators when a device is registered or joined without MFA by using Microsoft Sentinel. +~~~ +SigninLogs +| where ResourceDisplayName == "Device Registration Service" +| where ConditionalAccessStatus == "success" +| where AuthenticationRequirement != "multiFactorAuthentication" +~~~ ++You can also use [Microsoft Intune to set and monitor device compliance policies](/mem/intune/protect/device-compliance-get-started). ++## Non-compliant device sign-in ++It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices. ++[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Azure Active Directory can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a device's compliance status. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when: any sign-in by non-compliant devices, or any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuccessfulSigninFromNon-CompliantDevice.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Sign-ins by unknown devices| Low| Sign-in logs| DeviceDetail is empty, single factor authentication, or from a non-trusted location| Look for: any access from out-of-compliance devices, any access without MFA or a trusted location<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AnomolousSingleFactorSignin.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++### Use Log Analytics to query ++**Sign-ins by non-compliant devices** ++``` +SigninLogs +| where DeviceDetail.isCompliant == false +| where ConditionalAccessStatus == "success" +``` ++**Sign-ins by unknown devices** ++``` ++SigninLogs +| where isempty(DeviceDetail.deviceId) +| where AuthenticationRequirement == "singleFactorAuthentication" +| where ResultType == "0" +| where NetworkLocationDetails == "[]" +``` ++## Stale devices ++Stale devices include devices that haven't signed in for a specified time period. Devices can become stale when a user gets a new device or loses a device, or when an Azure AD joined device is wiped or reprovisioned. Devices might also remain registered or joined when the user is no longer associated with the tenant. Stale devices should be removed so the primary refresh tokens (PRTs) cannot be used. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Last sign-in date| Low| Graph API| approximateLastSignInDateTime| Use Graph API or PowerShell to identify and remove stale devices. | ++## BitLocker key retrieval ++Attackers who have compromised a user's device may retrieve the [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-device-encryption-overview-windows-10) keys in Azure AD.
It's uncommon for users to retrieve keys; key retrieval should be monitored and investigated. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for: key retrieval, other anomalous behavior by users retrieving keys.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/BitLockerKeyRetrieval.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++In Log Analytics, create a query such as: ++``` +AuditLogs +| where OperationName == "Read BitLocker key" +``` ++## Device administrator roles ++Global administrators and cloud Device Administrators automatically get local administrator rights on all Azure AD joined devices. It's important to monitor who has these rights to keep your environment safe. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for: new users added to these Azure AD roles, subsequent anomalous behavior by machines or users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++## Non-Azure AD sign-ins to virtual machines ++Sign-ins to Windows or Linux virtual machines (VMs) should be monitored for sign-ins by accounts other than Azure AD accounts. ++### Azure AD sign-in for Linux ++Azure AD sign-in for Linux allows organizations to sign in to their Azure Linux VMs using Azure AD accounts over the Secure Shell (SSH) protocol. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Non-Azure AD account signing in, especially over SSH| High| Local authentication logs| Ubuntu: <br>monitor /var/log/auth.log for SSH use<br>RedHat: <br>monitor /var/log/sssd/ for SSH use| Look for: entries [where non-Azure AD accounts are successfully connecting to VMs](../devices/howto-vm-sign-in-azure-ad-linux.md). See following example. | ++Ubuntu example: ++ May 9 23:49:39 ubuntu1804 aad_certhandler[3915]: Version: 1.0.015570001; user: localusertest01 ++ May 9 23:49:39 ubuntu1804 aad_certhandler[3915]: User 'localusertest01' is not an AAD user; returning empty result. ++ May 9 23:49:43 ubuntu1804 aad_certhandler[3916]: Version: 1.0.015570001; user: localusertest01 ++ May 9 23:49:43 ubuntu1804 aad_certhandler[3916]: User 'localusertest01' is not an AAD user; returning empty result. ++ May 9 23:49:43 ubuntu1804 sshd[3909]: Accepted publickey for localusertest01 from 192.168.0.15 port 53582 ssh2: RSA SHA256:MiROf6f9u1w8J+46AXR1WmPjDhNWJEoXp4HMm9lvJAQ ++ May 9 23:49:43 ubuntu1804 sshd[3909]: pam_unix(sshd:session): session opened for user localusertest01 by (uid=0). ++You can set policy for Linux VM sign-ins, and detect and flag Linux VMs that have non-approved local accounts added. To learn more, see [Azure Policy to ensure standards and assess compliance](../devices/howto-vm-sign-in-azure-ad-linux.md). ++### Azure AD sign-ins for Windows Server ++Azure AD sign-in for Windows allows your organization to sign in to your Azure Windows Server 2019+ VMs using Azure AD accounts over the Remote Desktop Protocol (RDP).
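+If you collect Windows security events from these VMs into a Log Analytics workspace or Microsoft Sentinel, a minimal sketch like the following can surface interactive RDP sign-ins for review. It assumes the standard SecurityEvent table; how you separate expected Azure AD accounts from local accounts depends on your environment. The table that follows describes what to monitor in the Windows Server event logs.
+
+```kusto
+// Interactive remote desktop logons (Event 4624, logon type 10) for review against expected accounts
+SecurityEvent
+| where EventID == 4624 and LogonType == 10
+| project TimeGenerated, Computer, TargetDomainName, TargetUserName, IpAddress
+```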
++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Non-Azure AD account sign-in, especially over RDP| High| Windows Server event logs| Interactive Login to Windows VM| Event 4624, logon type 10 (RemoteInteractive).<br>Shows when a user signs in over Terminal Services or Remote Desktop. | ++## Next steps ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-infrastructure.md | + + Title: Azure Active Directory security operations for infrastructure +description: Learn how to monitor and alert on infrastructure components to identify security threats. +++++++ Last updated : 09/06/2022++++++# Security operations for infrastructure ++Infrastructure has many components where vulnerabilities can occur if not properly configured. As part of your monitoring and alerting strategy for infrastructure, monitor and alert on events in the following areas: ++* Authentication and Authorization ++* Hybrid Authentication components, including Federation Servers ++* Policies ++* Subscriptions ++Monitoring and alerting on the components of your authentication infrastructure is critical. Any compromise can lead to a full compromise of the whole environment. Many enterprises that use Azure AD operate in a hybrid authentication environment. Cloud and on-premises components should be included in your monitoring and alerting strategy. Having a hybrid authentication environment also introduces another attack vector to your environment. ++We recommend that all the components, and the accounts used to manage them, be considered Control Plane / Tier 0 assets. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant. ++A first step in being able to detect unexpected events and potential attacks is to establish a baseline. For all on-premises components listed in this article, see [Privileged access deployment](/security/compass/privileged-access-deployment), which is part of the Securing privileged assets (SPA) guide. ++## Where to look ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) ++From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++The remainder of this article describes what to monitor and alert on. It is organized by the type of threat. Where there are pre-built solutions, you'll find links to them after each table. Otherwise, you can build alerts using the preceding tools. ++## Authentication infrastructure ++In hybrid environments that contain both on-premises and cloud-based resources and accounts, the Active Directory infrastructure is a key part of the authentication stack. The stack is also a target for attacks, so it must be configured to maintain a secure environment and must be monitored properly. Examples of current attacks against authentication infrastructure include password spray and Solorigate techniques. The following are links to articles we recommend: ++* [Securing privileged access overview](/security/compass/overview) – This article provides an overview of current techniques using Zero Trust techniques to create and maintain secure privileged access. ++* [Microsoft Defender for Identity monitored domain activities](/defender-for-identity/monitored-activities) - This article provides a comprehensive list of activities to monitor and set alerts for. ++* [Microsoft Defender for Identity security alert tutorial](/defender-for-identity/understanding-security-alerts) - This article provides guidance on creating and implementing a security alert strategy. ++The following are links to specific articles that focus on monitoring and alerting your authentication infrastructure: ++* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - Detection techniques to help identify when non-sensitive accounts are used to gain access to sensitive network accounts. ++* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts after they're logged. ++The following are specific things to look for: ++| What to monitor| Risk level| Where| Notes | +| - | - | - | - | +| Extranet lockout trends| High| Azure AD Connect Health| See [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) for tools and techniques to help detect extranet lock-out trends. | +| Failed sign-ins|High | Connect Health Portal| Export or download the Risky IP report and follow the guidance at [Risky IP report (public preview)](../hybrid/how-to-connect-health-adfs-risky-ip.md) for next steps. | +| In privacy compliant| Low| Azure AD Connect Health| Configure Azure AD Connect Health to disable data collections and monitoring using the [User privacy and Azure AD Connect Health](../hybrid/reference-connect-health-user-privacy.md) article.
| +| Potential brute force attack on LDAP| Medium| Microsoft Defender for Identity| Use the sensor to help detect potential brute force attacks against LDAP. | +| Account enumeration reconnaissance| Medium| Microsoft Defender for Identity| Use the sensor to help detect account enumeration reconnaissance. | +| General correlation between Azure AD and Azure AD FS|Medium | Microsoft Defender for Identity| Use capabilities to correlate activities between your Azure AD and Azure AD FS environments. | ++### Pass-through authentication monitoring ++Azure Active Directory (Azure AD) Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory. ++The following are specific things to look for: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80001 – Unable to connect to Active Directory| Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated, and that they can connect to Active Directory. | +| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80002 - A timeout occurred connecting to Active Directory| Check to ensure that Active Directory is available and is responding to requests from the agents. | +| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80004 - The username passed to the agent was not valid| Ensure the user is attempting to sign in with the right username. | +| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80005 - Validation encountered unpredictable WebException| A transient error. Retry the request. If it continues to fail, contact Microsoft support. | +| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80007 - An error occurred communicating with Active Directory| Check the agent logs for more information and verify that Active Directory is operating as expected. | +| Azure AD pass-through authentication errors|High | Win32 LogonUserA function API| Logon events 4624(s): An account was successfully logged on<br>- correlate with -<br>4625(F): An account failed to log on| Use with the suspected usernames on the domain controller that is authenticating requests. Guidance at [LogonUserA function (winbase.h)](/windows/win32/api/winbase/nf-winbase-logonusera) | +| Azure AD pass-through authentication errors| Medium| PowerShell script of domain controller| See the query after the table. | Use the information at [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md) for guidance. | ++```xml ++<QueryList> ++<Query Id="0" Path="Security"> ++<Select Path="Security">*[EventData[Data[@Name='ProcessName'] and (Data='C:\Program Files\Microsoft Azure AD Connect Authentication Agent\AzureADConnectAuthenticationAgentService.exe')]]</Select> ++</Query> ++</QueryList> +``` ++## Monitoring for creation of new Azure AD tenants ++Organizations might need to monitor for and alert on the creation of new Azure AD tenants when the action is initiated by identities from their organizational tenant. Monitoring for this scenario provides visibility on how many tenants are being created and could be accessed by end users. ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Creation of a new Azure AD tenant, using an identity from your tenant. | Medium | Azure AD Audit logs | Category: Directory Management<br><br>Activity: Create Company | Target(s) shows the created TenantID |
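+A simple audit-log query can back an alert for this scenario. The following is a minimal sketch, assuming the tenant's audit logs are sent to a Log Analytics workspace; the activity name matches the table above.
+
+```kusto
+// New Azure AD tenant created by an identity from your tenant
+AuditLogs
+| where OperationName == "Create Company"
+| project TimeGenerated, InitiatedBy, TargetResources
+```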
++### AppProxy Connector ++Azure AD and Azure AD Application Proxy give remote users a single sign-on (SSO) experience. Users securely connect to on-premises apps without a virtual private network (VPN) or dual-homed servers and firewall rules. If your Azure AD Application Proxy connector server is compromised, attackers could alter the SSO experience or change access to published applications. ++To configure monitoring for Application Proxy, see [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). The data file that logs information can be found in Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin. For a complete reference guide to audit activity, see [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md). Specific things to monitor: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Kerberos errors| Medium | Various tools| Medium | Kerberos authentication error guidance under Kerberos errors on [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). | +| DC security issues| High| DC Security Audit logs| Event ID 4742(S): A computer account was changed<br>-and-<br>Flag – Trusted for Delegation<br>-or-<br>Flag – Trusted to Authenticate for Delegation| Investigate any flag change. | +| Pass-the-ticket like attacks| High| | | Follow guidance in:<br>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<br>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<br>[Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<br>[Understanding entity profiles](/defender-for-identity/entity-profiles) | ++### Legacy authentication settings ++For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, which makes these protocols the preferred entry points for attackers. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302). ++Legacy authentication is captured in the Azure AD Sign-ins log as part of the detail of the event. You can use the Azure Monitor workbook to help with identifying legacy authentication usage. For more information, see [Sign-ins using legacy authentication](../reports-monitoring/howto-use-azure-monitor-workbooks.md), which is part of [How to use Azure Monitor Workbooks for Azure Active Directory reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). You can also use the Insecure protocols workbook for Microsoft Sentinel. For more information, see [Microsoft Sentinel Insecure Protocols Workbook Implementation Guide](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564). Specific activities to monitor include: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Legacy authentications|High | Azure AD Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications aren't recorded and don't appear in the log. |
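+You can also query the sign-in logs directly for legacy protocol use. The following is a minimal Log Analytics sketch, assuming sign-in logs are routed to a workspace; the exact ClientAppUsed values can vary by tenant and workload.
+
+```kusto
+// Successful sign-ins over legacy protocols that can't enforce MFA
+SigninLogs
+| where ResultType == "0"
+| where ClientAppUsed in ("POP3", "IMAP4", "SMTP", "Authenticated SMTP", "MAPI Over HTTP", "Exchange ActiveSync", "Other clients")
+| summarize SignInCount = count() by UserPrincipalName, ClientAppUsed, AppDisplayName
+```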
++## Azure AD Connect ++Azure AD Connect provides a centralized location that enables account and attribute synchronization between your on-premises and cloud-based Azure AD environment. Azure AD Connect is the Microsoft tool designed to meet and accomplish your hybrid identity goals. It provides the following features: ++* [Password hash synchronization](../hybrid/whatis-phs.md) - A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD. ++* [Synchronization](../hybrid/how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects, and for making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes. ++* [Health Monitoring](../hybrid/whatis-azure-ad-connect.md) - Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity. ++Synchronizing identity between your on-premises environment and your cloud environment introduces a new attack surface for your on-premises and cloud-based environment. We recommend: ++* You treat your Azure AD Connect primary and staging servers as Tier 0 Systems in your control plane. ++* You follow a standard set of policies that govern each type of account and its usage in your environment. ++* You install Azure AD Connect and Connect Health. These primarily provide operational data for the environment. ++Logging of Azure AD Connect operations occurs in different ways: ++* The Azure AD Connect wizard logs data to \ProgramData\AADConnect. Each time the wizard is invoked, a timestamped trace log file is created. The trace log can be imported into Sentinel or other third-party security information and event management (SIEM) tools for analysis. ++* Some operations initiate a PowerShell script to capture logging information. To collect this data, you must make sure script block logging is enabled. ++### Monitoring configuration changes ++Azure AD Connect uses Microsoft SQL Server Data Engine or SQL Server to store its configuration information. Therefore, monitoring and auditing of the log files associated with configuration should be included in your monitoring and auditing strategy. Specifically, include the following tables in your monitoring and alerting strategy.
++| What to monitor| Where| Notes | +| - | - | - | +| mms_management_agent| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) | +| mms_partition| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) | +| mms_run_profile| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) | +| mms_server_configuration| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) | +| mms_synchronization_rule| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) | ++For information on what and how to monitor configuration information, refer to: ++* For SQL Server, see [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records). ++* For Microsoft Sentinel, see [Connect to Windows servers to collect security events](/sql/relational-databases/security/auditing/sql-server-audit-records). ++* For information on configuring and using Azure AD Connect, see [What is Azure AD Connect?](../hybrid/whatis-azure-ad-connect.md) ++### Monitoring and troubleshooting synchronization ++ One function of Azure AD Connect is to synchronize the hash of a user's on-premises password to Azure AD. If passwords aren't synchronizing as expected, the issue might affect a subset of users or all users. Use the following to help verify proper operation or troubleshoot issues: ++* For information on checking and troubleshooting hash synchronization, see [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md). ++* For modifications to the connector spaces, see [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes). ++**Important resources on monitoring** ++| What to monitor | Resources | +| - | - | +| Hash synchronization validation| See [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md) | +| Modifications to the connector spaces| See [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes) | +| Modifications to rules you configured| Monitor changes to: filtering, domain and OU, attribute, and group-based changes | +| SQL and MSDE changes | Changes to logging parameters and addition of custom functions | ++**Monitor the following**: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Scheduler changes|High | PowerShell| Set-ADSyncScheduler| Look for modifications to the schedule | +| Changes to scheduled tasks| High | Windows Security event logs| Activity = 4699(S): A scheduled task was deleted<br>-or-<br>Activity = 4701(s): A scheduled task was disabled<br>-or-<br>Activity = 4702(s): A scheduled task was updated| Monitor all. A query sketch follows the list below. | ++* For more information on logging PowerShell script operations, see [Enabling Script Block Logging](/powershell/module/microsoft.powershell.core/about/about_logging_windows), which is part of the PowerShell reference documentation. ++* For more information on configuring PowerShell logging for analysis by Splunk, refer to [Get Data into Splunk User Behavior Analytics](https://docs.splunk.com/Documentation/UBA/5.0.4.1/GetDataIn/AddPowerShell).
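+To back the scheduled-task rows in the preceding table, you can alert from a Log Analytics workspace. This is a minimal sketch that assumes you forward Windows security events from your Azure AD Connect servers into the SecurityEvent table; the server names shown are placeholders for your own.
+
+```kusto
+// Scheduled task deleted (4699), disabled (4701), or updated (4702) on Azure AD Connect servers
+SecurityEvent
+| where Computer in ("AADC-PRIMARY", "AADC-STAGING") // placeholder server names
+| where EventID in (4699, 4701, 4702)
+| project TimeGenerated, Computer, EventID, Activity, SubjectUserName
+```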
++### Monitoring seamless single sign-on ++Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they are on their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without other on-premises components. SSO uses the pass-through authentication and password hash synchronization capabilities provided by Azure AD Connect. ++Monitoring single sign-on and Kerberos activity can help you detect general credential theft attack patterns. Monitor using the following information: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| Errors associated with SSO and Kerberos validation failures|Medium | Azure AD Sign-ins log| | Single sign-on list of error codes at [Single sign-on](../hybrid/tshoot-connect-sso.md). | +| Query for troubleshooting errors|Medium | PowerShell| See the query following this table.| Check in each forest with SSO enabled. | +| Kerberos-related events|High | Microsoft Defender for Identity monitoring| | Review guidance available at [Microsoft Defender for Identity Lateral Movement Paths (LMPs)](/defender-for-identity/use-case-lateral-movement-path) | ++```xml +<QueryList> ++<Query Id="0" Path="Security"> ++<Select Path="Security">*[EventData[Data[@Name='ServiceName'] and (Data='AZUREADSSOACC$')]]</Select> ++</Query> ++</QueryList> +``` ++## Password protection policies ++If you deploy Azure AD Password Protection, monitoring and reporting are essential tasks. The following links provide details to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection. ++The domain controller (DC) agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software doesn't install a PowerShell module. ++Detailed information for planning and implementing on-premises password protection is available at [Plan and deploy on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md). For monitoring details, see [Monitor on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-monitor.md). On each domain controller, the DC agent service software writes the results of each individual password validation operation (and other status) to the following local event log: ++* \Applications and Services Logs\Microsoft\AzureADPasswordProtection\DCAgent\Admin ++* \Applications and Services Logs\Microsoft\AzureADPasswordProtection\DCAgent\Operational ++* \Applications and Services Logs\Microsoft\AzureADPasswordProtection\DCAgent\Trace ++The DC agent Admin log is the primary source of information for how the software is behaving. By default, the Trace log is off and must be enabled before data is logged. To troubleshoot application proxy problems and error messages, detailed information is available at [Troubleshoot Azure Active Directory Application Proxy](../app-proxy/application-proxy-troubleshoot.md).
Information for these events is logged in: ++* Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin ++* Azure AD Audit Log, Category Application Proxy ++Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md). ++## Conditional Access ++In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure your Conditional Access policies work as expected so that your resources are protected. Monitoring and alerting on changes to the Conditional Access service ensures policies defined by your organization for access to data are enforced. Azure AD logs when changes are made to Conditional Access and also provides workbooks to help you confirm your policies provide the expected coverage. ++**Workbook Links** ++* [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md) ++* [Conditional Access gap analysis workbook](../reports-monitoring/workbook-conditional-access-gap-analyzer.md) ++Monitor changes to Conditional Access policies using the following information: ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare the "old" vs. "new" value<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
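+As a starting point for alerting, a minimal audit-log sketch such as the following can surface Conditional Access policy changes for review. It assumes audit logs are routed to a Log Analytics workspace and uses the activity names from the preceding table.
+
+```kusto
+// Conditional Access policy additions, updates, and deletions
+AuditLogs
+| where Category == "Policy"
+| where OperationName in ("Add conditional access policy", "Update conditional access policy", "Delete conditional access policy")
+| project TimeGenerated, OperationName, InitiatedBy, TargetResources
+```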
++## Next steps ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for devices](security-operations-devices.md) |
active-directory | Security Operations Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-introduction.md | + + Title: Azure Active Directory security operations guide +description: Learn to monitor, identify, and alert on security issues with accounts, applications, devices, and infrastructure in Azure Active Directory. +++++++ Last updated : 09/06/2022+++ - it-pro + - seodec18 + - kr2b-contr-experiment ++++# Azure Active Directory security operations guide ++Microsoft has a successful and proven approach to [Zero Trust security](https://aka.ms/Zero-Trust) using [Defense in Depth](https://www.cisa.gov/sites/default/files/recommended_practices/NCCIC_ICS-CERT_Defense_in_Depth_2016_S508C.pdf) principles that use identity as a control plane. Organizations continue to embrace a hybrid workload world for scale, cost savings, and security. Azure Active Directory (Azure AD) plays a pivotal role in your strategy for identity management. Recently, news surrounding identity and security compromise has increasingly prompted enterprise IT to consider their identity security posture as a measurement of defensive security success. ++Increasingly, organizations must embrace a mixture of on-premises and cloud applications, which users access with both on-premises and cloud-only accounts. Managing users, applications, and devices both on-premises and in the cloud poses challenging scenarios. ++## Hybrid identity ++Azure Active Directory creates a common user identity for authentication and authorization to all resources, regardless of location. We call this *hybrid identity*. ++To achieve hybrid identity with Azure AD, one of three authentication methods can be used, depending on your scenarios. The three methods are: ++* [Password hash synchronization (PHS)](../hybrid/whatis-phs.md) +* [Pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md) +* [Federation (AD FS)](../hybrid/whatis-fed.md) ++As you audit your current security operations or establish security operations for your Azure environment, we recommend you: ++* Read specific portions of the Microsoft security guidance to establish a baseline of knowledge about securing your cloud-based or hybrid Azure environment. +* Audit your account and password strategy and authentication methods to help deter the most common attack vectors. +* Create a strategy for continuous monitoring and alerting on activities that might indicate a security threat. ++### Audience ++The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture. ++### Scope ++This introduction provides the suggested prereading and password audit and strategy recommendations. This article also provides an overview of the tools available for hybrid Azure environments and fully cloud-based Azure environments. Finally, we provide a list of data sources you can use for monitoring and alerting and configuring your security information and event management (SIEM) strategy and environment.
The rest of the guidance presents monitoring and alerting strategies in the following areas: ++* [User accounts](security-operations-user-accounts.md). Guidance specific to non-privileged user accounts without administrative privilege, including anomalous account creation and usage, and unusual sign-ins. ++* [Privileged accounts](security-operations-privileged-accounts.md). Guidance specific to privileged user accounts that have elevated permissions to perform administrative tasks. Tasks include Azure AD role assignments, Azure resource role assignments, and access management for Azure resources and subscriptions. ++* [Privileged Identity Management (PIM)](security-operations-privileged-identity-management.md). Guidance specific to using PIM to manage, control, and monitor access to resources. ++* [Applications](security-operations-applications.md). Guidance specific to accounts used to provide authentication for applications. ++* [Devices](security-operations-devices.md). Guidance specific to monitoring and alerting for devices registered or joined outside of policies, non-compliant usage, managing device administration roles, and sign-ins to virtual machines. ++* [Infrastructure](security-operations-infrastructure.md). Guidance specific to monitoring and alerting on threats to your hybrid and purely cloud-based environments. ++## Important reference content ++Microsoft has many products and services that enable you to customize your IT environment to fit your needs. We recommend that you review the following guidance for your operating environment: ++* Windows operating systems ++ * [Windows 10 and Windows Server 2016 security auditing and monitoring reference](https://www.microsoft.com/download/details.aspx?id=52630) + * [Security baseline (FINAL) for Windows 10 v1909 and Windows Server v1909](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/security-baseline-final-for-windows-10-v1909-and-windows-server/ba-p/1023093) + * [Security baseline for Windows 11](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-11-security-baseline/ba-p/2810772) + * [Security baseline for Windows Server 2022](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/windows-server-2022-security-baseline/ba-p/2724685) ++* On-premises environments ++ * [Microsoft Defender for Identity architecture](/defender-for-identity/architecture) + * [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2) + * [Azure security baseline for Microsoft Defender for Identity](/defender-for-identity/security-baseline) + * [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise) ++* Cloud-based Azure environments ++ * [Monitor sign-ins with the Azure AD sign-in log](../reports-monitoring/concept-all-sign-ins.md) + * [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md) + * [Investigate risk with Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md) + * [Connect Azure AD Identity Protection data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) ++* Active Directory Domain Services (AD DS) ++ * [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations) ++* Active Directory Federation Services (AD FS) ++ * [AD FS 
Troubleshooting - Auditing Events and Logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging) ++## Data sources ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) +* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) +* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) +* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) ++From the Azure portal, you can view the Azure AD Audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. ++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Azure AD logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the Conditional Access insights and reporting workbook to examine the effects of one or more Conditional Access policies on your sign-ins and the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. For more information, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). ++The remainder of this article describes what to monitor and alert on. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools. 
++* **[Identity Protection](../identity-protection/overview-identity-protection.md)** generates three key reports that you can use to help with your investigation: ++* **Risky users** contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history. ++* **Risky sign-ins** contains information surrounding the circumstance of a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md). ++* **Risk detections** contains information on risk signals detected by Azure AD Identity Protection that informs sign-in and user risk. For more information, see the [Azure AD security operations guide for user accounts](security-operations-user-accounts.md). ++For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md). ++### Data sources for domain controller monitoring ++For the best results, we recommend that you monitor your domain controllers using Microsoft Defender for Identity. This approach enables the best detection and automation capabilities. Follow the guidance from these resources: ++* [Microsoft Defender for Identity architecture](/defender-for-identity/architecture) +* [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2) ++If you don't plan to use Microsoft Defender for Identity, monitor your domain controllers by one of these approaches: ++* Event log messages. See [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise). +* PowerShell cmdlets. See [Troubleshooting Domain Controller Deployment](/windows-server/identity/ad-ds/deploy/troubleshooting-domain-controller-deployment). ++## Components of hybrid authentication ++As part of an Azure hybrid environment, the following items should be baselined and included in your monitoring and alerting strategy. ++* **PTA Agent** - The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps. ++* **AD FS/WAP** - Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs). ++* **Azure AD Connect Health Agent** - The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md). ++* **Azure AD Connect Sync Engine** - The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md). ++* **Password Protection DC agent** - Azure password protection DC agent is used to help with monitoring and reporting event log messages. 
For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md). ++* **Password Filter DLL** - The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md). ++* **Password writeback Agent** - Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md). ++* **Azure AD Application Proxy Connector** - Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md). ++## Components of cloud-based authentication ++As part of an Azure cloud-based environment, the following items should be baselined and included in your monitoring and alerting strategy. ++* **Azure AD Application Proxy** - This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md). ++* **Azure AD Connect** - Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). ++* **Azure AD Connect Health** - Provides a customizable dashboard for monitoring the health of your on-premises identity infrastructure and synchronization services. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md). ++* **Azure AD multifactor authentication** - Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md). ++* **Dynamic groups** - With dynamic configuration of security group membership for Azure AD, administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md). ++* **Conditional Access** - Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and to enforce organizational policies. Conditional Access is at the heart of the new identity-driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md). ++* **Identity Protection** - A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
++* **Group-based licensing** - Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users. ++* **Provisioning Service** - Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md). ++* **Graph API** - The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview). ++* **Domain Services** - Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join and Group Policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md). ++* **Azure Resource Manager** - Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md). ++* **Managed identity** - Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md). ++* **Privileged Identity Management** - PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md). ++* **Access reviews** - Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. Users' access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md). ++* **Entitlement management** - Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md). ++* **Activity logs** - The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md). ++* **Self-service password reset service** - Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required.
For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md). ++* **Device services** - Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md). ++* **Self-service group management** - You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md). ++* **Risk detections** - Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps. ++## Next steps ++See these security operations guide articles: ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for devices](security-operations-devices.md) ++[Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations Privileged Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-privileged-accounts.md | + + Title: Security operations for privileged accounts in Azure Active Directory +description: Learn about baselines, and how to monitor and alert on potential security issues with privileged accounts in Azure Active Directory. +++++++ Last updated : 09/06/2022++++++# Security operations for privileged accounts in Azure Active Directory ++The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber attackers use credential theft attacks and other means to target privileged accounts and gain access to sensitive data. ++Traditionally, organizational security has focused on the entry and exit points of a network as the security perimeter. However, software as a service (SaaS) applications and personal devices on the internet have made this approach less effective. ++Azure Active Directory (Azure AD) uses identity and access management (IAM) as the control plane. In your organization's identity layer, users assigned to privileged administrative roles are in control. The accounts used for access must be protected, whether the environment is on-premises, in the cloud, or a hybrid environment. ++You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure services, prevention and response are the joint responsibilities of Microsoft as the cloud service provider and you as the customer. ++* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md). +* For more information on securing access for privileged users, see [Securing privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md). +* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, see [Privileged Identity Management documentation](../privileged-identity-management/index.yml). ++## Log files to monitor ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault insights](../../key-vault/key-vault-insights-overview.md) ++From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)**. 
Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. ++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Enables Azure AD logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **Microsoft Graph**. Enables you to export data and use Microsoft Graph to do more analysis. For more information, see [Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md). ++* **[Identity Protection](../identity-protection/overview-identity-protection.md)**. Generates three key reports you can use to help with your investigation: ++ * **Risky users**. Contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history. + + * **Risky sign-ins**. Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md). + + * **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Use to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++Although we discourage the practice, privileged accounts can have standing administration rights. If you choose to use standing privileges, and the account is compromised, it can have a severe negative effect. We recommend you prioritize monitoring privileged accounts and include the accounts in your Privileged Identity Management (PIM) configuration. For more information on PIM, see [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md). Also, we recommend you validate that admin accounts: ++* Are required. +* Have the least privilege to execute the required activities. +* Are protected with multifactor authentication at a minimum. +* Are run from privileged access workstation (PAW) or secure admin workstation (SAW) devices. ++The rest of this article describes what we recommend you monitor and alert on. The article is organized by the type of threat. Where there are specific prebuilt solutions, we link to them following the table. Otherwise, you can build alerts by using the tools described above. ++This article provides details on setting baselines and auditing sign-in and usage of privileged accounts. It also discusses tools and resources you can use to help maintain the integrity of your privileged accounts.
The content is organized into the following subjects: ++* Emergency "break-glass" accounts +* Privileged account sign-in +* Privileged account changes +* Privileged groups +* Privilege assignment and elevation ++## Emergency access accounts ++It's important that you prevent being accidentally locked out of your Azure AD tenant. You can mitigate the effect of an accidental lockout by creating emergency access accounts in your organization. Emergency access accounts are also known as *break-glass accounts*, as in "break glass in case of emergency" messages found on physical security equipment like fire alarms. ++Emergency access accounts are highly privileged, and they aren't assigned to specific individuals. Emergency access accounts are limited to emergency or break-glass scenarios where normal privileged accounts can't be used. An example is when a Conditional Access policy is misconfigured and locks out all normal administrative accounts. Restrict emergency account use to only the times when it's absolutely necessary. ++For guidance on what to do in an emergency, see [Secure access practices for administrators in Azure AD](../roles/security-planning.md). ++Send a high-priority alert every time an emergency access account is used. ++### Discovery ++Because break-glass accounts are only used if there's an emergency, your monitoring should discover no account activity. Send a high-priority alert every time an emergency access account is used or changed. Any of the following events might indicate a bad actor is trying to compromise your environments: ++* Sign-in. +* Account password change. +* Account permission or roles changed. +* Credential or auth method added or changed. ++For more information on managing emergency access accounts, see [Manage emergency access admin accounts in Azure AD](../roles/security-emergency-access.md). For detailed information on creating an alert for an emergency account, see [Create an alert rule](../roles/security-emergency-access.md). ++## Privileged account sign-in ++Monitor all privileged account sign-in activity by using the Azure AD Sign-in logs as the data source. In addition to sign-in success and failure information, the logs contain the following details: ++* Interrupts +* Device +* Location +* Risk +* Application +* Date and time +* Is the account disabled +* Lockout +* MFA fraud +* Conditional Access failure ++### Things to monitor ++You can monitor privileged account sign-in events in the Azure AD Sign-in logs. Alert on and investigate the following events for privileged accounts. 
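Where a prebuilt Microsoft Sentinel template or Sigma rule doesn't fit your environment, the filters in the following table can usually be expressed as Log Analytics (KQL) queries. As a minimal sketch, the query below covers the first row (sign-in failures from bad passwords against privileged accounts); it assumes your sign-in logs are routed to a Log Analytics workspace, and the account list and threshold are illustrative placeholders to replace with your own baseline.

```kusto
// Minimal sketch: bad-password failures (error code 50126) against privileged accounts.
// Assumes Azure AD sign-in logs are exported to a Log Analytics workspace.
let PrivilegedAccounts = dynamic(["admin1@contoso.com", "admin2@contoso.com"]); // hypothetical list - replace with your own
SigninLogs
| where TimeGenerated > ago(1h)
| where UserPrincipalName in~ (PrivilegedAccounts)
| where ResultType == "50126"
| summarize FailureCount = count() by UserPrincipalName, bin(TimeGenerated, 15m)
| where FailureCount >= 5 // example threshold; tune it to limit false alerts
```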
++| What to monitor | Risk level | Where | Filter/subfilter | Notes | +| - | - | - | - | - | +| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. | +| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Privileged accounts that don't follow naming policy| High | Azure AD directory | [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. | +| Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. 
| +| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times.
Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. | +| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. 
<br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++## Changes by privileged accounts ++Monitor all completed and attempted changes by a privileged account. This data enables you to establish what's normal activity for each privileged account and alert on activity that deviates from the expected. The Azure AD Audit logs are used to record this type of event. For more information on Azure AD Audit logs, see [Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md). ++### Azure Active Directory Domain Services ++Privileged accounts that have been assigned permissions in Azure AD Domain Services can perform tasks that affect the security posture of the Azure-hosted virtual machines that use Azure AD Domain Services. Enable security audits on virtual machines and monitor the logs. For more information on enabling Azure AD Domain Services audits and for a list of sensitive privileges, see the following resources: ++* [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md) +* [Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use) ++| What to monitor | Risk level | Where | Filter/subfilter | Notes | +| - | - | - | - | - | +| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes that would lower the security posture of your tenant should be investigated immediately (see [Security operations for infrastructure](security-operations-infrastructure.md)). An example is excluding accounts from multifactor authentication or Conditional Access. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md).
| +| **Example**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | Date and time, Service, Category and name of the activity, Status = Success or failure, Target, Initiator or actor (who) | +| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). | ++## Changes to privileged accounts ++Investigate changes to privileged accounts' authentication rules and privileges, especially if the change provides greater privilege or the ability to perform tasks in your Azure AD environment. ++| What to monitor| Risk level| Where| Filter/subfilter| Notes | +| - | - | - | - | - | +| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for correlation that's of a short time span between creation and deletion of accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivilegedAccountPermissionsChanged.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. 
See information on how to monitor break-glass accounts later in this article.| +| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AdditionofaTemporaryAccessPasstoaPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). ++For more information on discovering unused privileged accounts, see [Create an access review of Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md). ++## Assignment and elevation ++Having privileged accounts that are permanently provisioned with elevated abilities can increase the attack surface and risk to your security boundary. Instead, employ just-in-time access by using an elevation procedure. This type of system allows you to assign eligibility for privileged roles. Admins elevate their privileges to those roles only when they perform tasks that need those privileges. Using an elevation process enables you to monitor elevations and non-use of privileged accounts. ++### Establish a baseline ++To monitor for exceptions, you must first create a baseline. Determine the following information for these elements: ++* **Admin accounts** ++ * Your privileged account strategy + * Use of on-premises accounts to administer on-premises resources + * Use of cloud-based accounts to administer cloud-based resources + * Approach to separating and monitoring administrative permissions for on-premises and cloud-based resources ++* **Privileged role protection** ++ * Protection strategy for roles that have administrative privileges + * Organizational policy for using privileged accounts + * Strategy and principles for maintaining permanent privilege versus providing time-bound and approved access ++The following concepts and information help determine policies: ++* **Just-in-time admin principles**. Use the Azure AD logs to capture information for performing administrative tasks that are common in your environment. Determine the typical amount of time needed to complete the tasks. +* **Just-enough admin principles**. Determine the least-privileged role, which might be a custom role, that's needed for administrative tasks. For more information, see [Least privileged roles by task in Azure Active Directory](../roles/delegate-by-task.md). +* **Establish an elevation policy**. After you have insight into the type of elevated privilege needed and how long it's needed for each task, create policies that reflect elevated privilege usage for your environment. As an example, define a policy to limit Global Administrator access to one hour. ++After you establish your baseline and set policy, you can configure monitoring to detect and alert on usage outside of policy. ++### Discovery ++Pay particular attention to and investigate changes in assignment and elevation of privilege.
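Before working through the table in the next section, it can help to see how these audit log filters translate into a query. The following is a minimal sketch, assuming Azure AD audit logs are exported to a Log Analytics workspace; the operation names mirror the filters listed in the table and can vary by tenant, so validate them against your own audit log entries before building alerts.

```kusto
// Minimal sketch: surface PIM role assignment and activation events from the Azure AD audit logs.
// Operation names follow the filters in the table below and may differ in your tenant.
AuditLogs
| where TimeGenerated > ago(1d)
| where LoggedByService == "PIM"
| where Category == "RoleManagement"
| where OperationName has "Add member to role"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| extend TargetRole = tostring(TargetResources[0].displayName) // the role name is usually among the target resources; adjust if needed
| project TimeGenerated, OperationName, Result, Actor, TargetRole
| sort by TimeGenerated desc
```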
++### Things to monitor ++You can monitor privileged account changes by using Azure AD Audit logs and Azure Monitor logs. Include the following changes in your monitoring process. ++| What to monitor| Risk level| Where| Filter/subfilter| Notes | +| - | - | - | - | - | +| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant.
All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AccountElevatedtoNewRole.yaml) | +| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log tab <br>Directory Activity tab <br> Operations Name = Assigns the caller to user access admin <br> -and- <br> Event category = Administrative <br> -and-<br>Status = Succeeded, start, fail<br>-and-<br>Event initiated by| This change should be investigated immediately if it isn't planned. This setting could allow an attacker access to Azure subscriptions in your environment. | ++For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations by using information available in the Azure AD logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation. ++For information about configuring alerts for Azure roles, see [Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md). 
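For the last row in the preceding table (elevation to manage all Azure subscriptions), the Azure Activity log can be queried directly. The sketch below assumes the Activity log is routed to a Log Analytics workspace; the operation value shown is the standard elevate-access operation, but confirm it against the entries in your own environment before alerting on it.

```kusto
// Minimal sketch: detect elevate-access events, which grant User Access Administrator
// rights at root scope over all subscriptions and management groups.
// Assumes the Azure Activity log is routed to a Log Analytics workspace.
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue =~ "Microsoft.Authorization/elevateAccess/action"
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue, ActivityStatusValue
| sort by TimeGenerated desc
```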
++## Next steps ++See these security operations guide articles: ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for devices](security-operations-devices.md) ++[Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations Privileged Identity Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-privileged-identity-management.md | + + Title: Azure Active Directory security operations for Privileged Identity Management +description: Establish baselines and use Azure AD Privileged Identity Management (PIM) to monitor and alert on issues with accounts governed by PIM. +++++++ Last updated : 09/06/2022+++++++# Azure Active Directory security operations for Privileged Identity Management ++The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber-attackers use credential theft attacks to target admin accounts and other privileged access accounts to try to gain access to sensitive data. ++For cloud services, prevention and response are the joint responsibilities of the cloud service provider and the customer. ++Traditionally, organizational security has focused on the entry and exit points of a network as the security perimeter. However, SaaS apps and personal devices have made this approach less effective. In Azure Active Directory (Azure AD), we replace the network security perimeter with authentication in your organization's identity layer. As users are assigned to privileged administrative roles, their access must be protected in on-premises, cloud, and hybrid environments. ++You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure cloud services, prevention and response are joint responsibilities of Microsoft as the cloud service provider and you as the customer. ++* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md). ++* For more information on securing access for privileged users, see [Securing Privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md). ++* For a wide range of videos, how-to guides, and content on key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml). ++Privileged Identity Management (PIM) is an Azure AD service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. You can use PIM to help mitigate the following risks: ++* Identify and minimize the number of people who have access to secure information and resources. ++* Detect excessive, unnecessary, or misused access permissions on sensitive resources. ++* Reduce the chances of a malicious actor getting access to secured information or resources. ++* Reduce the possibility of an unauthorized user inadvertently impacting sensitive resources. ++This article provides guidance for setting baselines and auditing sign-ins and usage of privileged accounts. Use the audit log data to help maintain privileged account integrity.
++## Where to look ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) ++In the Azure portal, view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools to automate monitoring and alerting: ++* [**Microsoft Sentinel**](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* [**Azure Monitor**](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. ++* [**Azure Event Hubs**](../../event-hubs/event-hubs-about.md) **integrated with a SIEM** - [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. ++* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++The rest of this article has recommendations to set a baseline to monitor and alert on, with a tier model. Links to pre-built solutions appear after the table. You can build alerts using the preceding tools. The content is organized into the following areas: ++* Baselines ++* Azure AD role assignment ++* Azure AD role alert settings ++* Azure resource role assignment ++* Access management for Azure resources ++* Elevated access to manage Azure subscriptions ++## Baselines ++The following are recommended baseline settings: ++| What to monitor| Risk level| Recommendation| Roles| Notes | +| - |- |- |- |- | +| Azure AD roles assignment| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication (MFA). Set maximum elevation duration to 8 hrs.| Privileged Role Administrator, Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. | +| Azure Resource Role Configuration| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication.
Set maximum elevation duration to 8 hrs.| Owner, Resource Administrator, User Access Administrator, Global Administrator, Security Administrator| Investigate immediately if not a planned change. This setting might give an attacker access to Azure subscriptions in your environment. | ++## Azure AD roles assignment ++A privileged role administrator can customize PIM in their Azure AD organization, which includes changing the user experience of activating an eligible role assignment: ++* Prevent a bad actor from removing Azure AD Multi-Factor Authentication requirements to activate privileged access. ++* Prevent malicious users from bypassing justification and approval when activating privileged access. ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Add eligible member (permanent) <br>-and-<br>Activity Type – Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert on any changes to the Privileged Role Administrator and Global Administrator roles. This can be an indication an attacker is trying to gain privilege to modify role assignment settings. If you don't have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Remove eligible member (permanent) <br>-and-<br>Activity Type – Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could give an attacker access to Azure subscriptions in your environment.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert on any changes to the Privileged Role Administrator and Global Administrator roles. This can be an indication an attacker has access to modify role assignment settings. One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored.
Log all elevations to give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Alert setting changed to disabled| High| Azure AD Audit logs| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success/Failure| Always alert. Helps detect a bad actor removing alerts associated with Azure AD Multi-Factor Authentication requirements to activate privileged access. Helps detect suspicious or unsafe activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md). ++## Azure resource role assignment ++Monitoring Azure resource role assignments allows visibility into activity and activations for resource roles. These assignments might be misused to create an attack surface against a resource. As you monitor for this type of activity, you're trying to detect: ++* Query role assignments at specific resources ++* Role assignments for all child resources ++* All active and eligible role assignment changes ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Audit Alert Resource Audit log for Privileged account activities| High| In PIM, under Azure Resources, Resource Audit| Action: Add eligible member to role in PIM completed (time bound) <br>-and-<br>Primary Target <br>-and-<br>Type User<br>-and-<br>Status = Succeeded<br>| Always alert. Helps detect a bad actor adding eligible roles to manage all resources in Azure.
| +| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps detect a bad actor disabling alerts in the Alerts pane, which can prevent malicious activity from being investigated | +| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many permanent owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can prevent malicious activity from being investigated | +| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Duplicate role created<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can prevent malicious activity from being investigated | ++For more information on configuring alerts and auditing Azure resource roles, see: ++* [Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md) ++* [View audit report for Azure resource roles in Privileged Identity Management (PIM)](../privileged-identity-management/azure-pim-resource-rbac.md) ++## Access management for Azure resources and subscriptions ++Users or group members assigned the Owner or User Access Administrator subscription roles, and Azure AD Global Administrators who enabled subscription management in Azure AD, have Resource Administrator permissions by default. The administrators assign roles, configure role settings, and review access using Privileged Identity Management (PIM) for Azure resources. ++A user who has Resource Administrator permissions can manage PIM for Resources. Monitor for and mitigate this introduced risk: the capability can be used to allow bad actors privileged access to Azure subscription resources, such as virtual machines (VMs) or storage accounts. ++| What to monitor| Risk level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Elevations| High| Azure AD, under Manage, Properties| Periodically review setting.<br>Access management for Azure resources| Global administrators can elevate by enabling Access management for Azure resources.<br>Verify bad actors haven't gained permissions to assign roles in all Azure subscriptions and management groups associated with Active Directory. | ++For more information, see [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md). ++## Next steps ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for user accounts](security-operations-user-accounts.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for devices](security-operations-devices.md) + +[Security operations for infrastructure](security-operations-infrastructure.md) |
active-directory | Security Operations User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-user-accounts.md | + + Title: Azure Active Directory security operations for user accounts +description: Guidance to establish baselines and how to monitor and alert on potential security issues with user accounts. +++++++ Last updated : 09/06/2022++++++# Azure Active Directory security operations for user accounts ++User identity is one of the most important aspects of protecting your organization and data. This article provides guidance for monitoring account creation, deletion, and account usage. The first portion covers how to monitor for unusual account creation and deletion. The second portion covers how to monitor for unusual account usage. ++If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding. ++This article covers general user accounts. For privileged accounts, see Security operations ΓÇô privileged accounts. ++## Define a baseline ++To discover anomalous behavior, you first must define what normal and expected behavior is. Defining what expected behavior for your organization is, helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting. ++Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define. ++Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization. ++* **Users account creation** ΓÇô evaluate the following: ++ * Strategy and principles for tools and processes used for creating and managing user accounts. For example, are there standard attributes, formats that are applied to user account attributes. ++ * Approved sources for account creation. For example, originating in Active Directory (AD), Azure Active Directory or HR systems like Workday. ++ * Alert strategy for accounts created outside of approved sources. Is there a controlled list of organizations your organization collaborates with? ++ * Provisioning of guest accounts and alert parameters for accounts created outside of entitlement management or other normal processes. ++ * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved user administrator. ++ * Monitoring and alert strategy for accounts missing standard attributes, such as employee ID or not following organizational naming conventions. ++ * Strategy, principles, and process for account deletion and retention. ++* **On-premises user accounts** ΓÇô evaluate the following for accounts synced with Azure AD Connect: ++ * The forests, domains, and organizational units (OUs) in scope for synchronization. Who are the approved administrators who can change these settings and how often is the scope checked? ++ * The types of accounts that are synchronized. For example, user accounts and or service accounts. ++ * The process for creating privileged on-premises accounts and how the synchronization of this type of account is controlled. 
++ * The process for creating on-premises user accounts and how the synchronization of this type of account is managed. ++For more information for securing and monitoring on-premises accounts, see [Protecting Microsoft 365 from on-premises attacks](protect-m365-from-on-premises-attacks.md). ++* **Cloud user accounts** ΓÇô evaluate the following: ++ * The process to provision and manage cloud accounts directly in Azure AD. ++ * The process to determine the types of users provisioned as Azure AD cloud accounts. For example, do you only allow privileged accounts or do you also allow user accounts? ++ * The process to create and maintain a list of trusted individuals and or processes expected to create and manage cloud user accounts. ++ * The process to create and maintained an alert strategy for non-approved cloud-based accounts. ++## Where to look ++The log files you use for investigation and monitoring are: ++* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) ++* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md) ++* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview) ++* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault) ++* [Risky Users log](../identity-protection/howto-identity-protection-investigate-risk.md) ++* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md) ++From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting: ++* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities. ++* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community. ++* **[Azure Monitor](../../azure-monitor/overview.md)** ΓÇô enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources. ++* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. ++* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance. ++* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise. ++Much of what you will monitor and alert on are the effects of your Conditional Access policies. 
You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. ++ The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools. ++## Account creation ++Anomalous account creation can indicate a security issue. Short lived accounts, accounts not following naming standards, and accounts created outside of normal processes should be investigated. ++### Short-lived accounts ++Account creation and deletion outside of normal identity management processes should be monitored in Azure AD. Short-lived accounts are accounts created and deleted in a short time span. This type of account creation and quick deletion could mean a bad actor is trying to avoid detection by creating accounts, using them, and then deleting the account. ++Short-lived account patterns might indicate non-approved people or processes might have the right to create and delete accounts that fall outside of established processes and policies. This type of behavior removes visible markers from the directory. ++If the data trail for account creation and deletion is not discovered quickly, the information required to investigate an incident may no longer exist. For example, accounts might be deleted and then purged from the recycle bin. Audit logs are retained for 30 days. However, you can export your logs to Azure Monitor or a security information and event management (SIEM) solution for longer term retention. ++|What to monitor|Risk Level|Where|Filter/sub-filter|Notes| +|||||| +| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) | +| Accounts created and deleted by non-approved users or processes. | Medium| Azure AD Audit logs | Initiated by (actor) ΓÇô USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actors are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) | +| Accounts from non-approved sources. 
| Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry isn't from an approved domain or is a known blocked domain, configure to send an alert.<br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Accountcreatedfromnon-approvedsources.yaml) | +| Accounts assigned to a privileged role.| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++Both privileged and non-privileged accounts should be monitored and alerted. However, since privileged accounts have administrative permissions, they should have higher priority in your monitor, alert, and respond processes. ++### Accounts not following naming policies ++User accounts not following naming policies might have been created outside of organizational policies. ++A best practice is to have a naming policy for user objects. Having a naming policy makes management easier and helps provide consistency. The policy can also help discover when users have been created outside of approved processes. A bad actor might not be aware of your naming standards and might make it easier to detect an account provisioned outside of your organizational processes. ++Organizations tend to have specific formats and attributes that are used for creating user and or privileged accounts. For example: ++* Admin account UPN = ADM_firstname.lastname@tenant.onmicrosoft.com ++* User account UPN = Firstname.Lastname@contoso.com ++Frequently, user accounts have an attribute that identifies a real user. For example, EMPID = XXXNNN. Use the following suggestions to help define normal for your organization, and when defining a baseline for log entries when accounts don't follow your naming convention: ++* Accounts that don't follow the naming convention. For example, `nnnnnnn@contoso.com` versus `firstname.lastname@contoso.com`. ++* Accounts that don't have the standard attributes populated or aren't in the correct format. For example, not having a valid employee ID. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| User accounts that don't have expected attributes defined.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/Useraccountcreatedwithoutexpectedattributesdefined.yaml) | +| User accounts created using incorrect naming format.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that does not follow your naming policy. 
<br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAccountCreatedUsingIncorrectNamingFormat.yaml) | +| Privileged accounts that don't follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where sign-in name does not match your organizations format. For example, ADM_ as a prefix. | +| Privileged accounts that don't follow naming policy.| High| Azure AD directory| [List Azure AD role assignments](../roles/view-assignments.md)| List roles assignments for Azure AD roles alert where UPN doesn't match your organizations format. For example, ADM_ as a prefix. | ++For more information on parsing, see: ++* Azure AD Audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md) ++* Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md) ++* Azure Active Directory - [List Azure AD role assignments](../roles/view-assignments.md) ++### Accounts created outside normal processes ++Having standard processes to create users and privileged accounts is important so that you can securely control the lifecycle of identities. If users are provisioned and deprovisioned outside of established processes, it can introduce security risks. Operating outside of established processes can also introduce identity management problems. Potential risks include: ++* User and privileged accounts might not be governed to adhere to organizational policies. This can lead to a wider attack surface on accounts that aren't managed correctly. ++* It becomes harder to detect when bad actors create accounts for malicious purposes. By having valid accounts created outside of established procedures, it becomes harder to detect when accounts are created, or permissions modified for malicious purposes. ++We recommend that user and privileged accounts only be created following your organization policies. For example, an account should be created with the correct naming standards, organizational information and under scope of the appropriate identity governance. Organizations should have rigorous controls for who has the rights to create, manage, and delete identities. Roles to create these accounts should be tightly managed and the rights only available after following an established workflow to approve and obtain these permissions. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - | - | - | - | - | +| User accounts created or deleted by non-approved users or processes.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>and-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Initiated by (actor) = USER PRINCIPAL NAME| Alert on accounts created by non-approved users or processes. Prioritize accounts created with heightened privileges.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) | +| User accounts created or deleted from non-approved sources.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Target(s) = USER PRINCIPAL NAME| Alert when the domain is non-approved or known blocked domain. 
| ++## Unusual sign-ins ++Seeing failures for user authentication is normal. But seeing patterns or blocks of failures can be an indicator that something is happening with a user's Identity. For example, during Password spray or Brute Force attacks, or when a user account is compromised. It's critical that you monitor and alert when patterns emerge. This helps ensure you can protect the user and your organization's data. ++Success appears to say all is well. But it can mean that a bad actor has successfully accessed a service. Monitoring successful logins helps you detect user accounts that are gaining access but aren't user accounts that should have access. User authentication successes are normal entries in Azure AD Sign-Ins logs. We recommend you monitor and alert to detect when patterns emerge. This helps ensure you can protect user accounts and your organization's data. ++As you design and operationalize a log monitoring and alerting strategy, consider the tools available to you through the Azure portal. Identity Protection enables you to automate the detection, protection, and remediation of identity-based risks. Identity protection uses intelligence-fed machine learning and heuristic systems to detect risk and assign a risk score for users and sign-ins. Customers can configure policies based on a risk level for when to allow or deny access or allow the user to securely self-remediate from a risk. The following Identity Protection risk detections inform risk levels today: ++| What to monitor | Risk Level | Where | Filter/sub-filter | Notes | +| - | - | - | - | - | +| Leaked credentials user risk detection| High| Azure AD Risk Detection logs| UX: Leaked credentials <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Azure AD Threat Intelligence user risk detection| High| Azure AD Risk Detection logs| UX: Azure AD threat intelligence <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Anonymous IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Atypical travel sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Atypical travel <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Anomalous Token| Varies| Azure AD Risk Detection logs| UX: Anomalous Token <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Malware linked IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Malware linked IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? 
Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Suspicious browser sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious browser <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Unfamiliar sign-in properties sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Unfamiliar sign-in properties <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Malicious IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Malicious IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Suspicious inbox manipulation rules sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious inbox manipulation rules<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Password Spray sign-in risk detection| High| Azure AD Risk Detection logs| UX: Password spray<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Impossible travel sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Impossible travel<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| New country/region sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: New country/region<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Activity from anonymous IP address sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Activity from Anonymous IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Suspicious inbox forwarding sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious inbox forwarding<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | +| Azure AD threat intelligence sign-in risk detection| High| Azure AD Risk Detection logs| UX: Azure AD threat intelligence<br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | ++For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md). 
++### What to look for ++Configure monitoring on the data within the Azure AD Sign-ins Logs to ensure that alerting occurs and adheres to your organization's security policies. Some examples of this are: ++* **Failed Authentications**: As humans we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, Password Spray normally preys on easier passwords against many accounts, while Brute Force attempts many passwords against targeted accounts. ++* **Interrupted Authentications**: An Interrupt in Azure AD represents an injection of a process to satisfy authentication, such as when enforcing a control in a CA policy. This is a normal event and can happen when applications aren't configured correctly. But when you see many interrupts for a user account it could indicate something is happening with that account. ++ * For example, if you filtered on a user in Sign-in logs and see a large volume of sign in status = Interrupted and Conditional Access = Failure. Digging deeper it may show in authentication details that the password is correct, but that strong authentication is required. This could mean the user isn't completing multi-factor authentication (MFA) which could indicate the user's password is compromised and the bad actor is unable to fulfill MFA. ++* **Smart lock-out**: Azure AD provides a smart lock-out service which introduces the concept of familiar and non-familiar locations to the authentication process. A user account visiting a familiar location might authenticate successfully while a bad actor unfamiliar with the same location is blocked after several attempts. Look for accounts that have been locked out and investigate further. ++* **IP changes**: It is normal to see users originating from different IP addresses. However, Zero Trust states never trust and always verify. Seeing a large volume of IP addresses and failed sign-ins can be an indicator of intrusion. Look for a pattern of many failed authentications taking place from multiple IP addresses. Note, virtual private network (VPN) connections can cause false positives. Regardless of the challenges, we recommend you monitor for IP address changes and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks. ++* **Locations**: Generally, you expect a user account to be in the same geographical location. You also expect sign-ins from locations where you have employees or business relations. When the user account comes from a different international location in less time than it would take to travel there, it can indicate the user account is being abused. Note, VPNs can cause false positives, we recommend you monitor for user accounts signing in from geographically distant locations and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks. ++For this risk area, we recommend you monitor standard user accounts and privileged accounts but prioritize investigations of privileged accounts. Privileged accounts are the most important accounts in any Azure AD tenant. For specific guidance for privileged accounts, see Security operations ΓÇô privileged accounts. ++### How to detect ++You use Azure Identity Protection and the Azure AD sign-in logs to help discover threats indicated by unusual sign-in characteristics. 
Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine: ++* the parameters you consider normal for your user base. ++* the average number of tries of a password over a time before the user calls the service desk or performs a self-service password reset. ++* how many failed attempts you want to allow before alerting, and if it will be different for user accounts and privileged accounts. ++* how many MFA attempts you want to allow before alerting, and if it will be different for user accounts and privileged accounts. ++* if legacy authentication is enabled and your roadmap for discontinuing usage. ++* the known egress IP addresses are for your organization. ++* the countries/regions your users operate from. ++* whether there are groups of users that remain stationary within a network location or country/region. ++* Identify any other indicators for unusual sign-ins that are specific to your organization. For example days or times of the week or year that your organization doesn't operate. ++After you scope what normal is for the accounts in your environment, consider the following list to help determine scenarios you want to monitor and alert on, and to fine-tune your alerting. ++* Do you need to monitor and alert if Identity Protection is configured? ++* Are there stricter conditions applied to privileged accounts that you can use to monitor and alert on? For example, requiring privileged accounts only be used from trusted IP addresses. ++* Are the baselines you set too aggressive? Having too many alerts might result in alerts being ignored or missed. ++Configure Identity Protection to help ensure protection is in place that supports your security baseline policies. For example, blocking users if risk = high. This risk level indicates with a high degree of confidence that a user account is compromised. For more information on setting up sign in risk policies and user risk policies, visit [Identity Protection policies](../identity-protection/concept-identity-protection-policies.md). For more information on setting up conditional access, visit [Conditional Access: Sign-in risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md). ++The following are listed in order of importance based on the effect and severity of the entries. ++### Monitoring external user sign ins ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. 
Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) +|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| ++### Monitoring for failed unusual sign ins ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++The following are listed in order of importance based on the effect and severity of the entries. 
++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)| +| Failed authentications from countries/regions you don't operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. That is, the failure total for today is >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) | +| Authentication occurring at times and days of the week when countries/regions don't conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) | +| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. 
Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++### Monitoring for successful unusual sign ins ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - |- |- |- |- | +| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma ruless](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Discover privileged accounts not registered for MFA.| High| Azure Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. 
| +| Successful authentications from countries/regions your organization doesn't operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country/region\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 ΓÇô Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++We recommend you periodically review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. In addition, review for successful authentication increases or at unexpected times, based on the location. ++| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | +| - | - |- |- |- | +| Authentications to MBI and HBI application using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Authentications at days and times of the week or year that countries/regions do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries/regions do not conduct normal business operations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | +| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. 
That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | ++## Next steps ++See these security operations guide articles: ++[Azure AD security operations overview](security-operations-introduction.md) ++[Security operations for consumer accounts](security-operations-consumer-accounts.md) ++[Security operations for privileged accounts](security-operations-privileged-accounts.md) ++[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) ++[Security operations for applications](security-operations-applications.md) ++[Security operations for devices](security-operations-devices.md) ++[Security operations for infrastructure](security-operations-infrastructure.md) |
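As a companion to the sign-in monitoring guidance above, here's a minimal Microsoft Graph PowerShell sketch that looks for the failed sign-in error codes called out earlier (50126 for invalid credentials, 50053 for smart lockout). The module, permission scope, one-day window, and ten-failure threshold are illustrative assumptions rather than values from this article; tune them to the baseline you defined.

```powershell
# Requires the Microsoft.Graph.Reports module and the AuditLog.Read.All permission.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

# Look at the last 24 hours of sign-ins.
$since = (Get-Date).ToUniversalTime().AddDays(-1).ToString('yyyy-MM-ddTHH:mm:ssZ')
$signIns = Get-MgAuditLogSignIn -Filter "createdDateTime ge $since" -All

# 50126 = invalid username or password, 50053 = account locked by smart lockout.
$failures = $signIns | Where-Object { $_.Status.ErrorCode -in @(50126, 50053) }

# Flag users above an example threshold of 10 failures; adjust to your baseline.
$failures |
    Group-Object -Property UserPrincipalName |
    Where-Object { $_.Count -ge 10 } |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Name, Count
```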
active-directory | Service Accounts Computer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-computer.md | + + Title: Secure on-premises computer accounts with Active Directory +description: A guide to help secure on-premises computer accounts, or LocalSystem accounts, with Active Directory +++++++ Last updated : 02/03/2023+++++++# Secure on-premises computer accounts with Active Directory ++A computer account, or LocalSystem account, is highly privileged with access to almost all resources on the local computer. The account isn't associated with signed-on user accounts. Services run as LocalSystem access network resources by presenting the computer credentials to remote servers in the format `<domain_name>\\<computer_name>$`. The computer account predefined name is `NT AUTHORITY\SYSTEM`. You can start a service and provide security context for that service. ++ ![Screenshot of a list of local services on a computer account.](./media/govern-service-accounts/secure-computer-accounts-image-1.png) ++## Benefits of using a computer account ++A computer account has the following benefits: ++* **Unrestricted local access** - the computer account provides complete access to the machine's local resources +* **Automatic password management** - removes the need for manually changed passwords. The account is a member of Active Directory, and its password is changed automatically. With a computer account, there's no need to register the service principal name. +* **Limited access rights off-machine** - the default access-control list in Active Directory Domain Services (AD DS) permits minimal access to computer accounts. During access by an unauthorized user, the service has limited access to network resources. ++## Computer account security-posture assessment ++Use the following table to review potential computer-account issues and mitigations. + +| Computer-account issue | Mitigation | +| - | - | +| Computer accounts are subject to deletion and re-creation when the computer leaves and rejoins the domain. | Confirm the requirement to add a computer to an Active Directory group. To verify computer accounts added to a group, use the scripts in the following section.| +| If you add a computer account to a group, services that run as LocalSystem on that computer get group access rights.| Be selective about computer-account group memberships. Don't make a computer account a member of a domain administrator group. The associated service has complete access to AD DS. | +| Inaccurate network defaults for LocalSystem. | Don't assume the computer account has the default limited access to network resources. Instead, confirm group memberships for the account. | +| Unknown services that run as LocalSystem. | Ensure services that run under the LocalSystem account are Microsoft services, or trusted services. 
| ++## Find services and computer accounts ++To find services that run under the computer account, use the following PowerShell cmdlet: ++```powershell +Get-WmiObject win32_service | select Name, StartName | Where-Object {($_.StartName -eq "LocalSystem")} +``` ++To find computer accounts that are members of a specific group, run the following PowerShell cmdlet: ++```powershell +Get-ADComputer -Filter {Name -Like "*"} -Properties MemberOf | Where-Object {[STRING]$_.MemberOf -like "Your_Group_Name_here*"} | Select Name, MemberOf +``` ++To find computer accounts that are members of identity administrators groups (domain administrators, enterprise administrators, and administrators), run the following PowerShell cmdlet: ++```powershell +Get-ADGroupMember -Identity Administrators -Recursive | Where objectClass -eq "computer" +``` ++## Computer account recommendations ++> [!IMPORTANT] +> Computer accounts are highly privileged, therefore use them if your service requires unrestricted access to local resources, on the machine, and you can't use a managed service account (MSA). ++* Confirm the service owner's service runs with an MSA +* Use a group managed service account (gMSA), or a standalone managed service account (sMSA), if your service supports it +* Use a domain user account with the permissions needed to run the service ++## Next steps ++To learn more about securing service accounts, see the following articles: ++* [Securing on-premises service accounts](service-accounts-on-premises.md) +* [Secure group managed service accounts](service-accounts-group-managed.md) +* [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) +* [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
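In the same spirit as the scripts above, this short sketch reviews a single computer account instead of enumerating a group. The computer name `HR-WEB01$` is a hypothetical example; substitute the sAMAccountName of the machine you're reviewing.

```powershell
# List every group a specific computer account belongs to.
# 'HR-WEB01$' is a hypothetical sAMAccountName; replace it with your computer account.
Get-ADComputer -Identity 'HR-WEB01$' -Properties MemberOf |
    Select-Object -ExpandProperty MemberOf
```

An empty result is typical. Any membership listed grants its access rights to services running as LocalSystem on that computer, so review each one.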
active-directory | Service Accounts Govern On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-govern-on-premises.md | + + Title: Govern on-premises service accounts +description: Learn to create and run an account lifecycle process for on-premises service accounts +++++++ Last updated : 02/10/2023+++++++# Govern on-premises service accounts ++Active Directory offers four types of on-premises service accounts: ++* Group-managed service accounts (gMSAs) + * [Secure group managed service accounts](service-accounts-group-managed.md) +* Standalone managed service accounts (sMSAs) + * [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* On-premises computer accounts + * [Secure on-premises computer accounts with Active Directory](service-accounts-computer.md) +* User accounts functioning as service accounts + * [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) ++Part of service account governance includes: ++* Protecting them, based on requirements and purpose +* Managing account lifecycle, and their credentials +* Assessing service accounts, based on risk and permissions +* Ensuring Active Directory (AD) and Azure Active Directory (Azure AD) have no unused service accounts, with permissions ++## New service account principles ++When you create service accounts, consider the information in the following table. ++| Principle| Consideration | +| - |- | +| Service account mapping| Connect the service account to a service, application, or script | +| Ownership| Ensure there's an account owner who requests and assumes responsibility | +| Scope| Define the scope, and anticipate usage duration| +| Purpose| Create service accounts for one purpose | +| Permissions | Apply the principle of least permission:</br> - Don't assign permissions to built-in groups, such as administrators</br> - Remove local machine permissions, where feasible</br> - Tailor access, and use AD delegation for directory access</br> - Use granular access permissions</br> - Set account expiration and location restrictions on user-based service accounts | +| Monitor and audit use| - Monitor sign-in data, and ensure it matches the intended usage</br> - Set alerts for anomalous usage | ++### User account restrictions ++For user accounts used as service accounts, apply the following settings: ++* **Account expiration** - set the service account to automatically expire, after its review period, unless the account can continue +* **LogonWorkstations** - restrict service account sign-in permissions + * If it runs locally and accesses resources on the machine, restrict it from signing in elsewhere +* **Can't change password** - set the parameter to **true** to prevent the service account from changing its own password + +## Lifecycle management process ++To help maintain service account security, manage them from inception to decommission. Use the following process: ++1. Collect account usage information. +2. Move the service account and app to the configuration management database (CMDB). +3. Perform risk assessment or a formal review. +4. Create the service account and apply restrictions. +5. Schedule and perform recurring reviews. +6. Adjust permissions and scopes as needed. +7. Deprovision the account. ++### Collect service account usage information ++Collect relevant information for each service account. The following table lists the minimum information to collect. 
Obtain what's needed to validate each account. ++| Data| Description | +| - | - | +| Owner| The user or group accountable for the service account | +| Purpose| The purpose of the service account | +| Permissions (scopes)| The expected permissions | +| CMDB links| The cross-link service account with the target script or application, and owners | +| Risk| The results of a security risk assessment | +| Lifetime| The anticipated maximum lifetime to schedule account expiration or recertification | ++Make the account request self-service, and require the relevant information. The owner is an application or business owner, an IT team member, or an infrastructure owner. You can use Microsoft Forms for requests and associated information. If the account is approved, use Microsoft Forms to port it to a configuration management databases (CMDB) inventory tool. ++### Service accounts and CMDB ++Store the collected information in a CMDB application. Include dependencies on infrastructure, apps, and processes. Use this central repository to: ++* Assess risk +* Configure the service account with restrictions +* Ascertain functional and security dependencies +* Conduct regular reviews for security and continued need +* Contact the owner to review, retire, and change the service account ++#### Example HR scenario + +An example is a service account that runs a website with permissions to connect to Human Resources SQL databases. The information in the service account CMDB, including examples, is in the following table: ++|Data | Example| +| - | - | +| Owner, Deputy| Name, Name | +| Purpose| Run the HR webpage and connect to HR databases. Impersonate end users when accessing databases. | +| Permissions, scopes| HR-WEBServer: sign in locally; run web page<br>HR-SQL1: sign in locally; read permissions on HR databases<br>HR-SQL2: sign in locally; read permissions on Salary database only | +| Cost center| 123456 | +| Risk assessed| Medium; Business Impact: Medium; private information; Medium | +| Account restrictions| Sign in to: only aforementioned servers; Can't change password; MBI-Password Policy; | +| Lifetime| Unrestricted | +| Review cycle| Biannually: By owner, security team, or privacy team | ++### Service account risk assessments or formal reviews ++If your account is compromised by an unauthorized source, assess the risks to associated applications, services, and infrastructure. Consider direct and indirect risks: ++* Resources an unauthorized user can gain access to + * Other information or systems the service account can access +* Permissions the account can grant + * Indications or signals when permissions change ++After the risk assessment, documentation likely shows that risks affect account: + +* Restrictions +* Lifetime +* Review requirements + * Cadence and reviewers ++### Create a service account and apply account restrictions ++> [!NOTE] +> Create a service account after the risk assessment, and document the findings in a CMDB. Align account restrictions with risk assessment findings. + +Consider the following restrictions, although some might not be relevant to your assessment. 
++* For user accounts used as service accounts, define a realistic end date + * Use the **Account Expires** flag to set the date + * Learn more: [Set-ADAccountExpiration](/powershell/module/activedirectory/set-adaccountexpiration) +* See, [Set-ADUser (Active Directory)](/powershell/module/activedirectory/set-aduser) +* Password policy requirements + * See, [Password and account lockout policies on Azure AD Domain Services managed domains](../../active-directory-domain-services/password-policy.md) +* Create accounts in an organizational unit location that ensures only some users will manage it + * See, [Delegating Administration of Account OUs and Resource OUs](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous) +* Set up and collect auditing that detects service account changes: + * See, [Audit Directory Service Changes](/windows/security/threat-protection/auditing/audit-directory-service-changes), and + * Go to manageengine.com for [How to audit Kerberos authentication events in AD](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html) +* Grant account access more securely before it goes into production ++### Service account reviews + +Schedule regular service account reviews, especially those classified Medium and High Risk. Reviews can include: ++* Owner attestation of the need for the account, with justification of permissions and scopes +* Privacy and security team reviews that include upstream and downstream dependencies +* Audit data review + * Ensure the account is used for its stated purpose ++### Deprovision service accounts ++Deprovision service accounts at the following junctures: ++* Retirement of the script or application for which the service account was created +* Retirement of the script or application function, for which the service account was used +* Replacement of the service account for another ++To deprovision: + +1. Remove permissions and monitoring. +2. Examine sign-ins and resource access of related service accounts to ensure no potential effect on them. +3. Prevent account sign-in. +4. Ensure the account is no longer needed (there's no complaint). +5. Create a business policy that determines the amount of time that accounts are disabled. +6. Delete the service account. ++ * **MSAs** - see, [Uninstall-ADServiceAccount](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true) + * Use PowerShell, or delete it manually from the managed service account container + * **Computer or user accounts** - manually delete the account from Active Directory ++## Next steps ++To learn more about securing service accounts, see the following articles: ++* [Securing on-premises service accounts](service-accounts-on-premises.md) +* [Secure group managed service accounts](service-accounts-group-managed.md) +* [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* [Secure on-premises computer accounts with AD](service-accounts-computer.md) +* [Secure user-based service accounts in AD](service-accounts-user-on-premises.md) |
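Tying the preceding restrictions together, the following sketch applies the expiration, sign-in workstation, and password settings to a user account that functions as a service account. The account name `svc-hr-web` is hypothetical, the `HR-WEBServer` host name reuses the CMDB example above, and the six-month expiration only illustrates the review-period idea.

```powershell
# Restrict a user account that functions as a service account (hypothetical names).
$account = 'svc-hr-web'

# Expire the account at the end of its review period unless it's recertified.
Set-ADAccountExpiration -Identity $account -DateTime (Get-Date).AddMonths(6)

# Limit where the account can sign in, and stop it from changing its own password.
Set-ADUser -Identity $account -LogonWorkstations 'HR-WEBServer' -CannotChangePassword $true
```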
active-directory | Service Accounts Group Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-group-managed.md | + + Title: Secure group managed service accounts +description: A guide to securing group managed service accounts (gMSAs) ++++++ Last updated : 02/09/2023+++++++# Secure group managed service accounts ++Group managed service accounts (gMSAs) are domain accounts to help secure services. gMSAs can run on one server, or in a server farm, such as systems behind a network load balancing or Internet Information Services (IIS) server. After you configure your services to use a gMSA principal, account password management is handled by the Windows operating system (OS). ++## Benefits of gMSAs ++gMSAs are an identity solution with greater security that help reduce administrative overhead: ++* **Set strong passwords** - 240-byte, randomly generated passwords: the complexity and length of gMSA passwords minimizes the likelihood of compromise by brute force or dictionary attacks +* **Cycle passwords regularly** - password management goes to the Windows OS, which changes the password every 30 days. Service and domain administrators don't need to schedule password changes, or manage service outages. +* **Support deployment to server farms** - deploy gMSAs to multiple servers to support load balanced solutions where multiple hosts run the same service +* **Support simplified service principal name (SPN) management** - set up an SPN with PowerShell, when you create an account. + * In addition, services that support automatic SPN registrations might do so against the gMSA, if the gMSA permissions are set correctly. ++## Using gMSAs ++Use gMSAs as the account type for on-premises services unless a service, such as failover clustering, doesn't support it. ++> [!IMPORTANT] +> Test your service with gMSAs before it goes to production. Set up a test environment to ensure the application uses the gMSA, then accesses resources. For more information, see [Support for group managed service accounts](/system-center/scom/support-group-managed-service-accounts?view=sc-om-2022&preserve-view=true). ++If a service doesn't support gMSAs, you can use a standalone managed service account (sMSA). An sMSA has the same functionality, but is intended for deployment on a single server. ++If you can't use a gMSA or sMSA supported by your service, configure the service to run as a standard user account. Service and domain administrators are required to observe strong password management processes to help keep the account secure. ++## Assess gMSA security posture ++gMSAs are more secure than standard user accounts, which require ongoing password management. However, consider gMSA scope of access in relation to security posture. Potential security issues and mitigations for using gMSAs are shown in the following table: ++| Security issue| Mitigation | +| - | - | +| gMSA is a member of privileged groups | - Review your group memberships. Create a PowerShell script to enumerate group memberships. Filter the resultant CSV file by gMSA file names</br> - Remove the gMSA from privileged groups</br> - Grant the gMSA rights and permissions it requires to run its service. See your service vendor. 
+| gMSA has read/write access to sensitive resources | - Audit access to sensitive resources</br> - Archive audit logs to a SIEM, such as Azure Log Analytics or Microsoft Sentinel</br> - Remove unnecessary resource permissions if there's an unnecessary access level | +++## Find gMSAs ++Your organization might have gMSAs. To retrieve these accounts, run the following PowerShell cmdlets: ++```powershell +Get-ADServiceAccount +Install-ADServiceAccount +New-ADServiceAccount +Remove-ADServiceAccount +Set-ADServiceAccount +Test-ADServiceAccount +Uninstall-ADServiceAccount +``` ++### Managed Service Accounts container + +To work effectively, gMSAs must be in the Managed Service Accounts container. + +![Screenshot of a gMSA in the Managed Service Accounts container.](./media/govern-service-accounts/secure-gmsa-image-1.png) ++To find service MSAs not in the list, run the following commands: ++```powershell ++Get-ADServiceAccount -Filter * ++# This PowerShell cmdlet returns managed service accounts (gMSAs and sMSAs). Differentiate by examining the ObjectClass attribute on returned accounts. ++# For gMSA accounts, ObjectClass = msDS-GroupManagedServiceAccount ++# For sMSA accounts, ObjectClass = msDS-ManagedServiceAccount ++# To filter results to only gMSAs: ++Get-ADServiceAccount -Filter * | where-object {$_.ObjectClass -eq "msDS-GroupManagedServiceAccount"} +``` ++## Manage gMSAs ++To manage gMSAs, use the following Active Directory PowerShell cmdlets: ++`Get-ADServiceAccount` ++`Install-ADServiceAccount` ++`New-ADServiceAccount` ++`Remove-ADServiceAccount` ++`Set-ADServiceAccount` ++`Test-ADServiceAccount` ++`Uninstall-ADServiceAccount` ++> [!NOTE] +> In Windows Server 2012 and later versions, the *-ADServiceAccount cmdlets work with gMSAs. Learn more: [Get started with group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). ++## Move to a gMSA ++gMSAs are a secure service account type for on-premises services. It's recommended you use gMSAs if possible. In addition, consider moving your services to Azure and your service accounts to Azure Active Directory. ++ > [!NOTE] + > Before you configure your service to use the gMSA, see [Get started with group managed service accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)). + +To move to a gMSA: ++1. Ensure the Key Distribution Service (KDS) root key is deployed in the forest. This is a one-time operation. See, [Create the Key Distribution Services KDS Root Key](/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key). +2. Create a new gMSA. See, [Getting Started with Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). +3. Install the new gMSA on hosts that run the service. +4. Change your service identity to gMSA. +5. Specify a blank password. +6. Validate your service is working under the new gMSA identity. +7. Delete the old service account identity. 
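The numbered steps above map to a handful of Active Directory PowerShell cmdlets. The following is a minimal sketch of steps 1 through 3, assuming the ActiveDirectory RSAT module and sufficient rights; the account name, DNS host name, and host group are illustrative placeholders, not values from this article.

```powershell
# Sketch of steps 1-3 above. Assumes the ActiveDirectory module is installed and you
# have rights to create service accounts. Names below are placeholders.

# Step 1: Verify the KDS root key exists; create it once per forest if it doesn't.
if (-not (Get-KdsRootKey)) {
    # Allow up to 10 hours for replication before using the key in production.
    Add-KdsRootKey -EffectiveImmediately
}

# Step 2: Create the gMSA and restrict which hosts can retrieve its password.
New-ADServiceAccount -Name 'svc-HRWeb' `
    -DNSHostName 'svc-HRWeb.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'HR-WebServers'

# Step 3: On each host that runs the service, install and verify the account.
Install-ADServiceAccount -Identity 'svc-HRWeb'
Test-ADServiceAccount -Identity 'svc-HRWeb'   # returns True when the host can use the gMSA

# Steps 4-5: In the service configuration, set the identity to CONTOSO\svc-HRWeb$
# and leave the password blank; Windows manages the password.
```

Limiting -PrincipalsAllowedToRetrieveManagedPassword to a security group of host computer accounts keeps the password retrievable only on the servers where the service actually runs.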
++## Next steps ++To learn more about securing service accounts, see the following articles: ++* [Introduction to on-premises service accounts](service-accounts-on-premises.md) +* [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* [Secure computer accounts with Active Directory](service-accounts-computer.md) +* [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) +* [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
active-directory | Service Accounts Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md | + + Title: Securing managed identities in Azure Active Directory +description: Learn to find, assess, and increase the security of managed identities in Azure AD +++++++ Last updated : 02/07/2023+++++++# Securing managed identities in Azure Active Directory ++In this article, learn about managing secrets and credentials to secure communication between services. Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD). Applications use managed identities to connect to resources that support Azure AD authentication, and to obtain Azure AD tokens, without credentials management. ++## Benefits of managed identities ++Benefits of using managed identities: ++* With managed identities, credentials are fully managed, rotated, and protected by Azure. Identities are provided and deleted with Azure resources. Managed identities enable Azure resources to communicate with services that support Azure AD authentication. ++* No one, including the Global Administrator, has access to the credentials, which can't be accidentally leaked by being included in code. ++## Using managed identities ++Managed identities are best for communications among services that support Azure AD authentication. A source system requests access to a target service. Any Azure resource can be a source system. For example, an Azure virtual machine (VM), Azure Function instance, and Azure App Services instances support managed identities. ++Learn more in the video, [What can a managed identity be used for?](https://www.youtube.com/embed/5lqayO_oeEo) ++### Authentication and authorization ++With managed identities, the source system obtains a token from Azure AD without owner credential management. Azure manages the credentials. Tokens obtained by the source system are presented to the target system for authentication. ++The target system authenticates and authorizes the source system to allow access. If the target service supports Azure AD authentication, it accepts an access token issued by Azure AD. ++Azure has a control plane and a data plane. You create resources in the control plane, and access them in the data plane. For example, you create an Azure Cosmos DB database in the control plane, but query it in the data plane. ++After the target system accepts the token for authentication, it supports mechanisms for authorization for its control plane and data plane. ++Azure control plane operations are managed by Azure Resource Manager and use Azure role-based access control (Azure RBAC). In the data plane, target systems have authorization mechanisms. Azure Storage supports Azure RBAC on the data plane. For example, applications using Azure App Services can read data from Azure Storage, and applications using Azure Kubernetes Service can read secrets stored in Azure Key Vault. 
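To make the token flow above concrete, here's a minimal sketch of a source system requesting a token from inside an Azure VM by calling the Azure Instance Metadata Service (IMDS) endpoint; the target resource URI (Azure Resource Manager) is only an example.

```powershell
# Sketch: from inside an Azure VM with a managed identity, request an Azure AD token
# for Azure Resource Manager from the Instance Metadata Service (IMDS) endpoint.
$tokenResponse = Invoke-RestMethod -Method GET -Headers @{ Metadata = 'true' } `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/'

# Present the access token to the target service as a bearer token.
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
```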
++Learn more: +* [What is Azure Resource Manager?](../../azure-resource-manager/management/overview.md) +* [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) +* [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) +* [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md) ++## System-assigned and user-assigned managed identities ++There are two types of managed identities: system-assigned and user-assigned. ++System-assigned managed identity: ++* One-to-one relationship with the Azure resource + * For example, there's a unique managed identity associated with each VM +* Tied to the Azure resource lifecycle. When the resource is deleted, the managed identity associated with it is automatically deleted. +* This action eliminates the risk from orphaned accounts ++User-assigned managed identity: ++* The lifecycle is independent from an Azure resource. You manage the lifecycle. + * When the Azure resource is deleted, the user-assigned managed identity isn't automatically deleted +* Assign a user-assigned managed identity to zero or more Azure resources +* Create an identity ahead of time, and then assign it to a resource later ++## Find managed identity service principals in Azure AD ++To find managed identities, you can use: ++* Enterprise applications page in the Azure portal +* Microsoft Graph ++### The Azure portal ++1. In the Azure portal, in the left navigation, select **Azure Active Directory**. +2. In the left navigation, select **Enterprise applications**. +3. In the **Application type** column, under **Value**, select the down-arrow to select **Managed Identities**. ++ ![Screenshot of the Managed Identities option under Values, in the Application type column.](./media/govern-service-accounts/service-accounts-managed-identities.png) ++### Microsoft Graph ++Use the following GET request to Microsoft Graph to get a list of managed identities in your tenant. ++`https://graph.microsoft.com/v1.0/servicePrincipals?$filter=(servicePrincipalType eq 'ManagedIdentity')` ++You can filter these requests. For more information, see [GET servicePrincipal](/graph/api/serviceprincipal-get?view=graph-rest-1.0&tabs=http&preserve-view=true). ++## Assess managed identity security ++To assess managed identity security: ++* Examine privileges to ensure the least-privileged model is selected + * Use the following PowerShell cmdlet to get the permissions assigned to your managed identities: ++ `Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId }` ++* Ensure the managed identity is not part of a privileged group, such as an administrators group. + * To enumerate the members of your highly privileged groups with PowerShell: ++ `Get-AzureADGroupMember -ObjectId <String> [-All <Boolean>] [-Top <Int32>] [<CommonParameters>]` ++* Confirm what resources the managed identity accesses + * See, [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md). ++## Move to managed identities ++If you're using a service principal or an Azure AD user account, evaluate the use of managed identities. You can eliminate the need to protect, rotate, and manage credentials. 
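As a starting point for that evaluation, the following hedged sketch shows one way to enable a system-assigned managed identity on an existing VM with the Az PowerShell module; the resource group and VM names are placeholders.

```powershell
# Sketch: enable a system-assigned managed identity on an existing VM (Az PowerShell module).
# Resource group and VM names are placeholders.
Connect-AzAccount
$vm = Get-AzVM -ResourceGroupName 'rg-hr-prod' -Name 'vm-hr-web01'
Update-AzVM -ResourceGroupName 'rg-hr-prod' -VM $vm -IdentityType SystemAssigned
```

After the identity is enabled, grant it only the Azure RBAC roles the workload needs, then remove the stored credentials the workload used previously.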
++## Next steps ++* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) +* [Configure managed identities for Azure resources on a VM using the Azure portal](../managed-identities-azure-resources/qs-configure-portal-windows-vm.md) ++**Service accounts** ++* [Securing cloud-based service accounts](secure-service-accounts.md) +* [Securing service principals](service-accounts-principal.md) +* [Governing Azure AD service accounts](govern-service-accounts.md) +* [Securing on-premises service accounts](service-accounts-on-premises.md) |
active-directory | Service Accounts On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-on-premises.md | + + Title: Introduction to Active Directory service accounts +description: An introduction to the types of service accounts in Active Directory, and how to secure them. +++++++ Last updated : 08/26/2022++++++# Securing on-premises service accounts ++A service has a primary security identity that determines the access rights for local and network resources. The security context for a Microsoft Win32 service is determined by the service account that's used to start the service. You use a service account to: +* Identify and authenticate a service. +* Successfully start a service. +* Access or execute code or an application. +* Start a process. ++## Types of on-premises service accounts ++Depending on your use case, you can use a managed service account (MSA), a computer account, or a user account to run a service. You must first test a service to confirm that it can use a managed service account. If the service can use an MSA, you should use one. ++### Group managed service accounts ++For services that run in your on-premises environment, use [group managed service accounts (gMSAs)](service-accounts-group-managed.md) whenever possible. gMSAs provide a single identity solution for services that run on a server farm or behind a network load balancer. gMSAs can also be used for services that run on a single server. For information about the requirements for gMSAs, see [Get started with group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). ++### Standalone managed service accounts ++If you can't use a gMSA, use a [standalone managed service account (sMSA)](service-accounts-standalone-managed.md). sMSAs require at least Windows Server 2008 R2. Unlike gMSAs, sMSAs run on only one server. They can be used for multiple services on that server. ++### Computer accounts ++If you can't use an MSA, consider using a [computer account](service-accounts-computer.md). The LocalSystem account is a predefined local account that has extensive permissions on the local computer and acts as the computer identity on the network. ++Services that run as a LocalSystem account access network resources by using the credentials of the computer account in the format <domain_name>\\<computer_name>. Its predefined name is NT AUTHORITY\SYSTEM. You can use it to start a service and provide a security context for that service. ++> [!NOTE] +> When you use a computer account, you can't determine which service on the computer is using that account. Consequently, you can't audit which service is making changes. ++### User accounts ++If you can't use an MSA, consider using a [user account](service-accounts-user-on-premises.md). A user account can be a *domain* user account or a *local* user account. ++A domain user account enables the service to take full advantage of the service security features of Windows and Microsoft Active Directory Domain Services. The service will have local and network permissions granted to the account. It will also have the permissions of any groups of which the account is a member. Domain service accounts support Kerberos mutual authentication. ++A local user account (name format: *.\UserName*) exists only in the Security Account Manager database of the host computer. It doesn't have a user object in Active Directory Domain Services. 
A local account can't be authenticated by the domain. So, a service that runs in the security context of a local user account doesn't have access to network resources (except as an anonymous user). Services that run in the local user context can't support Kerberos mutual authentication in which the service is authenticated by its clients. For these reasons, local user accounts are ordinarily inappropriate for directory-enabled services. ++> [!IMPORTANT] +> Service accounts shouldn't be members of any privileged groups, because privileged group membership confers permissions that might be a security risk. Each service should have its own service account for auditing and security purposes. ++## Choose the right type of service account ++| Criterion| gMSA| sMSA| Computer account| User account | +| - | - | - | - | - | +| App runs on a single server| Yes| Yes. Use a gMSA if possible.| Yes. Use an MSA if possible.| Yes. Use an MSA if possible. | +| App runs on multiple servers| Yes| No| No. Account is tied to the server.| Yes. Use an MSA if possible. | +| App runs behind a load balancer| Yes| No| No| Yes. Use only if you can't use a gMSA. | +| App runs on Windows Server 2008 R2| No| Yes| Yes. Use an MSA if possible.| Yes. Use an MSA if possible. | +| App runs on Windows Server 2012| Yes| Yes. Use a gMSA if possible.| Yes. Use an MSA if possible.| Yes. Use an MSA if possible. | +| Requirement to restrict service account to single server| No| Yes| Yes. Use an sMSA if possible.| No | ++### Use server logs and PowerShell to investigate ++You can use server logs to determine which servers, and how many servers, an application is running on. ++To get a listing of the Windows Server version for all servers on your network, you can run the following PowerShell command: ++```PowerShell ++Get-ADComputer -Filter 'operatingsystem -like "*server*" -and enabled -eq "true"' ` ++-Properties Name,Operatingsystem,OperatingSystemVersion,IPv4Address | ++sort-Object -Property Operatingsystem | ++Select-Object -Property Name,Operatingsystem,OperatingSystemVersion,IPv4Address | ++Out-GridView ++``` ++## Find on-premises service accounts ++We recommend that you add a prefix such as "svc-" to all accounts that you use as service accounts. This naming convention will make the accounts easier to find and manage. Also consider using a description attribute for the service account and the owner of the service account. The description can be a team alias or security team owner. ++Finding on-premises service accounts is key to ensuring their security. Doing so can be difficult for non-MSA accounts. We recommend that you review all the accounts that have access to your important on-premises resources, and that you determine which computer or user accounts might be acting as service accounts. ++To learn how to find a service account, see the article about that account type in the ["Next steps" section](#next-steps). ++## Document service accounts ++After you've found the service accounts in your on-premises environment, document the following information: ++* **Owner**: The person accountable for maintaining the account. ++* **Purpose**: The application the account represents, or other purpose. ++* **Permission scopes**: The permissions it has or should have, and any groups it's a member of. ++* **Risk profile**: The risk to your business if this account is compromised. If the risk is high, use an MSA. 
++* **Anticipated lifetime and periodic attestation**: How long you anticipate that this account will be live, and how often the owner should review and attest to its ongoing need. ++* **Password security**: For user and local computer accounts, where the password is stored. Ensure that passwords are kept secure, and document who has access. Consider using [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to secure stored passwords. ++## Next steps ++To learn more about securing service accounts, see the following articles: ++* [Secure group managed service accounts](service-accounts-group-managed.md) +* [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* [Secure computer accounts](service-accounts-computer.md) +* [Secure user accounts](service-accounts-user-on-premises.md) +* [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
active-directory | Service Accounts Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md | + + Title: Securing service principals in Azure Active Directory +description: Find, assess, and secure service principals. +++++++ Last updated : 02/08/2023+++++++# Securing service principals in Azure Active Directory ++An Azure Active Directory (Azure AD) service principal is the local representation of an application object in a tenant or directory. It's the identity of the application instance. Service principals define application access and resources the application accesses. A service principal is created in each tenant where the application is used and references the globally unique application object. The tenant secures the service principal sign-in and access to resources. ++Learn more: [Application and service principal objects in Azure AD](../develop/app-objects-and-service-principals.md) ++## Tenant-service principal relationships ++A single-tenant application has one service principal in its home tenant. A multi-tenant web application or API requires a service principal in each tenant. A service principal is created when a user from that tenant consents to use of the application or API. This consent creates a one-to-many relationship between the multi-tenant application and its associated service principals. ++A multi-tenant application is homed in a tenant and has instances in other tenants. Most software-as-a-service (SaaS) applications accommodate multi-tenancy. Use service principals to ensure the needed security posture for the application, and its users, in single- and multi-tenant scenarios. ++## ApplicationID and ObjectID ++An application instance has two properties: the ApplicationID (or ClientID) and the ObjectID. ++> [!NOTE] +> The terms **application** and **service principal** are used interchangeably when referring to an application in authentication tasks. However, they are two representations of applications in Azure AD. + The ApplicationID represents the global application and is the same for application instances across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps to identify an application instance in Azure AD. ++To learn more, see [Application and service principal relationship in Azure AD](../develop/app-objects-and-service-principals.md) ++### Create an application and its service principal object ++You can create an application and its service principal object (ObjectID) in a tenant using: ++* Azure PowerShell +* Azure command-line interface (Azure CLI) +* Microsoft Graph +* The Azure portal +* Other tools ++![Screenshot of Application or Client ID and Object ID on the New App page.](./media/govern-service-accounts/secure-principal-image-1.png) ++## Service principal authentication ++There are two mechanisms for authentication when using service principals: client certificates and client secrets. ++![Screenshot of Certificates and Client secrets under New App, Certificates and secrets.](./media/govern-service-accounts/secure-principal-certificates.png) ++Because certificates are more secure, it's recommended you use them when possible. Unlike client secrets, client certificates can't accidentally be embedded in code. 
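Building on that recommendation, the following sketch is one possible way to flag service principals that still have client secrets (password credentials) but no certificate (key) credentials, using the AzureAD PowerShell module cmdlets referenced elsewhere in this article; it's illustrative, not the article's prescribed method.

```powershell
# Sketch: list service principals that have client secrets (password credentials)
# but no certificate (key) credentials, using the AzureAD PowerShell module.
Connect-AzureAD
Get-AzureADServicePrincipal -All:$true | ForEach-Object {
    $secrets = Get-AzureADServicePrincipalPasswordCredential -ObjectId $_.ObjectId
    $certs   = Get-AzureADServicePrincipalKeyCredential -ObjectId $_.ObjectId
    if ($secrets -and -not $certs) {
        # Output candidates to migrate from client secrets to certificates.
        [pscustomobject]@{
            DisplayName = $_.DisplayName
            AppId       = $_.AppId
            SecretCount = @($secrets).Count
        }
    }
}
```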
When possible, use Azure Key Vault for certificate and secrets management to encrypt assets with keys protected by hardware security modules: ++* Authentication keys +* Storage account keys +* Data encryption keys +* .pfx files +* Passwords ++For more information on Azure Key Vault and how to use it for certificate and secret management, see: ++* [About Azure Key Vault](../../key-vault/general/overview.md) +* [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md) ++### Challenges and mitigations + +When using service principals, use the following table to match challenges and mitigations. ++| Challenge| Mitigation| +| - | - | +| Access reviews for service principals assigned to privileged roles| This functionality is in preview | +| Service principal access reviews| Manual check of resource access control list using the Azure portal | +| Over-permissioned service principals| When you create automation service accounts or service principals, grant only the permissions required for the task. Evaluate service principals to reduce privileges. | +|Identify modifications to service principal credentials or authentication methods | - See, [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) </br> - See the Tech Community blog post, [Azure AD workbook to help you assess Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)| ++## Find accounts using service principals ++To find accounts that use service principals, run the following Azure CLI or PowerShell commands. ++* Azure CLI - `az ad sp list` +* PowerShell - `Get-AzureADServicePrincipal -All:$true` ++For more information, see [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) ++## Assess service principal security ++To assess the security, evaluate privileges and credential storage. Use the following table to help mitigate challenges: ++|Challenge | Mitigation| +| - | - | +| Detect the user who consented to a multi-tenant app, and detect illicit consent grants to a multi-tenant app | - Run the following PowerShell to find multi-tenant apps <br>`Get-AzureADServicePrincipal -All:$true \| ? {$_.Tags -eq "WindowsAzureActiveDirectoryIntegratedApp"}`</br> - Disable user consent </br> - Allow user consent from verified publishers, for selected permissions (recommended) </br> - Configure them in the user context </br> - Use their tokens to trigger the service principal| +|Use of a hard-coded shared secret in a script using a service principal|Use a certificate| +|Tracking who uses the certificate or the secret| Monitor the service principal sign-ins using the Azure AD sign-in logs| +|Can't manage service principal sign-in with Conditional Access| Monitor the sign-ins using the Azure AD sign-in logs| +| Contributor is the default Azure role-based access control (Azure RBAC) role|Evaluate needs and apply the least possible permissions| ++Learn more: [What is Conditional Access?](../conditional-access/overview.md) ++## Move from a user account to a service principal ++If you're using an Azure user account as a service principal, evaluate if you can move to a managed identity or a service principal. If you can't use a managed identity, grant a service principal enough permissions and scope to run the required tasks. You can create a service principal by registering an application or with PowerShell. ++When using Microsoft Graph, check the API documentation. 
Ensure the permission type for application is supported. </br>See, [Create servicePrincipal](/graph/api/serviceprincipal-post-serviceprincipals?view=graph-rest-1.0&tabs=http&preserve-view=true) ++Learn more: ++* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet) +* [Create an Azure AD application and service principal that can access resources](../develop/howto-create-service-principal-portal.md) +* [Use Azure PowerShell to create a service principal with a certificate](../develop/howto-authenticate-service-principal-powershell.md) ++## Next steps ++Learn more about service principals: ++* [Create an Azure AD application and service principal that can access resources](../develop/howto-create-service-principal-portal.md) +* [Sign-in logs in Azure AD](../reports-monitoring/concept-sign-ins.md) ++Secure service accounts: ++* [Securing cloud-based service accounts](secure-service-accounts.md) +* [Securing managed identities in Azure AD](service-accounts-managed-identities.md) +* [Governing Azure AD service accounts](govern-service-accounts.md) +* [Securing on-premises service accounts](service-accounts-on-premises.md) ++Conditional Access: ++Use Conditional Access to block service principals from untrusted locations. ++See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy) |
active-directory | Service Accounts Standalone Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-standalone-managed.md | + + Title: Secure standalone managed service accounts +description: Learn when to use, how to assess, and how to secure standalone managed service accounts (sMSAs) +++++++ Last updated : 02/08/2023+++++++# Secure standalone managed service accounts ++Standalone managed service accounts (sMSAs) are managed domain accounts that help secure services running on a server. They can't be reused across multiple servers. sMSAs have automatic password management, simplified service principal name (SPN) management, and delegated management to administrators. ++In Active Directory (AD), sMSAs are tied to a server that runs a service. You can find accounts in the Active Directory Users and Computers snap-in in Microsoft Management Console. ++ ![Screenshot of a service name and type under Active Directory Users and Computers.](./media/govern-service-accounts/secure-standalone-msa-image-1.png) ++> [!NOTE] +> Managed service accounts were introduced in the Windows Server 2008 R2 Active Directory Schema, and they require Windows Server 2008 R2 or a later version. ++## sMSA benefits ++sMSAs have greater security than user accounts used as service accounts. They help reduce administrative overhead: ++* Set strong passwords - sMSAs use 240-byte, randomly generated complex passwords + * The complexity minimizes the likelihood of compromise by brute force or dictionary attacks +* Cycle passwords regularly - Windows changes the sMSA password every 30 days. + * Service and domain administrators don't need to schedule password changes or manage the associated downtime +* Simplify SPN management - SPNs are updated if the domain functional level is Windows Server 2008 R2. The SPN is updated when you: + * Rename the host computer account + * Change the host computer domain name server (DNS) name + * Use PowerShell to add or remove other sam-accountname or dns-hostname parameters + * See, [Set-ADServiceAccount](/powershell/module/activedirectory/set-adserviceaccount) ++## Using sMSAs ++Use sMSAs to simplify management and security tasks. sMSAs are useful when services are deployed to a server and you can't use a group managed service account (gMSA). ++> [!NOTE] +> You can use sMSAs for more than one service, but it's recommended that each service has an identity for auditing. ++If the software creator can't tell you if the application uses an MSA, test the application. Create a test environment and ensure it accesses required resources. ++Learn more: [Managed Service Accounts: Understanding, Implementing, Best Practices, and Troubleshooting](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting) ++### Assess sMSA security posture ++Consider the sMSA scope of access as part of the security posture. 
To mitigate potential security issues, see the following table: ++| Security issue| Mitigation | +| - | - | +| sMSA is a member of privileged groups | - Remove the sMSA from elevated privileged groups, such as Domain Admins</br> - Use the least-privileged model </br> - Grant the sMSA rights and permissions to run its services</br> - If you're unsure about permissions, consult the service creator| +| sMSA has read/write access to sensitive resources | - Audit access to sensitive resources</br> - Archive audit logs to a security information and event management (SIEM) program, such as Azure Log Analytics or Microsoft Sentinel </br> - Remediate resource permissions if an undesirable access is detected | +| By default, the sMSA password rollover frequency is 30 days | Use group policy to tune the duration, depending on enterprise security requirements. To set the password expiration duration, go to:<br>Computer Configuration>Policies>Windows Settings>Security Settings>Security Options. For domain member, use **Maximum machine account password age**. | ++### sMSA challenges + +Use the following table to associate challenges with mitigations. ++| Challenge| Mitigation | +| - | - | +| sMSAs are on a single server | Use a gMSA to use the account across servers | +| sMSAs can't be used across domains | Use a gMSA to use the account across domains | +| Not all applications support sMSAs| Use a gMSA, if possible. Otherwise, use a standard user account or a computer account, as recommended by the creator| ++## Find sMSAs ++On a domain controller, run DSA.msc, and then expand the managed service accounts container to view all sMSAs. ++To return all sMSAs and gMSAs in the Active Directory domain, run the following PowerShell command: ++`Get-ADServiceAccount -Filter *` ++To return sMSAs in the Active Directory domain, run the following command: ++`Get-ADServiceAccount -Filter * | where { $_.objectClass -eq "msDS-ManagedServiceAccount" }` ++## Manage sMSAs ++To manage your sMSAs, you can use the following AD PowerShell cmdlets: ++`Get-ADServiceAccount` +`Install-ADServiceAccount` +`New-ADServiceAccount` +`Remove-ADServiceAccount` +`Set-ADServiceAccount` +`Test-ADServiceAccount` +`Uninstall-ADServiceAccount` ++## Move to sMSAs ++If an application service supports sMSAs, but not gMSAs, and you're using a user account or computer account for the security context, see</br> +[Managed Service Accounts: Understanding, Implementing, Best Practices, and Troubleshooting](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting). ++If possible, move resources to Azure and use Azure managed identities, or service principals. ++## Next steps ++To learn more about securing service accounts, see: ++* [Securing on-premises service accounts](service-accounts-on-premises.md) +* [Secure group managed service accounts](service-accounts-group-managed.md) +* [Secure on-premises computer accounts with AD](service-accounts-computer.md) +* [Secure user-based service accounts in AD](service-accounts-user-on-premises.md) +* [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
active-directory | Service Accounts User On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-user-on-premises.md | + + Title: Secure user-based service accounts in Active Directory +description: Learn how to locate, assess, and mitigate security issues for user-based service accounts +++++++ Last updated : 02/09/2023+++++++# Secure user-based service accounts in Active Directory ++On-premises user accounts were the traditional approach to help secure services running on Windows. Today, use these accounts if group managed service accounts (gMSAs) and standalone managed service accounts (sMSAs) aren't supported by your service. For information about the account type to use, see [Securing on-premises service accounts](service-accounts-on-premises.md). ++You can investigate moving your service to an Azure service account, such as a managed identity or a service principal. ++Learn more: ++* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) +* [Securing service principals in Azure Active Directory](service-accounts-principal.md) ++You can create on-premises user accounts to provide security for services and permissions the accounts use to access local and network resources. On-premises user accounts require manual password management, like other Active Directory (AD) user accounts. Service and domain administrators are required to maintain strong password management processes to help keep accounts secure. ++When you create a user account as a service account, use it for one service. Use a naming convention that clarifies it's a service account, and the service it's related to. ++## Benefits and challenges ++On-premises user accounts are a versatile account type. User accounts used as service accounts are controlled by policies governing user accounts. Use them if you can't use an MSA. Evaluate whether a computer account is a better option. ++The challenges of on-premises user accounts are summarized in the following table: ++| Challenge | Mitigation | +| - | - | +| Password management is manual and leads to weaker security and service downtime| - Ensure regular password complexity and that changes are governed by a process that maintains strong passwords</br> - Coordinate password changes with a password update on the service, which helps reduce service downtime| +| Identifying on-premises user accounts that are service accounts can be difficult | - Document service accounts deployed in your environment</br> - Track the account name and the resources they can access</br> - Consider adding the prefix svc to user accounts used as service accounts | ++## Find on-premises user accounts used as service accounts ++On-premises user accounts are like other AD user accounts. It can be difficult to find the accounts, because no user account attribute identifies them as service accounts. We recommend you create a naming convention for user accounts used as service accounts. For example, add the prefix svc to a service name: svc-HRDataConnector. ++Use some of the following criteria to find service accounts. 
However, this approach might not find accounts: ++* Trusted for delegation +* With service principal names +* With passwords that never expire ++To find the on-premises user accounts used for services, run the following PowerShell commands: ++To find accounts trusted for delegation: ++```PowerShell ++Get-ADObject -Filter {(msDS-AllowedToDelegateTo -like '*') -or (UserAccountControl -band 0x0080000) -or (UserAccountControl -band 0x1000000)} -prop samAccountName,msDS-AllowedToDelegateTo,servicePrincipalName,userAccountControl | select DistinguishedName,ObjectClass,samAccountName,servicePrincipalName, @{name='DelegationStatus';expression={if($_.UserAccountControl -band 0x80000){'AllServices'}else{'SpecificServices'}}}, @{name='AllowedProtocols';expression={if($_.UserAccountControl -band 0x1000000){'Any'}else{'Kerberos'}}}, @{name='DestinationServices';expression={$_.'msDS-AllowedToDelegateTo'}} ++``` ++To find accounts with service principal names: ++```PowerShell ++Get-ADUser -Filter * -Properties servicePrincipalName | where {$_.servicePrincipalName -ne $null} ++``` ++To find accounts with passwords that never expire: ++```PowerShell ++Get-ADUser -Filter * -Properties PasswordNeverExpires | where {$_.PasswordNeverExpires -eq $true} ++``` ++You can audit access to sensitive resources, and archive audit logs to a security information and event management (SIEM) system. By using Azure Log Analytics or Microsoft Sentinel, you can search for and analyze service accounts. ++## Assess on-premises user account security ++Use the following criteria to assess the security of on-premises user accounts used as service accounts: ++* Password management policy +* Accounts with membership in privileged groups +* Read/write permissions for important resources ++### Mitigate potential security issues ++See the following table for potential on-premises user account security issues and their mitigations: ++| Security issue | Mitigation | +| - | - | +| Password management| - Ensure password complexity and password change are governed by regular updates and strong password requirements</br> - Coordinate password changes with a password update to minimize service downtime | +| The account is a member of privileged groups| - Review group membership</br> - Remove the account from privileged groups</br> - Grant the account rights and permissions to run its service (consult with service vendor)</br> - For example, deny sign-in locally or interactive sign-in| +| The account has read/write permissions to sensitive resources| - Audit access to sensitive resources</br> - Archive audit logs to a SIEM: Azure Log Analytics or Microsoft Sentinel</br> - Remediate resource permissions if you detect undesirable access levels | ++## Secure account types ++Microsoft doesn't recommend use of on-premises user accounts as service accounts. For services that use this account type, assess if it can be configured to use a gMSA or an sMSA. In addition, evaluate if you can move the service to Azure to enable use of safer account types. ++## Next steps ++To learn more about securing service accounts: ++* [Securing on-premises service accounts](service-accounts-on-premises.md) +* [Secure group managed service accounts](service-accounts-group-managed.md) +* [Secure standalone managed service accounts](service-accounts-standalone-managed.md) +* [Secure on-premises computer accounts with AD](service-accounts-computer.md) +* [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
active-directory | Sync Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/sync-directory.md | + + Title: Directory synchronization with Azure Active Directory +description: Architectural guidance on achieving directory synchronization with Azure Active Directory. +++++++ Last updated : 03/01/2023++++++# Directory synchronization ++Many organizations have a hybrid infrastructure that encompasses both on-premises and cloud components. Synchronizing users' identities between local and cloud directories lets users access resources with a single set of credentials. ++Synchronization is the process of ++* creating an object based on certain conditions, +* keeping the object updated, and +* removing the object when conditions are no longer met. ++On-premises provisioning involves provisioning from on-premises sources (such as Active Directory) to Azure Active Directory (Azure AD). ++## When to use directory synchronization ++Use directory synchronization when you need to synchronize identity data from your on premises Active Directory environments to Azure AD as illustrated in the following diagram. ++![architectural diagram](./media/authentication-patterns/dir-sync-auth.png) ++## System components ++* **Azure AD**: Synchronizes identity information from organization's on premises directory via Azure AD Connect. +* **Azure AD Connect**: A tool for connecting on premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help you to deploy and configure prerequisites and components required for the connection (including sync and sign on from Active Directories to Azure AD). +* **Active Directory**: Active Directory is a directory service that is included in most Windows Server operating systems. Servers that run Active Directory Domain Services (AD DS) are called domain controllers. They authenticate and authorize all users and computers in the domain. ++Microsoft designed [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md) to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. Azure AD Connect cloud sync uses the Azure AD cloud provisioning agent instead of the Azure AD Connect application. ++## Implement directory synchronization with Azure AD ++Explore the following resources to learn more about directory synchronization with Azure AD. ++* [What is identity provisioning with Azure AD?](../cloud-sync/what-is-provisioning.md)Provisioning is the process of creating an object based on certain conditions, keeping the object up-to-date and deleting the object when conditions are no longer met. On-premises provisioning involves provisioning from on premises sources (like Active Directory) to Azure AD. +* [Hybrid Identity: Directory integration tools comparison](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Azure AD Connect sync and Azure AD Connect cloud provisioning. +* [Azure AD Connect and Azure AD Connect Health installation roadmap](../hybrid/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps. ++## Next steps ++* [What is hybrid identity with Azure Active Directory?](../../active-directory/hybrid/whatis-hybrid-identity.md) Microsoft's identity solutions span on-premises and cloud-based capabilities. Hybrid identity solutions create a common user identity for authentication and authorization to all resources, regardless of location. 
+* [Install the Azure AD Connect provisioning agent](../cloud-sync/how-to-install.md) walks you through the installation process for the Azure Active Directory (Azure AD) Connect provisioning agent and how to initially configure it in the Azure portal. +* [Azure AD Connect cloud sync new agent configuration](../cloud-sync/how-to-configure.md) guides you through configuring Azure AD Connect cloud sync. +* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning. |
active-directory | Sync Ldap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/sync-ldap.md | + + Title: LDAP synchronization with Azure Active Directory +description: Architectural guidance on achieving LDAP synchronization with Azure Active Directory. +++++++ Last updated : 03/01/2023++++++# LDAP synchronization with Azure Active Directory ++Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs on the TCP/IP stack. It provides a mechanism that you can use to connect to, search, and modify internet directories. Based on a client-server model, the LDAP directory service enables access to an existing directory. ++Many companies depend on on-premises LDAP servers to store users and groups for their critical business apps. ++Azure Active Directory (Azure AD) can replace LDAP synchronization with Azure AD Connect. The Azure AD Connect synchronization service performs all operations related to synchronizing identity data between your on-premises environments and Azure AD. ++## When to use LDAP synchronization ++Use LDAP synchronization when you need to synchronize identity data between your on-premises LDAP v3 directories and Azure AD as illustrated in the following diagram. ++![architectural diagram](./media/authentication-patterns/ldap-sync.png) ++## System components ++* **Azure AD**: Azure AD synchronizes identity information (users, groups) from the organization's on-premises LDAP directories via Azure AD Connect. +* **Azure AD Connect**: A tool for connecting on-premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help to deploy and configure prerequisites and components required for the connection. +* **Custom Connector**: A Generic LDAP Connector enables you to integrate the Azure AD Connect synchronization service with an LDAP v3 server. It sits on Azure AD Connect. +* **Active Directory**: Active Directory is a directory service included in most Windows Server operating systems. Servers that run Active Directory Domain Services, referred to as domain controllers, authenticate and authorize all users and computers in a Windows domain. +* **LDAP v3 server**: An LDAP protocol-compliant directory storing corporate users and passwords used for directory services authentication. ++## Implement LDAP synchronization with Azure AD ++Explore the following resources to learn more about LDAP synchronization with Azure AD. ++* [Hybrid Identity: Directory integration tools comparison](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Azure AD Connect sync and Azure AD Connect cloud provisioning. +* [Azure AD Connect and Azure AD Connect Health installation roadmap](../hybrid/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps. +* The [Generic LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap) enables you to integrate the synchronization service with an LDAP v3 server. ++ > [!NOTE] + > Deploying the LDAP Connector requires an advanced configuration. Microsoft provides this connector with limited support. Configuring this connector requires familiarity with Microsoft Identity Manager and the specific LDAP directory. + > + > When you deploy this configuration in a production environment, collaborate with a partner such as Microsoft Consulting Services for help, guidance, and support. 
++## Next steps ++* [What is hybrid identity with Azure Active Directory?](../../active-directory/hybrid/whatis-hybrid-identity.md) Microsoft's identity solutions span on-premises and cloud-based capabilities. Hybrid identity solutions create a common user identity for authentication and authorization to all resources, regardless of location. +* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning. |
active-directory | Sync Scim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/sync-scim.md | + + Title: SCIM synchronization with Azure Active Directory +description: Architectural guidance on achieving SCIM synchronization with Azure Active Directory. ++++++++ Last updated : 01/10/2023+++++++# SCIM synchronization with Azure Active Directory ++System for Cross-Domain Identity Management (SCIM) is an open standard protocol for automating the exchange of user identity information between identity domains and IT systems. SCIM ensures that employees added to the Human Capital Management (HCM) system automatically have accounts created in Azure Active Directory (Azure AD) or Windows Server Active Directory. User attributes and profiles are synchronized between the two systems, updating or removing users based on user status or role changes. ++SCIM is a standardized definition of two endpoints: a /Users endpoint and a /Groups endpoint. It uses common REST verbs to create, update, and delete objects. It also uses a pre-defined schema for common attributes like group name, username, first name, last name, and email. Applications that offer a SCIM 2.0 REST API can reduce or eliminate the pain of working with proprietary user management APIs or products. For example, any SCIM-compliant client can make an HTTP POST of a JSON object to the /Users endpoint to create a new user entry. Instead of needing a slightly different API for the same basic actions, apps that conform to the SCIM standard can instantly take advantage of pre-existing clients, tools, and code. ++## Use when ++You want to automatically provision user information from an HCM system to Azure AD and Windows Server Active Directory, and then to target systems if necessary. ++![architectural diagram](./media/authentication-patterns/scim-auth.png) +++## Components of system ++* **HCM system**: Applications and technologies that enable Human Capital Management processes and practices that support and automate HR processes throughout the employee lifecycle. ++* **Azure AD Provisioning Service**: Uses the SCIM 2.0 protocol for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses the SCIM user object schema and REST APIs to automate provisioning and de-provisioning of users and groups. ++* **Azure AD**: User repository used to manage the lifecycle of identities and their entitlements. ++* **Target system**: Application or system that has a SCIM endpoint and works with Azure AD provisioning to enable automatic provisioning of users and groups. ++## Implement SCIM with Azure AD ++* [How provisioning works in Azure AD ](../app-provisioning/how-provisioning-works.md) ++* [Managing user account provisioning for enterprise apps in the Azure portal ](../app-provisioning/configure-automatic-user-provisioning-portal.md) ++* [Build a SCIM endpoint and configure user provisioning with Azure AD ](../app-provisioning/use-scim-to-provision-users-and-groups.md) ++* [SCIM 2.0 protocol compliance of the Azure AD Provisioning Service](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md) |
active-directory | Concepts Azure Multi Factor Authentication Prompts Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md | The following table summarizes the recommendations based on licenses: | | Azure AD Free and Microsoft 365 apps | Azure AD Premium | ||--||-| **SSO** | [Azure AD join](../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../devices/concept-azure-ad-join-hybrid.md), or [Seamless SSO](../hybrid/how-to-connect-sso.md) for unmanaged devices. | Azure AD join<br />Hybrid Azure AD join | +| **SSO** | [Azure AD join](../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../devices/concept-hybrid-join.md), or [Seamless SSO](../hybrid/how-to-connect-sso.md) for unmanaged devices. | Azure AD join<br />Hybrid Azure AD join | | **Reauthentication settings** | Remain signed-in | Use Conditional Access policies for sign-in frequency and persistent browser session | ## Next steps |
active-directory | Howto Authentication Passwordless Security Key Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md | This document focuses on enabling FIDO2 security key based passwordless authenti | Compatible [FIDO2 security keys](concept-authentication-passwordless.md#fido2-security-keys) | X | X | | WebAuthN requires Windows 10 version 1903 or higher | X | X | | [Azure AD joined devices](../devices/concept-azure-ad-join.md) require Windows 10 version 1909 or higher | X | |-| [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) require Windows 10 version 2004 or higher | | X | +| [Hybrid Azure AD joined devices](../devices/concept-hybrid-join.md) require Windows 10 version 2004 or higher | | X | | Fully patched Windows Server 2016/2019 Domain Controllers. | | X | | [Azure AD Hybrid Authentication Management module](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement/2.1.1.0) | | X | | [Microsoft Intune](/intune/fundamentals/what-is-intune) (Optional) | X | X | |
active-directory | Howto Authentication Use Email Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md | In the current preview state, the following limitations apply to email as an alt * When a user is signed-in with a non-UPN email, they cannot change their password. Azure AD self-service password reset (SSPR) should work as expected. During SSPR, the user may see their UPN if they verify their identity using a non-UPN email. * **Unsupported scenarios** - The following scenarios are not supported. Sign-in with non-UPN email for:- * [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) + * [Hybrid Azure AD joined devices](../devices/concept-hybrid-join.md) * [Azure AD joined devices](../devices/concept-azure-ad-join.md) * [Azure AD registered devices](../devices/concept-azure-ad-register.md) * [Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md) |
active-directory | Active Directory Acs Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-acs-migration.md | Each Microsoft cloud service that accepts tokens that are issued by Access Contr | Azure Service Bus | [Migrate to shared access signatures](../../service-bus-messaging/service-bus-sas.md) | | Azure Service Bus Relay | [Migrate to shared access signatures](../../azure-relay/relay-migrate-acs-sas.md) | | Azure Managed Cache | [Migrate to Azure Cache for Redis](../../azure-cache-for-redis/cache-faq.yml) |-| Azure DataMarket | [Migrate to the Cognitive Services APIs](https://azure.microsoft.com/services/cognitive-services/) | +| Azure DataMarket | [Migrate to the Azure AI services APIs](https://azure.microsoft.com/services/cognitive-services/) | | BizTalk Services | [Migrate to the Logic Apps feature of Azure App Service](https://azure.microsoft.com/services/cognitive-services/) | | Azure Media Services | [Migrate to Azure AD authentication](https://azure.microsoft.com/blog/azure-media-service-aad-auth-and-acs-deprecation/) | | Azure Backup | [Upgrade the Azure Backup agent](../../backup/backup-azure-file-folder-backup-faq.yml) | |
active-directory | Concept Conditional Access Grant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md | Administrators can choose to enforce one or more controls when granting access. - [Require multifactor authentication (Azure AD Multifactor Authentication)](../authentication/concept-mfa-howitworks.md) - [Require authentication strength](#require-authentication-strength) - [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started)-- [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md)+- [Require hybrid Azure AD joined device](../devices/concept-hybrid-join.md) - [Require approved client app](app-based-conditional-access.md) - [Require app protection policy](app-protection-based-conditional-access.md) - [Require password change](#require-password-change) |
active-directory | Concept Continuous Access Evaluation Strict Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-strict-enforcement.md | After enabling policies requiring strict location enforcement on a subset of tes If administrators don't perform this validation, their users may be negatively impacted. If traffic to Azure AD or a CAE supported resource is through a shared or undefinable egress IP, don't enable strict location enforcement in your Conditional Access policies. -### Step 3 - Identify IP addresses that should be added to your named locations +### Step 3 - Use the CAE Workbook to identify IP addresses that should be added to your named locations -If the filter search of **IP address (seen by resource)** in the Azure AD Sign-in logs isn't empty, you might have a split-tunnel network configuration. To ensure your users aren't accidentally locked out by policies requiring strict location enforcement, administrators should: +If you haven't already, create a new Azure Workbook using the public template "Continuous Access Evaluation Insights" to identify any mismatch between the IP address seen by Azure AD and the **IP address (seen by resource)**. In this case, you might have a split-tunnel network configuration. To ensure your users aren't accidentally locked out by policies requiring strict location enforcement, administrators should: -- Investigate and identify any IP addresses identified in the Sign-in logs.+- Investigate and identify any IP addresses identified in the CAE Workbook. - Add public IP addresses associated with known organizational egress points to their defined [named locations](location-condition.md#named-locations). - [ ![Screenshot of sign-in logs with an example of IP address seen by resource filter.](./media/concept-continuous-access-evaluation-strict-enforcement/sign-in-logs-ip-address-seen-by-resource.png) ](./media/concept-continuous-access-evaluation-strict-enforcement/sign-in-logs-ip-address-seen-by-resource.png#lightbox) + [ ![Screenshot of cae-workbook with an example of IP address seen by resource filter.](./media/concept-continuous-access-evaluation-strict-enforcement/continuous-access-evaluation-workbook.png) ](./media/concept-continuous-access-evaluation-strict-enforcement/continuous-access-evaluation-workbook.png#lightbox) The following screenshot shows an example of a client's access to a resource being blocked. This block is due to policies requiring CAE strict location enforcement being triggered, revoking the client's session. |
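The step above adds known organizational egress IPs to named locations through the portal. For administrators who script this instead, here is a minimal sketch assuming the Microsoft Graph PowerShell SDK (Microsoft.Graph.Identity.SignIns module) and the standard Conditional Access permission; the display name and CIDR ranges are example values only.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed.
# Connect with a delegated permission that can manage Conditional Access configuration.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Define an IP named location containing known organizational egress ranges (example CIDRs).
$egressLocation = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Known organizational egress IPs"
    isTrusted     = $true
    ipRanges      = @(
        @{ "@odata.type" = "#microsoft.graph.iPv4CidrRange"; cidrAddress = "198.51.100.0/24" },
        @{ "@odata.type" = "#microsoft.graph.iPv4CidrRange"; cidrAddress = "203.0.113.0/24" }
    )
}

# Create the named location so Conditional Access policies can reference it.
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $egressLocation
```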
active-directory | Concept Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md | Continuous access evaluation is available in Azure Government tenants (GCC High ### Key benefits -- User termination or password change/reset: User session revocation will be enforced in near real time.-- Network location change: Conditional Access location policies will be enforced in near real time.+- User termination or password change/reset: User session revocation is enforced in near real time. +- Network location change: Conditional Access location policies are enforced in near real time. - Token export to a machine outside of a trusted network can be prevented with Conditional Access location policies. ## Scenarios This process enables the scenario where users lose access to organizational file ### Client-side claim challenge -Before continuous access evaluation, clients would replay the access token from its cache as long as it hadn't expired. With CAE, we introduce a new case where a resource provider can reject a token when it isn't expired. To inform clients to bypass their cache even though the cached tokens haven't expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token need to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest version of the following applications support claim challenge: +Before continuous access evaluation, clients would replay the access token from its cache as long as it wasn't expired. With CAE, we introduce a new case where a resource provider can reject a token when it isn't expired. To inform clients to bypass their cache even though the cached tokens haven't expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token need to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest versions of the following applications support claim challenge: | | Web | Win32 | iOS | Android | Mac | | : | :: | :: | :: | :: | :: | Before continuous access evaluation, clients would replay the access token from Because risk and policy are evaluated in real time, clients that negotiate continuous access evaluation aware sessions no longer rely on static access token lifetime policies. This change means that the configurable token lifetime policy isn't honored for clients negotiating CAE-aware sessions. -Token lifetime is increased to long lived, up to 28 hours, in CAE sessions. Revocation is driven by critical events and policy evaluation, not just an arbitrary time period. This change increases the stability of applications without affecting security posture. +Token lifetime increases to long-lived, up to 28 hours, in CAE sessions. Critical events and policy evaluation drive revocation, not just an arbitrary time period. This change increases the stability of applications without affecting security posture. -If you aren't using CAE-capable clients, your default access token lifetime will remain 1 hour. The default only changes if you configured your access token lifetime with the [Configurable Token Lifetime (CTL)](../develop/configurable-token-lifetimes.md) preview feature. +If you aren't using CAE-capable clients, your default access token lifetime remains 1 hour. 
The default only changes if you configured your access token lifetime with the [Configurable Token Lifetime (CTL)](../develop/configurable-token-lifetimes.md) preview feature. ## Example flow diagrams If you aren't using CAE-capable clients, your default access token lifetime will 1. A CAE-capable client presents credentials or a refresh token to Azure AD asking for an access token for some resource. 1. An access token is returned along with other artifacts to the client.-1. An Administrator explicitly [revokes all refresh tokens for the user](/powershell/module/microsoft.graph.users.actions/revoke-mgusersign). A revocation event will be sent to the resource provider from Azure AD. +1. An Administrator explicitly [revokes all refresh tokens for the user](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession), then a revocation event is sent to the resource provider from Azure AD. 1. An access token is presented to the resource provider. The resource provider evaluates the validity of the token and checks whether there's any revocation event for the user. The resource provider uses this information to decide to grant access to the resource or not. 1. In this case, the resource provider denies access, and sends a 401+ claim challenge back to the client.-1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD will then reevaluate all the conditions and prompt the user to reauthenticate in this case. +1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD then reevaluates all the conditions and prompts the user to reauthenticate in this case. ### User condition change flow In the following example, a Conditional Access Administrator has configured a lo 1. The client presents an access token to the resource provider from outside of an allowed IP range. 1. The resource provider evaluates the validity of the token and checks the location policy synced from Azure AD. 1. In this case, the resource provider denies access, and sends a 401+ claim challenge back to the client. The client is challenged because it isn't coming from an allowed IP range.-1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD reevaluates all the conditions and will deny access in this case. +1. The CAE-capable client understands the 401+ claim challenge. It bypasses the caches and goes back to step 1, sending its refresh token along with the claim challenge back to Azure AD. Azure AD reevaluates all the conditions and denies access in this case. ++## Exception for IP address variations and how to turn off the exception ++In step 8 above, when Azure AD reevaluates the conditions, it denies access because the new location detected by Azure AD is outside the allowed IP range. This isn't always the case. Due to [some complex network topologies](concept-continuous-access-evaluation.md#ip-address-variation-and-networks-with-ip-address-shared-or-unknown-egress-ips), the authentication request can arrive from an allowed egress IP address even after the access request received by the resource provider arrived from an IP address that isn't allowed.
Under these conditions, Azure AD interprets that the client continues to be in an allowed location and should be granted access. Therefore, Azure AD issues a one-hour token that suspends IP address checks at the resource until token expiration. Azure AD continues to enforce IP address checks. ++Standard vs. Strict mode: granting access under this exception (that is, an allowed location detected by Azure AD alongside a disallowed location detected by the resource provider) protects user productivity by maintaining access to critical resources. This is standard location enforcement. On the other hand, Administrators who operate under stable network topologies and wish to remove this exception can use [Strict Location Enforcement (Public Preview)](concept-continuous-access-evaluation-strict-enforcement.md). ## Enable or disable CAE Customers who have configured CAE settings under Security before have to migrate 1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. 1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**. -1. You'll then see the option to **Migrate** your policy. This action is the only one that you'll have access to at this point. -1. Browse to **Conditional Access** and you'll find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it. +1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point. +1. Browse to **Conditional Access** and you find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it. The following table describes the migration experience of each customer group based on previously configured CAE settings. | Existing CAE Setting | Is Migration Needed | Auto Enabled for CAE | Expected Migration Experience | | | | | |-| New tenants that didn't configure anything in the old experience. | No | Yes | Old CAE setting will be hidden given these customers likely didn't see the experience before general availability. | -| Tenants that explicitly enabled for all users with the old experience. | No | Yes | Old CAE setting will be greyed out. Since these customers explicitly enabled this setting for all users, they don't need to migrate. | -| Tenants that explicitly enabled some users in their tenants with the old experience.| Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, while excluding users and groups copied from CAE. It also sets the new **Customize continuous access evaluation** Session control to **Disabled**. | -| Tenants that explicitly disabled the preview. | Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, and sets the new **Customize continuous access evaluation** Session control to **Disabled**. | +| New tenants that didn't configure anything in the old experience. | No | Yes | Old CAE setting is hidden given these customers likely didn't see the experience before general availability. | +| Tenants that explicitly enabled for all users with the old experience. | No | Yes | Old CAE setting is greyed out.
Since these customers explicitly enabled this setting for all users, they don't need to migrate. | +| Tenants that explicitly enabled some users in their tenants with the old experience.| Yes | No | Old CAE settings are greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, while excluding users and groups copied from CAE. It also sets the new **Customize continuous access evaluation** Session control to **Disabled**. | +| Tenants that explicitly disabled the preview. | Yes | No | Old CAE settings are greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, and sets the new **Customize continuous access evaluation** Session control to **Disabled**. | More information about continuous access evaluation as a session control can be found in the section, [Customize continuous access evaluation](concept-conditional-access-session.md#customize-continuous-access-evaluation). Changes made to Conditional Access policies and group membership made by adminis When Conditional Access policy or group membership changes need to be applied to certain users immediately, you have two options. -- Run the [revoke-mgusersign PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersign) to revoke all refresh tokens of a specified user.-- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies will be applied immediately.+- Run the [Revoke-MgUserSignInSession PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession) to revoke all refresh tokens of a specified user. +- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies are applied immediately. ### IP address variation and networks with IP address shared or unknown egress IPs In addition to IP variations, customers also may employ network solutions and se - Use IP addresses that may be shared with other customers. For example, cloud-based proxy services where egress IP addresses are shared between customers. - Use easily varied or undefinable IP addresses. For example, topologies where there are large, dynamic sets of egress IP addresses used, like large enterprise scenarios or [split VPN](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel) and local egress network traffic. -Networks where egress IP addresses may change frequently or are shared may affect Azure AD Conditional Access and Continuous Access Evaluation (CAE). This variability can affect how these features work and their recommended configurations. Split Tunneling may also cause unexpected blocks when an environment is configured using [Split Tunneling VPN Best Practices](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel). Routing [Optimized IPs](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel#optimize-ip-address-ranges) through a Trusted IP/VPN may be required to prevent blocks related to "insufficient_claims" or "Instant IP Enforcement check failed". +Networks where egress IP addresses may change frequently or are shared may affect Azure AD Conditional Access and Continuous Access Evaluation (CAE). This variability can affect how these features work and their recommended configurations.
Split Tunneling may also cause unexpected blocks when an environment is configured using [Split Tunneling VPN Best Practices](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel). Routing [Optimized IPs](/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel#optimize-ip-address-ranges) through a Trusted IP/VPN may be required to prevent blocks related to *insufficient_claims* or *Instant IP Enforcement check failed*. The following table summarizes Conditional Access and CAE feature behaviors and recommendations for different types of network deployments: The following table summarizes Conditional Access and CAE feature behaviors and | Network Type | Example | IPs seen by Azure AD | IPs seen by RP | Applicable CA Configuration (Trusted Named Location) | CAE enforcement | CAE access token | Recommendations | ||||||||| | 1. Egress IPs are dedicated and enumerable for both Azure AD and all RPs traffic | All network traffic to Azure AD and RPs egresses through 1.1.1.1 and/or 2.2.2.2 | 1.1.1.1 | 2.2.2.2 | 1.1.1.1 <br> 2.2.2.2 | Critical Events <br> IP location Changes | Long lived – up to 28 hours | If CA Named Locations are defined, ensure that they contain all possible egress IPs (seen by Azure AD and all RPs) |-| 2. Egress IPs are dedicated and enumerable for Azure AD, but not for RPs traffic | Network traffic to Azure AD egresses through 1.1.1.1. RP traffic egresses through x.x.x.x | 1.1.1.1 | x.x.x.x | 1.1.1.1 | Critical Events | Default access token lifetime – 1 hour | Do not add non-dedicated or non-enumerable egress IPs (x.x.x.x) into Trusted Named Location Conditional Access rules as it can weaken security | -| 3. Egress IPs are non-dedicated/shared or not enumerable for both Azure AD and RPs traffic | Network traffic to Azure AD egresses through y.y.y.y. RP traffic egresses through x.x.x.x | y.y.y.y | x.x.x.x | N/A - no IP CA policies/Trusted Locations configured | Critical Events | Long lived – up to 28 hours | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x/y.y.y.y) into Trusted Named Location CA rules as it can weaken security | +| 2. Egress IPs are dedicated and enumerable for Azure AD, but not for RPs traffic | Network traffic to Azure AD egresses through 1.1.1.1. RP traffic egresses through x.x.x.x | 1.1.1.1 | x.x.x.x | 1.1.1.1 | Critical Events | Default access token lifetime – 1 hour | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x) into Trusted Named Location Conditional Access rules as it can weaken security | +| 3. Egress IPs are non-dedicated/shared or not enumerable for both Azure AD and RPs traffic | Network traffic to Azure AD egresses through y.y.y.y. RP traffic egresses through x.x.x.x | y.y.y.y | x.x.x.x | N/A - no IP CA policies/Trusted Locations configured | Critical Events | Long lived – up to 28 hours | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x/y.y.y.y) into Trusted Named Location CA rules as it can weaken security | Networks and network services used by clients connecting to identity and resource providers continue to evolve and change in response to modern trends. These changes may affect Conditional Access and CAE configurations that rely on the underlying IP addresses. When deciding on these configurations, factor in future changes in technology and upkeep of the defined list of addresses in your plan. ### Supported location policies -CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges).
CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country/region-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country/region location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD issues a one-hour access token without instant IP enforcement check. +CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country/region-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country/region location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD issues a one-hour access token without instant IP enforcement check. > [!IMPORTANT] > If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country/region location conditions or the trusted ips feature that is available in Azure AD Multifactor Authentication's service settings page. ### Named location limitations -When the sum of all IP ranges specified in location policies exceeds 5,000, user change location flow won't be enforced by CAE in real time. In this case, Azure AD will issue a one-hour CAE token. CAE will continue enforcing [all other events and policies](#critical-event-evaluation) besides client location change events. With this change, you still maintain stronger security posture compared to traditional one-hour tokens, since [other events](#critical-event-evaluation) will be evaluated in near real time. +When the sum of all IP ranges specified in location policies exceeds 5,000, user change location flow isn't enforced by CAE in real time. In this case, Azure AD issues a one-hour CAE token. CAE continues enforcing [all other events and policies](#critical-event-evaluation) besides client location change events. With this change, you still maintain stronger security posture compared to traditional one-hour tokens, since [other events](#critical-event-evaluation) are still evaluated in near real time. ### Office and Web Account Manager settings When multiple users are collaborating on a document at the same time, their acce - Closing the Office app - After 1 hour when a Conditional Access IP policy is set -To further reduce this time, a SharePoint Administrator can reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business, by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions will be reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command "[Set-SPOTenant -IPAddressWACTokenLifetime](/powershell/module/sharepoint-online/set-spotenant)".
+To further reduce this time, a SharePoint Administrator can reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business, by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions is reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command [Set-SPOTenant -IPAddressWACTokenLifetime](/powershell/module/sharepoint-online/set-spotenant). ### Enable after a user is disabled An IP address policy isn't evaluated before push notifications are released. Thi ### Guest users -Guest user accounts aren't supported by CAE. CAE revocation events and IP based Conditional Access policies aren't enforced instantaneously. +CAE doesn't support Guest user accounts. CAE revocation events and IP based Conditional Access policies aren't enforced instantaneously. -### How will CAE work with Sign-in Frequency? +### CAE and Sign-in Frequency -Sign-in Frequency will be honored with or without CAE. +Sign-in Frequency is honored with or without CAE. ## Next steps |
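The CAE entry above references two PowerShell commands: Revoke-MgUserSignInSession to apply policy or membership changes immediately, and Set-SPOTenant -IPAddressWACTokenLifetime to shorten coauthoring session lifetimes. A minimal usage sketch of both follows; the user, admin site URL, and permission scope are illustrative assumptions, not values from the article.

```powershell
# Minimal sketch: revoke a user's refresh tokens so updated Conditional Access policies
# take effect immediately (Microsoft Graph PowerShell, Microsoft.Graph.Users.Actions module).
Connect-MgGraph -Scopes "User.RevokeSessions.All"
Revoke-MgUserSignInSession -UserId "maria.gonzalez@contoso.com"

# Minimal sketch: reduce the maximum coauthoring session lifetime for SharePoint Online and
# OneDrive for Business to 15 minutes (SharePoint Online Management Shell).
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOTenant -IPAddressWACTokenLifetime 15
```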
active-directory | Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/controls.md | +> [!NOTE] +> As Alex Simons mentioned in his blog post [Upcoming changes to Custom Controls](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/upcoming-changes-to-custom-controls/ba-p/1144696): +> +> ...We are planning to replace the current preview with an approach which will allow partner-provided authentication capabilities to work seamlessly with the Azure AD administrator and end user experiences. Today, partner MFA solutions can only function after a password has been entered, don't serve as MFA for step-up authentication on other key scenarios, and don't integrate with end user or administrative credential management functions. The new implementation will allow partner-provided authentication factors to work alongside built-in factors for key scenarios including registration, usage, MFA claims, step-up authentication, reporting, and logging. +> +> The current, limited approach will be supported in preview until the new design is completed, previews, and reaches "General Availability." At that point, we will provide time for customers to migrate to the new implementation. Because of the limitations of the current approach, we will not onboard any new providers until the new capabilities are ready. +> +> We are working closely with customers and providers and will communicate timeline as we get closer... ## Creating custom controls |
active-directory | How To App Protection Policy Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md | Clicking on **Switch Edge profile** opens a window listing their Work or school This process opens a window offering to allow Windows to remember your account and automatically sign you in to your apps and websites. > [!CAUTION]-> You must *UNCHECK* the box **Allow my organization to manage my device**. Leaving this checked enrolls your device in mobile device management (MDM), not mobile application management (MAM). +> You must *CLEAR THE CHECKBOX* **Allow my organization to manage my device**. Leaving this checked enrolls your device in mobile device management (MDM), not mobile application management (MAM). ![Screenshot showing the stay signed in to all your apps window. Uncheck the allow my organization to manage my device checkbox.](./media/how-to-app-protection-policy-windows/stay-signed-in-to-all-your-apps.png) In some circumstances, after getting the "you're all set" page you may still be To resolve these possible scenarios: - Wait a few minutes and try again in a new tab.-- Go to **Settings** > **Accounts** > **Access work or school**, then add the account there. - Contact your administrator to check that Microsoft Intune MAM policies are applying to your account correctly. ### Existing account |
active-directory | Howto Conditional Access Policy Compliant Device Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md | Accounts that are assigned administrative rights are targeted by attackers. Requ More information about device compliance policies can be found in the article, [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started) -Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md). +Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/how-to-hybrid-join.md). Microsoft recommends you enable this policy for the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md): |
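The policy described above (requiring a compliant or hybrid Azure AD joined device for administrator roles) can also be created programmatically. The following is a hedged sketch assuming the Microsoft Graph PowerShell SDK; the role template ID shown is the well-known Global Administrator ID, and you should verify role IDs for your tenant, add the other recommended roles the same way, and keep the policy in report-only mode until you've validated its impact.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK; values are illustrative.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require compliant or hybrid joined device for admins"
    state       = "enabledForReportingButNotEnforced"   # report-only while validating impact
    conditions  = @{
        applications = @{ includeApplications = @("All") }
        # 62e90394-69f5-4237-9190-012177145e10 = Global Administrator role template (verify for your tenant)
        users        = @{ includeRoles = @("62e90394-69f5-4237-9190-012177145e10") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice", "domainJoinedDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```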
active-directory | Howto Conditional Access Policy Compliant Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md | Organizations who have deployed Microsoft Intune can use the information returne Policy compliance information is sent to Azure AD where Conditional Access decides to grant or block access to resources. More information about device compliance policies can be found in the article, [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started) -Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md). +Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/how-to-hybrid-join.md). ## User exclusions [!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)] |
active-directory | Howto Add Branding In Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-branding-in-apps.md | To download the official images for use in your app, right-click the one you wan | Sign in (dark theme) | ![Downloadable "Sign in" short button dark theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark_short.png) | ![Downloadable "Sign in" short button dark theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark_short.svg) | | Sign in (light theme) | ![Downloadable "Sign in" short button light theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light_short.png) | ![Downloadable "Sign in" short button light theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light_short.svg) | +## Localized terminology and UI strings ++Microsoft Terminology can be used to ensure that terminology in your localized versions of applications match the corresponding terminology in Microsoft products. You can query the Microsoft Terminology via the [Microsoft Terminology Search page](https://msit.powerbi.com/view?r=eyJrIjoiODJmYjU4Y2YtM2M0ZC00YzYxLWE1YTktNzFjYmYxNTAxNjQ0IiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9). ++Microsoft UI string translations can be used to ensure that translations in the localized versions of your applications match the corresponding UI strings in Microsoft products. You can query the Microsoft UI strings via the [Microsoft UI String Search page](https://msit.powerbi.com/view?r=eyJrIjoiMmE2NjJhMDMtNTY3MC00MmI2LWFmOWUtYWM5YTVjODI5MjQwIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9). + ## Branding Do's and Don'ts **DO** use "work or school account" in combination with the "Sign in with Microsoft" button to provide additional explanation to help end users recognize whether they can use it. **DON'T** use other terms such as "enterprise account", "business account" or "corporate account." |
active-directory | Tutorial Blazor Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md | In this tutorial: > [!div class="checklist"] >-> - Create a new Blazor Server app configured to use Azure AD for authentication +> - Create a new Blazor Server app configured to use Azure AD for authentication for users in a single organization (in the Azure Active Directory tenant the app is registered) > - Handle both authentication and authorization using `Microsoft.Identity.Web` > - Retrieve data from a protected web API, Microsoft Graph In this tutorial: - [Application administrator](../roles/permissions-reference.md#application-administrator) - [Application developer](../roles/permissions-reference.md#application-developer) - [Cloud application administrator](../roles/permissions-reference.md#cloud-application-administrator)--## Register the app in the Azure portal --Every app that uses Azure AD for authentication must be registered with Azure AD. Follow the instructions in [Register an application](quickstart-register-app.md) with these additions: --- For **Supported account types**, select **Accounts in this organizational directory only**.-- Leave the **Redirect URI** drop down set to **Web** and enter `https://localhost:5001/signin-oidc`. The default port for an app running on Kestrel is `5001`. If the app is available on a different port, specify that port number instead of `5001`.--Under **Manage**, select **Authentication** > **Implicit grant and hybrid flows**. Select **ID tokens**, and then select **Save**. --Finally, because the app calls a protected API (in this case Microsoft Graph), it needs a client secret in order to verify its identity when it requests an access token to call that API. --1. Within the same app registration, under **Manage**, select **Certificates & secrets** and then **Client secrets**. -2. Create a **New client secret** that never expires. -3. Make note of the secret's **Value** as you'll use it in the next step. You canΓÇÖt access it again once you navigate away from this pane. However, you can recreate it as needed. +- The tenant-id or domain of the Azure Active Directory associated with your Azure Account ## Create the app using the .NET CLI -To create the application, run the following command. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name. - ```dotnetcli-dotnet new blazorserver --auth SingleOrg --calls-graph -o {APP NAME} --client-id "{CLIENT ID}" --tenant-id "{TENANT ID}" --domain "{DOMAIN}" -f net7.0 +mkdir <new-project-folder> +cd <new-project-folder> +dotnet new blazorserver --auth SingleOrg --calls-graph ``` -| Placeholder | Azure portal name | Example | -| - | -- | -- | -| `{APP NAME}` | — | `BlazorSample` | -| `{CLIENT ID}` | Application (client) ID | `41451fa7-0000-0000-0000-69eff5a761fd` | -| `{TENANT ID}` | Directory (tenant) ID | `e86c78e2-0000-0000-0000-918e0565a45e` | -| `{DOMAIN}` | Primary domain | `tenantname.onmicrosoft.com` | --Now, navigate to your new Blazor app in your editor and add the client secret to the _appsettings.json_ file, replacing the text "secret-from-app-registration". 
+## Install the Microsoft Identity App Sync .NET Tool -```json -"ClientSecret": "secret-from-app-registration", +```dotnetcli +dotnet tool install --global msidentity-app-sync ``` -## Test the app +This tool will automate the following tasks for you: -In your terminal, run the following command: +- Register your application in Azure Active Directory + - Create a secret for your registered application + - Register redirect URIs based on your launchsettings.json +- Initialize the use of user secrets in your project +- Store your application secret in user secrets storage +- Update your appsettings.json with the client-id, tenant-id, and others. -```dotnetcli -dotnet run -``` +.NET Tools extend the capabilities of the dotnet CLI command. To learn more about .NET Tools, see [.NET Tools](/dotnet/core/tools/global-tools). -In your browser, navigate to `https://localhost:<port number> `, and log in using an Azure AD user account to see the app running. +For more information on user secrets storage, see [safe storage of app secrets during development](/aspnet/core/security/app-secrets). -## Retrieving data from Microsoft Graph +## Use the Microsoft Identity App Sync Tool -[Microsoft Graph](/graph/overview) offers a range of APIs that provide access to your users' Microsoft 365 data. By using the Microsoft identity platform as the identity provider for your app, you have easier access to this information since Microsoft Graph directly supports the tokens issued by the Microsoft identity platform. In this section, you add code to display the signed in user's emails on the application's "fetch data" page. +Run the following command to register your app in your tenant and update the .NET configuration of your application. Provide the username/upn belonging to your Azure Account (for instance, `username@domain.com`) and the tenant ID or domain name of the Azure Active Directory associated with your Azure Account. If you use an account that is signed in in either Visual Studio, Azure CLI, or Azure PowerShell, you'll benefit from single sign-on (SSO). -Before you start, log out of your app since you'll be making changes to the required permissions, and your current token won't work. If you haven't already, run your app again and select **Log out** before updating the code below. +```dotnetcli +msidentity-app-sync --username <username/upn> --tenant-id <tenantID> +``` -Now you'll update your app's registration and code to pull a user's email and display the messages within the app. To achieve this, first extend the app registration permissions in Azure AD to enable access to the email data. Then, add code to the Blazor app to retrieve and display this data in one of the pages. +> [!Note] +> - You don't need to provide the username if you are signed in with only one account in the developer tools. +> - You don't need to provide the tenant-id if the tenant in which you want to create the application is your home tenant. -1. In the Azure portal, select your app in **App registrations**. -1. Under **Manage**, select **API permissions**. -1. Select **Add a permission** > **Microsoft Graph**. -1. Select **Delegated Permissions**, then search for and select the **Mail.Read** permission. -1. Select **Add permissions**. +## Optional - Create a development SSL certificate -In the _appsettings.json_ file, update your code so it fetches the appropriate token with the right permissions. Add `mail.read` after the `user.read` scope under `DownstreamAPI`. 
This is specifying which scopes (or permissions) the app will request access to. +In order to avoid SSL errors/warnings when browsing the running application, you can use the following on macOS and Windows to generate a self-signed SSL certificate for use by .NET Core. -```json -"Scopes": "user.read mail.read" +```dotnetcli +dotnet dev-certs https --trust ``` -Next, in the _Pages_ folder, update the code in the _FetchData.razor_ file to retrieve email data instead of the default (random) weather details. Replace the code in that file with the following code snippet: --```csharp -@page "/fetchdata" --@inject IHttpClientFactory HttpClientFactory -@inject Microsoft.Identity.Web.ITokenAcquisition TokenAcquisitionService --<p>This component demonstrates fetching data from a service.</p> --@if (messages == null) -{ - <p><em>Loading...</em></p> -} -else -{ - <h1>Hello @userDisplayName !!!!</h1> - <table class="table"> - <thead> - <tr> - <th>Subject</th> - <th>Sender</th> - <th>Received Time</th> - </tr> - </thead> - <tbody> - @foreach (var mail in messages) - { - <tr> - <td>@mail.Subject</td> - <td>@mail.Sender</td> - <td>@mail.ReceivedTime</td> - </tr> - } - </tbody> - </table> -} --@code { -- private string userDisplayName; - private List<MailMessage> messages = new List<MailMessage>(); -- private HttpClient _httpClient; -- protected override async Task OnInitializedAsync() - { - _httpClient = HttpClientFactory.CreateClient(); --- // get a token - var token = await TokenAcquisitionService.GetAccessTokenForUserAsync(new string[] { "User.Read", "Mail.Read" }); -- // make API call - _httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token); - var dataRequest = await _httpClient.GetAsync("https://graph.microsoft.com/beta/me"); -- if (dataRequest.IsSuccessStatusCode) - { - var userData = System.Text.Json.JsonDocument.Parse(await dataRequest.Content.ReadAsStreamAsync()); - userDisplayName = userData.RootElement.GetProperty("displayName").GetString(); - } -- var mailRequest = await _httpClient.GetAsync("https://graph.microsoft.com/beta/me/messages?$select=subject,receivedDateTime,sender&$top=10"); -- if (mailRequest.IsSuccessStatusCode) - { - var mailData = System.Text.Json.JsonDocument.Parse(await mailRequest.Content.ReadAsStreamAsync()); - var messagesArray = mailData.RootElement.GetProperty("value").EnumerateArray(); -- foreach (var m in messagesArray) - { - var message = new MailMessage(); - message.Subject = m.GetProperty("subject").GetString(); - message.Sender = m.GetProperty("sender").GetProperty("emailAddress").GetProperty("address").GetString(); - message.ReceivedTime = m.GetProperty("receivedDateTime").GetDateTime(); - messages.Add(message); - } - } - } -- public class MailMessage - { - public string Subject; - public string Sender; - public DateTime ReceivedTime; - } -} +## Run the app -``` --Launch the app. YouΓÇÖll notice that you're prompted for the newly added permissions, indicating that everything is working as expected. Now, beyond basic user profile data, the app is requesting access to email data. +In your terminal, run the following command: -After granting consent, navigate to the "Fetch data" page to read some email. +```dotnetcli +dotnet run +``` +Browse to the running web application using the URL outputted by the command line. ## Next steps |
active-directory | V2 Conditional Access Dev Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-conditional-access-dev-guide.md | error_description=AADSTS50076: Due to a configuration change made by your admini Our app needs to catch the `error=interaction_required`. The application can then use either `acquireTokenPopup()` or `acquireTokenRedirect()` on the same resource. The user is forced to do a multi-factor authentication. After the user completes the multi-factor authentication, the app is issued a fresh access token for the requested resource. +To try out this scenario, see our [React SPA calling Node.js web API using on-behalf-of flow](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo) code sample. This code sample uses the Conditional Access policy and web API you registered earlier with a React SPA to demonstrate this scenario. It shows how to properly handle the claims challenge and get an access token that can be used for your web API. + ## See also * To learn more about the capabilities, see [Conditional Access in Azure Active Directory](../conditional-access/overview.md). |
active-directory | Concept Azure Ad Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-join.md | Azure AD Join can be deployed by using any of the following methods: - [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - [Bulk deployment](/intune/windows-bulk-enroll)-- [Self-service experience](azuread-joined-devices-frx.md)+- [Self-service experience](device-join-out-of-box.md) ## Next steps -- [Plan your Azure AD join implementation](azureadjoin-plan.md)+- [Plan your Azure AD join implementation](device-join-plan.md) - [Co-management using Configuration Manager and Microsoft Intune](/mem/configmgr/comanage/overview) - [How to manage the local administrators group on Azure AD joined devices](assign-local-admin.md) - [Manage device identities using the Azure portal](device-management-azure-portal.md) |
active-directory | Concept Hybrid Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-hybrid-join.md | + + Title: What is a hybrid Azure AD joined device? +description: Learn how device identity management can help you to manage devices that are accessing resources in your environment. +++++ Last updated : 01/24/2023+++++++++# Hybrid Azure AD joined devices ++Organizations with existing Active Directory implementations can benefit from some of the functionality provided by Azure Active Directory (Azure AD) by implementing hybrid Azure AD joined devices. These devices are joined to your on-premises Active Directory and registered with Azure Active Directory. ++Hybrid Azure AD joined devices require network line of sight to your on-premises domain controllers periodically. Without this connection, devices become unusable. If this requirement is a concern, consider [Azure AD joining](concept-azure-ad-join.md) your devices. ++| Hybrid Azure AD Join | Description | +| | | +| **Definition** | Joined to on-premises AD and Azure AD requiring organizational account to sign in to the device | +| **Primary audience** | Suitable for hybrid organizations with existing on-premises AD infrastructure | +| | Applicable to all users in an organization | +| **Device ownership** | Organization | +| **Operating Systems** | Windows 11, Windows 10 or 8.1 except Home editions | +| | Windows Server 2008/R2, 2012/R2, 2016, 2019 and 2022 | +| **Provisioning** | Windows 11, Windows 10, Windows Server 2016/2019/2022 | +| | Domain join by IT and autojoin via Azure AD Connect or ADFS config | +| | Domain join by Windows Autopilot and autojoin via Azure AD Connect or ADFS config | +| | Windows 8.1, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 - Require MSI | +| **Device sign in options** | Organizational accounts using: | +| | Password | +| | Windows Hello for Business for Windows 10 or newer | +| **Device management** | [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) | +| | [Configuration Manager standalone or co-management with Microsoft Intune](/mem/configmgr/comanage/overview) | +| **Key capabilities** | SSO to both cloud and on-premises resources | +| | Conditional Access through Domain join or through Intune if co-managed | +| | [Self-service Password Reset and Windows Hello PIN reset on lock screen](../authentication/howto-sspr-windows.md) | ++## Scenarios ++Use Azure AD hybrid joined devices if: ++- You support down-level devices running Windows 8.1, Windows Server 2008/R2, 2012/R2, 2016. +- You want to continue to use [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) to manage device configuration. +- You want to continue to use existing imaging solutions to deploy and configure devices. +- You have Win32 apps deployed to these devices that rely on Active Directory machine authentication. ++## Next steps ++- [Plan your hybrid Azure AD join implementation](hybrid-azuread-join-plan.md) +- [Co-management using Configuration Manager and Microsoft Intune](/mem/configmgr/comanage/overview) +- [Manage device identities using the Azure portal](device-management-azure-portal.md) +- [Manage stale devices in Azure AD](manage-stale-devices.md) |
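The hybrid join overview above distinguishes hybrid Azure AD joined devices from Azure AD joined and Azure AD registered ones. For administrators who want to inventory them, here is a minimal sketch using the Microsoft Graph PowerShell SDK; it assumes the conventional trustType values ('ServerAd' for hybrid joined, 'AzureAd' for joined, 'Workplace' for registered).

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK: list hybrid Azure AD joined devices.
Connect-MgGraph -Scopes "Device.Read.All"

# trustType 'ServerAd' corresponds to hybrid Azure AD joined devices.
Get-MgDevice -Filter "trustType eq 'ServerAd'" -All |
    Select-Object DisplayName, OperatingSystem, ApproximateLastSignInDateTime
```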
active-directory | Device Join Out Of Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-out-of-box.md | + + Title: Join a new Windows 11 device with Azure AD during the out of box experience +description: How users can set up Azure AD Join during OOBE. +++++ Last updated : 08/31/2022+++++++++# Azure AD join a new Windows device during the out of box experience ++Windows 11 users can join new Windows devices to Azure AD during the first-run out-of-box experience (OOBE). This functionality enables you to distribute shrink-wrapped devices to your employees or students. ++This functionality pairs well with mobile device management platforms like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) and tools like [Windows Autopilot](/mem/autopilot/windows-autopilot) to ensure devices are configured according to your standards. ++## Prerequisites ++To Azure AD join a Windows device, the device registration service must be configured to enable you to register devices. For more information about prerequisites, see the article [How to: Plan your Azure AD join implementation](device-join-plan.md). ++> [!TIP] +> Windows Home Editions do not support Azure AD join. These editions can still access many of the benefits by using [Azure AD registration](concept-azure-ad-register.md). +> +> For information about how complete Azure AD registration on a Windows device see the support article [Register your personal device on your work or school network](https://support.microsoft.com/account-billing/register-your-personal-device-on-your-work-or-school-network-8803dd61-a613-45e3-ae6c-bd1ab25bf8a8). ++## Join a new Windows 11 device to Azure AD ++Your device may restart several times as part of the setup process. Your device must be connected to the Internet to complete Azure AD join. ++1. Turn on your new device and start the setup process. Follow the prompts to set up your device. +1. When prompted **How would you like to set up this device?**, select **Set up for work or school**. + :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-work-or-school.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the option to set up for work or school."::: +1. On the **Let's set things up for your work or school** page, provide the credentials that your organization provided. + 1. Optionally you can choose to **Sign in with a security key** if one was provided to you. + 1. If your organization requires it, you may be prompted to perform multifactor authentication. + :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-device-sign-in-info.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the sign-in experience."::: +1. Continue to follow the prompts to set up your device. +1. Azure AD checks if an enrollment in mobile device management is required and starts the process. + 1. Windows registers the device in the organizationΓÇÖs directory in Azure AD and enrolls it in mobile device management, if applicable. +1. If you sign in with a managed user account, Windows takes you to the desktop through the automatic sign-in process. Federated users are directed to the Windows sign-in screen to enter your credentials. 
+ :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-complete-automatic-sign-in-desktop.png" alt-text="Screenshot of Windows 11 at the desktop after first run experience Azure AD joined."::: ++For more information about the out-of-box experience, see the support article [Join your work device to your work or school network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). ++## Verification ++To verify whether a device is joined to your Azure AD, review the **Access work or school** dialog on your Windows device found in **Settings** > **Accounts**. The dialog should indicate that you're connected to Azure AD, and provides information about areas managed by your IT staff. +++## Next steps ++- For more information about managing devices in the Azure portal, see [managing devices using the Azure portal](device-management-azure-portal.md). +- [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune) +- [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot) +- [Passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md) |
active-directory | Device Join Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-plan.md | + + Title: Plan your Azure Active Directory join deployment +description: Explains the steps that are required to implement Azure AD joined devices in your environment. +++++ Last updated : 01/24/2023+++++++++# How to: Plan your Azure AD join implementation ++You can join devices directly to Azure Active Directory (Azure AD) without the need to join to on-premises Active Directory while keeping your users productive and secure. Azure AD join is enterprise-ready for both at-scale and scoped deployments. Single sign-on (SSO) access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](device-sso-to-on-premises-resources.md). ++This article provides you with the information you need to plan your Azure AD join implementation. ++## Prerequisites ++This article assumes that you're familiar with the [Introduction to device management in Azure Active Directory](./overview.md). ++## Plan your implementation ++To plan your Azure AD join implementation, you should familiarize yourself with: ++> [!div class="checklist"] +> - Review your scenarios +> - Review your identity infrastructure +> - Assess your device management +> - Understand considerations for applications and resources +> - Understand your provisioning options +> - Configure enterprise state roaming +> - Configure Conditional Access ++## Review your scenarios ++Azure AD join enables you to transition towards a cloud-first model with Windows. If you're planning to modernize your devices management and reduce device-related IT costs, Azure AD join provides a great foundation towards achieving those goals. + +Consider Azure AD join if your goals align with the following criteria: ++- You're adopting Microsoft 365 as the productivity suite for your users. +- You want to manage devices with a cloud device management solution. +- You want to simplify device provisioning for geographically distributed users. +- You plan to modernize your application infrastructure. ++## Review your identity infrastructure ++Azure AD join works in managed and federated environments. We think most organizations will deploy with managed domains. Managed domain scenarios don't require configuring and managing a federation server like Active Directory Federation Services (AD FS). ++### Managed environment ++A managed environment can be deployed either through [Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) or [Pass Through Authentication](../hybrid/how-to-connect-pta-quick-start.md) with Seamless Single Sign On. ++### Federated environment ++A federated environment should have an identity provider that supports both WS-Trust and WS-Fed protocols: ++- **WS-Fed:** This protocol is required to join a device to Azure AD. +- **WS-Trust:** This protocol is required to sign in to an Azure AD joined device. ++When you're using AD FS, you need to enable the following WS-Trust endpoints: + `/adfs/services/trust/2005/usernamemixed` + `/adfs/services/trust/13/usernamemixed` + `/adfs/services/trust/2005/certificatemixed` + `/adfs/services/trust/13/certificatemixed` ++If your identity provider doesn't support these protocols, Azure AD join doesn't work natively. 
++> [!NOTE] +> Currently, Azure AD join does not work with [AD FS 2019 configured with external authentication providers as the primary authentication method](/windows-server/identity/ad-fs/operations/additional-authentication-methods-ad-fs#enable-external-authentication-methods-as-primary). Azure AD join defaults to password authentication as the primary method, which results in authentication failures in this scenario ++### User configuration ++If you create users in your: ++- **On-premises Active Directory**, you need to synchronize them to Azure AD using [Azure AD Connect](../hybrid/how-to-connect-sync-whatis.md). +- **Azure AD**, no extra setup is required. ++On-premises user principal names (UPNs) that are different from Azure AD UPNs aren't supported on Azure AD joined devices. If your users use an on-premises UPN, you should plan to switch to using their primary UPN in Azure AD. ++UPN changes are only supported starting Windows 10 2004 update. Users on devices with this update won't have any issues after changing their UPNs. For devices before the Windows 10 2004 update, users would have SSO and Conditional Access issues on their devices. They need to sign in to Windows through the "Other user" tile using their new UPN to resolve this issue. ++## Assess your device management ++### Supported devices ++Azure AD join: ++- Supports Windows 10 and Windows 11 devices. +- Isn't supported on previous versions of Windows or other operating systems. If you have Windows 7/8.1 devices, you must upgrade at least to Windows 10 to deploy Azure AD join. +- Is supported for FIPS-compliant TPM 2.0 but not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support. + +**Recommendation:** Always use the latest Windows release to take advantage of updated features. ++### Management platform ++Device management for Azure AD joined devices is based on a mobile device management (MDM) platform such as Intune, and MDM CSPs. Starting in Windows 10 there's a built-in MDM agent that works with all compatible MDM solutions. ++> [!NOTE] +> Group policies are not supported in Azure AD joined devices as they are not connected to on-premises Active Directory. Management of Azure AD joined devices is only possible through MDM ++There are two approaches for managing Azure AD joined devices: ++- **MDM-only** - A device is exclusively managed by an MDM provider like Intune. All policies are delivered as part of the MDM enrollment process. For Azure AD Premium or EMS customers, MDM enrollment is an automated step that is part of an Azure AD join. +- **Co-management** - A device is managed by an MDM provider and Microsoft Configuration Manager. In this approach, the Microsoft Configuration Manager agent is installed on an MDM-managed device to administer certain aspects. ++If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Intune. ++Review supported and unsupported policies to determine whether you can use an MDM solution instead of Group policies. For unsupported policies, consider the following questions: ++- Are the unsupported policies necessary for Azure AD joined devices or users? +- Are the unsupported policies applicable in a cloud-driven deployment? 
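If you take the Group Policy analytics route mentioned above, the tool works from exported GPO reports. As a rough sketch (assuming the Group Policy RSAT module is installed on a domain-joined workstation; the GPO names below are placeholders), you could export the relevant GPOs to XML for import into the Intune admin center:

```azurepowershell
# Requires the GroupPolicy module (part of RSAT).
Import-Module GroupPolicy

# Export each GPO you want to evaluate as an XML report; these files are then
# imported into Group Policy analytics in Microsoft Intune.
$gpoNames = 'Workstation Baseline', 'Browser Settings'   # hypothetical GPO names
foreach ($name in $gpoNames) {
    Get-GPOReport -Name $name -ReportType Xml -Path "C:\Temp\$name.xml"
}
```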
++If your MDM solution isn't available through the Azure AD app gallery, you can add it following the process +outlined in [Azure Active Directory integration with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm). ++Through co-management, you can use Microsoft Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios. ++**Recommendation:** Consider MDM-only management for Azure AD joined devices. ++## Understand considerations for applications and resources ++We recommend migrating applications from on-premises to the cloud for a better user experience and access control. Azure AD joined devices can seamlessly provide access to both on-premises and cloud applications. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](device-sso-to-on-premises-resources.md). ++The following sections list considerations for different types of applications and resources. ++### Cloud-based applications ++If an application is added to the Azure AD app gallery, users get SSO through Azure AD joined devices. No other configuration is required. Users get SSO on both Microsoft Edge and Chrome browsers. For Chrome, you need to deploy the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji). ++All Win32 applications that: ++- Rely on Web Account Manager (WAM) for token requests also get SSO on Azure AD joined devices. +- Don't rely on WAM may prompt users for authentication. ++### On-premises web applications ++If your apps are custom built and/or hosted on-premises, you need to add them to your browser's trusted sites to: ++- Enable Windows integrated authentication to work +- Provide a no-prompt SSO experience to users. ++If you use AD FS, see [Verify and manage single sign-on with AD FS](/previous-versions/azure/azure-services/jj151809(v%3dazure.100)). ++**Recommendation:** Consider hosting in the cloud (for example, Azure) and integrating with Azure AD for a better experience. ++### On-premises applications relying on legacy protocols ++Users get SSO from Azure AD joined devices if the device has access to a domain controller. ++> [!NOTE] +> Azure AD joined devices can seamlessly provide access to both on-premises and cloud applications. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](device-sso-to-on-premises-resources.md). ++**Recommendation:** Deploy [Azure AD App proxy](../app-proxy/application-proxy.md) to enable secure access for these applications. ++### On-premises network shares ++Your users have SSO from Azure AD joined devices when a device has access to an on-premises domain controller. [Learn how this works](device-sso-to-on-premises-resources.md). ++### Printers ++We recommend deploying [Universal Print](/universal-print/fundamentals/universal-print-whatis) to have a cloud-based print management solution without any on-premises dependencies. ++### On-premises applications relying on machine authentication ++Azure AD joined devices don't support on-premises applications relying on machine authentication. 
++**Recommendation:** Consider retiring these applications and moving to their modern alternatives. ++### Remote Desktop Services ++Remote desktop connection to an Azure AD joined device requires the host machine to be either Azure AD joined or hybrid Azure AD joined. Remote desktop from an unjoined or non-Windows device isn't supported. For more information, see [Connect to remote Azure AD joined PC](/windows/client-management/connect-to-remote-aadj-pc). ++Starting with the Windows 10 2004 update, users can also use remote desktop from an Azure AD registered Windows 10 or newer device to another Azure AD joined device. ++### RADIUS and Wi-Fi authentication ++Currently, Azure AD joined devices don't support RADIUS authentication for connecting to Wi-Fi access points, since RADIUS relies on the presence of an on-premises computer object. As an alternative, you can use certificates pushed via Intune or user credentials to authenticate to Wi-Fi. ++## Understand your provisioning options +**Note**: Azure AD joined devices can't be deployed by using the System Preparation Tool (Sysprep) or similar imaging tools. ++You can provision Azure AD joined devices using the following approaches: ++- **Self-service in OOBE/Settings** - In the self-service mode, users go through the Azure AD join process either during Windows Out of Box Experience (OOBE) or from Windows Settings. For more information, see [Join your work device to your organization's network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). +- **Windows Autopilot** - Windows Autopilot enables pre-configuration of devices for a smoother Azure AD join experience in OOBE. For more information, see the [Overview of Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot). +- **Bulk enrollment** - Bulk enrollment enables an administrator-driven Azure AD join by using a bulk provisioning tool to configure devices. For more information, see [Bulk enrollment for Windows devices](/intune/windows-bulk-enroll). + +Here's a comparison of these three approaches: + +| Element | Self-service setup | Windows Autopilot | Bulk enrollment | +| | | | | +| Require user interaction to set up | Yes | Yes | No | +| Require IT effort | No | Yes | Yes | +| Applicable flows | OOBE & Settings | OOBE only | OOBE only | +| Local admin rights to primary user | Yes, by default | Configurable | No | +| Require device OEM support | No | Yes | No | +| Supported versions | 1511+ | 1709+ | 1703+ | + +Choose your deployment approach or approaches by reviewing the previous table and the following considerations for adopting either approach: ++- Are your users tech savvy enough to go through the setup themselves? + - Self-service can work best for these users. Consider Windows Autopilot to enhance the user experience. +- Are your users remote or within corporate premises? + - Self-service or Autopilot work best for remote users for a hassle-free setup. +- Do you prefer a user-driven or an admin-managed configuration? + - Bulk enrollment works better for admin-driven deployment to set up devices before handing them over to users. +- Do you purchase devices from one or two OEMs, or do you have a wide distribution of OEM devices? + - If you purchase from a limited set of OEMs who also support Autopilot, you can benefit from tighter integration with Autopilot. ++## Configure your device settings ++The Azure portal allows you to control the deployment of Azure AD joined devices in your organization. 
To configure the related settings, on the **Azure Active Directory page**, select `Devices > Device settings`. [Learn more](device-management-azure-portal.md) ++### Users may join devices to Azure AD ++Set this option to **All** or **Selected** based on the scope of your deployment and who you want to allow to set up an Azure AD joined device. ++![Users may join devices to Azure AD](./media/device-join-plan/01.png) ++### Additional local administrators on Azure AD joined devices ++Choose **Selected** and select the users you want to add to the local administrators' group on all Azure AD joined devices. ++![Additional local administrators on Azure AD joined devices](./media/device-join-plan/02.png) ++### Require multifactor authentication (MFA) to join devices ++Select **Yes** if you require users to do MFA while joining devices to Azure AD. ++![Require multifactor Auth to join devices](./media/device-join-plan/03.png) ++**Recommendation:** Use the user action [Register or join devices](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) in Conditional Access for enforcing MFA for joining devices. ++## Configure your mobility settings ++Before you can configure your mobility settings, you may have to add an MDM provider first. ++**To add an MDM provider**: ++1. On the **Azure Active Directory page**, in the **Manage** section, select `Mobility (MDM and MAM)`. +1. Select **Add application**. +1. Select your MDM provider from the list. ++ :::image type="content" source="./media/device-join-plan/04.png" alt-text="Screenshot of the Azure Active Directory Add an application page. Several M D M providers are listed." border="false"::: ++Select your MDM provider to configure the related settings. ++### MDM user scope ++Select **Some** or **All** based on the scope of your deployment. ++![MDM user scope](./media/device-join-plan/05.png) ++Based on your scope, one of the following happens: ++- **User is in MDM scope**: If you have an Azure AD Premium subscription, MDM enrollment is automated along with Azure AD join. All scoped users must have an appropriate license for your MDM. If MDM enrollment fails in this scenario, Azure AD join will also be rolled back. +- **User is not in MDM scope**: If users aren't in MDM scope, Azure AD join completes without any MDM enrollment. This scope results in an unmanaged device. ++### MDM URLs ++There are three URLs that are related to your MDM configuration: ++- MDM terms of use URL +- MDM discovery URL +- MDM compliance URL +++Each URL has a predefined default value. If these fields are empty, contact your MDM provider for more information. ++### MAM settings ++MAM doesn't apply to Azure AD join. ++## Configure enterprise state roaming ++If you want to enable state roaming to Azure AD so that users can sync their settings across devices, see [Enable Enterprise State Roaming in Azure Active Directory](enterprise-state-roaming-enable.md). ++**Recommendation**: Enable this setting even for hybrid Azure AD joined devices. ++## Configure Conditional Access ++If you have an MDM provider configured for your Azure AD joined devices, the provider flags the device as compliant as soon as the device is under management. ++![Compliant device](./media/device-join-plan/46.png) ++You can use this implementation to [require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md). 
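To enforce the compliant-device signal described above, you can create a Conditional Access policy that requires a compliant device. The sketch below uses Microsoft Graph PowerShell and is illustrative only: the display name, the all-users/all-apps scope, and the report-only state are assumptions, so scope and test it for your own tenant:

```azurepowershell
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    DisplayName   = 'Require compliant device for all cloud apps'   # hypothetical name
    State         = 'enabledForReportingButNotEnforced'             # start in report-only mode
    Conditions    = @{
        Users        = @{ IncludeUsers = @('All') }
        Applications = @{ IncludeApplications = @('All') }
    }
    GrantControls = @{
        Operator        = 'OR'
        BuiltInControls = @('compliantDevice')
    }
}

# Create the policy in the tenant.
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```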
++## Next steps ++- [Join a new Windows 10 device to Azure AD during a first run](device-join-out-of-box.md) +- [Join your work device to your organization's network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973) |
active-directory | Device Registration How It Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-registration-how-it-works.md | Device Registration is a prerequisite to cloud-based authentication. Commonly, d - [Azure AD joined devices](concept-azure-ad-join.md) - [Azure AD registered devices](concept-azure-ad-register.md)-- [Hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md)+- [Hybrid Azure AD joined devices](concept-hybrid-join.md) - [What is a Primary Refresh Token?](concept-primary-refresh-token.md) - [Azure AD Connect: Device options](../hybrid/how-to-connect-device-options.md) |
active-directory | Device Sso To On Premises Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-sso-to-on-premises-resources.md | + + Title: How SSO to on-premises resources works on Azure AD joined devices +description: Extend the SSO experience by configuring hybrid Azure Active Directory joined devices. +++++ Last updated : 02/27/2023+++++++++# How SSO to on-premises resources works on Azure AD joined devices ++Azure Active Directory (Azure AD) joined devices give users a single sign-on (SSO) experience to your tenant's cloud apps. If your environment has on-premises Active Directory Domain Services (AD DS), users can also SSO to resources and applications that rely on on-premises Active Directory Domain Services. ++This article explains how this works. ++## Prerequisites ++- An [Azure AD joined device](concept-azure-ad-join.md). +- On-premises SSO requires line-of-sight communication with your on-premises AD DS domain controllers. If Azure AD joined devices aren't connected to your organization's network, a VPN or other network infrastructure is required. +- Azure AD Connect or Azure AD Connect cloud sync: To synchronize default user attributes like SAM Account Name, Domain Name, and UPN. For more information, see the article [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10). ++## How it works ++With an Azure AD joined device, your users already have an SSO experience to the cloud apps in your environment. If your environment has Azure AD and on-premises AD DS, you may want to expand the scope of your SSO experience to your on-premises Line Of Business (LOB) apps, file shares, and printers. ++Azure AD joined devices have no knowledge about your on-premises AD DS environment because they aren't joined to it. However, you can provide additional information about your on-premises AD to these devices with Azure AD Connect. ++Azure AD Connect or Azure AD Connect cloud sync synchronize your on-premises identity information to the cloud. As part of the synchronization process, on-premises user and domain information is synchronized to Azure AD. When a user signs in to an Azure AD joined device in a hybrid environment: ++1. Azure AD sends the details of the user's on-premises domain back to the device, along with the [Primary Refresh Token](concept-primary-refresh-token.md) +1. The local security authority (LSA) service enables Kerberos and NTLM authentication on the device. ++> [!NOTE] +> Additional configuration is required when passwordless authentication to Azure AD joined devices is used. +> +> For FIDO2 security key based passwordless authentication and Windows Hello for Business Hybrid Cloud Trust, see [Enable passwordless security key sign-in to on-premises resources with Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-on-premises.md). +> +> For Windows Hello for Business Cloud Kerberos Trust, see [Configure and provision Windows Hello for Business - cloud Kerberos trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust-provision). +> +> For Windows Hello for Business Hybrid Key Trust, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base). 
+> +> For Windows Hello for Business Hybrid Certificate Trust, see [Using Certificates for AADJ On-premises Single-sign On](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-cert). ++During an access attempt to an on-premises resource requesting Kerberos or NTLM, the device: ++1. Sends the on-premises domain information and user credentials to the located DC to get the user authenticated. +1. Receives a Kerberos [Ticket-Granting Ticket (TGT)](/windows/desktop/secauthn/ticket-granting-tickets) or NTLM token based on the protocol the on-premises resource or application supports. If the attempt to get the Kerberos TGT or NTLM token for the domain fails, Credential Manager entries are tried, or the user may receive an authentication pop-up requesting credentials for the target resource. This failure can be related to a delay caused by a DCLocator timeout. ++All apps that are configured for **Windows-Integrated authentication** seamlessly get SSO when a user tries to access them. ++## What you get ++With SSO, on an Azure AD joined device you can: ++- Access a UNC path on an AD member server +- Access an AD DS member web server configured for Windows-integrated security ++If you want to manage your on-premises AD from a Windows device, install the [Remote Server Administration Tools](https://www.microsoft.com/download/details.aspx?id=45520). ++You can use: ++- The Active Directory Users and Computers (ADUC) snap-in to administer all AD objects. However, you have to specify the domain that you want to connect to manually. +- The DHCP snap-in to administer an AD-joined DHCP server. However, you may need to specify the DHCP server name or address. + +## What you should know ++- You may have to adjust your [domain-based filtering](../hybrid/how-to-connect-sync-configure-filtering.md#domain-based-filtering) in Azure AD Connect to ensure that the data about the required domains is synchronized if you have multiple domains. +- Apps and resources that depend on Active Directory machine authentication don't work because Azure AD joined devices don't have a computer object in AD DS. +- You can't share files with other users on an Azure AD-joined device. +- Applications running on your Azure AD joined device may authenticate users. They must use the implicit UPN or the NT4 type syntax with the domain FQDN as the domain part, for example: user@contoso.corp.com or contoso.corp.com\user. + - If applications use the NETBIOS or legacy name like contoso\user, the error the application gets would be either the NT error STATUS_BAD_VALIDATION_CLASS - 0xc00000a7 or the Windows error ERROR_BAD_VALIDATION_CLASS - 1348, "The validation information class requested was invalid." This error happens even if you can resolve the legacy domain name. ++## Next steps ++For more information, see [What is device management in Azure Active Directory?](overview.md) |
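To confirm that on-premises SSO is actually happening from an Azure AD joined device, you can check for a Primary Refresh Token and for Kerberos tickets after accessing an on-premises resource. This is a minimal sketch using built-in tools; field names in the `dsregcmd` output can vary by Windows build:

```azurepowershell
# AzureAdPrt : YES means the device obtained a Primary Refresh Token at sign-in.
dsregcmd /status | Select-String -Pattern 'AzureAdPrt'

# After opening a UNC path or an intranet site, list the Kerberos tickets issued
# to the signed-in user. A krbtgt ticket for your AD DS domain indicates SSO worked.
klist
```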
active-directory | Enterprise State Roaming Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md | Enterprise State Roaming provides users with a unified experience across their W 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**. 1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./device-management-azure-portal.md).- - ![image of device setting labeled Users may sync settings and app data across devices](./media/enterprise-state-roaming-enable/device-settings.png) - + For a Windows 10 or newer device to use the Enterprise State Roaming service, the device must authenticate using an Azure AD identity. For devices that are joined to Azure AD, the user's primary sign-in identity is their Azure AD identity, so no other configuration is required. For devices that use on-premises Active Directory, the IT admin must [Configure hybrid Azure Active Directory joined devices](./hybrid-azuread-join-plan.md). ## Data storage |
active-directory | How To Hybrid Join Downlevel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-downlevel.md | + + Title: Enable downlevel devices for hybrid Azure Active Directory join +description: Configure older operating systems for hybrid Azure AD join +++++ Last updated : 01/24/2023+++++++++# Enable older operating systems ++If some of your domain-joined devices are Windows [downlevel devices](hybrid-azuread-join-plan.md#windows-down-level-devices), you must complete the following steps to allow them to hybrid Azure AD join: ++- Configure the local intranet settings for device registration +- Install Microsoft Workplace Join for Windows downlevel computers +- Need AD FS (for federated domains) or Seamless SSO configured (for managed domains). ++> [!NOTE] +> Windows 7 support ended on January 14, 2020. For more information, [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020). ++## Configure the local intranet settings for device registration ++To complete hybrid Azure AD join of your Windows downlevel devices, and avoid certificate prompts when devices authenticate to Azure AD, you can push a policy to your domain-joined devices to add the following URLs to the local intranet zone in Internet Explorer: ++- `https://device.login.microsoftonline.com` +- `https://autologon.microsoftazuread-sso.com` (For seamless SSO) +- Your organization's STS (**For federated domains**) ++You also must enable **Allow updates to status bar via script** in the userΓÇÖs local intranet zone. ++## Install Microsoft Workplace Join for Windows downlevel computers ++To register Windows downlevel devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center. ++You can deploy the package by using a software distribution system like [Microsoft Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations. ++The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD by using the user credentials after it authenticates with Azure AD. ++## Next steps ++- [Hybrid Azure AD join verification](how-to-hybrid-join-verify.md) +- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
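For the local intranet zone changes described in this article, the supported, managed route is the Group Policy **Site to Zone Assignment List** setting. For a quick test on a single downlevel machine, the same mapping can be written under the per-user ZoneMap registry key, as in this hedged sketch (zone 1 is the Local intranet zone; `sts.contoso.com` is a placeholder for your organization's STS, and the key layout should be verified against your own environment):

```azurepowershell
# Map the device-registration endpoints into the Local intranet zone (zone 1)
# for the current user. Prefer the 'Site to Zone Assignment List' GPO in production.
$zoneMap = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'

# Domain and host-prefix pairs; 'sts.contoso.com' stands in for your own STS (federated domains only).
$sites = @(
    @{ Domain = 'microsoftonline.com';      Prefix = 'device.login' },
    @{ Domain = 'microsoftazuread-sso.com'; Prefix = 'autologon' },
    @{ Domain = 'contoso.com';              Prefix = 'sts' }          # placeholder STS
)

foreach ($site in $sites) {
    $key = Join-Path (Join-Path $zoneMap $site.Domain) $site.Prefix
    New-Item -Path $key -Force | Out-Null
    # Value name is the URL scheme; data 1 assigns the site to the Local intranet zone.
    New-ItemProperty -Path $key -Name 'https' -Value 1 -PropertyType DWord -Force | Out-Null
}
```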
active-directory | How To Hybrid Join Verify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md | + + Title: Verify hybrid Azure Active Directory join state +description: Verify configurations for hybrid Azure AD joined devices +++++ Last updated : 02/27/2023+++++++++# Verify hybrid Azure AD join ++Here are three ways to locate and verify the hybrid joined device state: ++## Locally on the device ++1. Open Windows PowerShell. +2. Enter `dsregcmd /status`. +3. Verify that both **AzureAdJoined** and **DomainJoined** are set to **YES**. +4. You can use the **DeviceId** and compare the status on the service using either the Azure portal or PowerShell. ++For downlevel devices, see the article [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md#step-1-retrieve-the-registration-status) ++## Using the Azure portal ++1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices). +2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./device-management-azure-portal.md). +3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle. +4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed. ++## Using PowerShell ++Verify the device registration state in your Azure tenant by using **[Get-MsolDevice](/powershell/module/msonline/get-msoldevice)**. This cmdlet is in the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-msonlinev1). ++When you use the **Get-MSolDevice** cmdlet to check the service details: ++- An object with the **device ID** that matches the ID on the Windows client must exist. +- The value for **DeviceTrustType** is **Domain Joined**. This setting is equivalent to the **Hybrid Azure AD joined** state on the **Devices** page in the Azure AD portal. +- For devices that are used in Conditional Access, the value for **Enabled** is **True** and **DeviceTrustLevel** is **Managed**. ++1. Open Windows PowerShell as an administrator. +2. Enter `Connect-MsolService` to connect to your Azure tenant. 
++### Count all Hybrid Azure AD joined devices (excluding **Pending** state) ++```azurepowershell +(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count +``` ++### Count all Hybrid Azure AD joined devices with **Pending** state ++```azurepowershell +(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count +``` ++### List all Hybrid Azure AD joined devices ++```azurepowershell +Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))} +``` ++### List all Hybrid Azure AD joined devices with **Pending** state ++```azurepowershell +Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))} +``` ++### List details of a single device: ++1. Enter `get-msoldevice -deviceId <deviceId>` (This **DeviceId** is obtained locally on the device). +2. Verify that **Enabled** is set to **True**. ++## Next steps ++- [Downlevel device enablement](how-to-hybrid-join-downlevel.md) +- [Configure hybrid Azure AD join](how-to-hybrid-join.md) +- [Troubleshoot pending device state](/troubleshoot/azure/active-directory/pending-devices) |
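The MSOnline (`Get-MsolDevice`) cmdlets shown above are on a deprecation path. A rough Microsoft Graph PowerShell equivalent for finding hybrid Azure AD joined devices is sketched below; it assumes the Microsoft Graph PowerShell modules are installed and filters client-side, because the pending-state check based on `AlternativeSecurityIds` doesn't translate one-to-one:

```azurepowershell
Connect-MgGraph -Scopes 'Device.Read.All'

# trustType 'ServerAd' corresponds to hybrid Azure AD joined devices.
$hybridDevices = Get-MgDevice -All | Where-Object { $_.TrustType -eq 'ServerAd' }

# Count and list the hybrid joined devices.
$hybridDevices.Count
$hybridDevices | Select-Object DisplayName, DeviceId, ApproximateLastSignInDateTime
```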
active-directory | How To Hybrid Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join.md | + + Title: Configure hybrid Azure Active Directory join +description: Learn how to configure hybrid Azure Active Directory join. +++++ Last updated : 10/26/2022+++++++++# Configure hybrid Azure AD join ++Bringing your devices to Azure AD maximizes user productivity through single sign-on (SSO) across your cloud and on-premises resources. You can secure access to your resources with [Conditional Access](../conditional-access/howto-conditional-access-policy-compliant-device.md) at the same time. ++> [!VIDEO https://www.youtube-nocookie.com/embed/hSCVR1oJhFI] ++## Prerequisites ++- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later. + - Don't exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10). + - If the computer objects of the devices you want to be hybrid Azure AD joined belong to specific organizational units (OUs), configure the correct OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unitΓÇôbased filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering). +- Global Administrator credentials for your Azure AD tenant. +- Enterprise administrator credentials for each of the on-premises Active Directory Domain Services forests. +- (**For federated domains**) At least Windows Server 2012 R2 with Active Directory Federation Services installed. +- Users can register their devices with Azure AD. More information about this setting can be found under the heading **Configure device settings**, in the article, [Configure device settings](device-management-azure-portal.md#configure-device-settings). ++### Network connectivity requirements ++Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network: ++- `https://enterpriseregistration.windows.net` +- `https://login.microsoftonline.com` +- `https://device.login.microsoftonline.com` +- `https://autologon.microsoftazuread-sso.com` (If you use or plan to use seamless SSO) +- Your organization's Security Token Service (STS) (**For federated domains**) ++> [!WARNING] +> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://device.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access. ++If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 or newer computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)). ++If you don't use WPAD, you can configure WinHTTP proxy settings on your computer with a Group Policy Object (GPO) beginning with Windows 10 1709. 
For more information, see [WinHTTP Proxy Settings deployed by GPO](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo). ++> [!NOTE] +> If you configure proxy settings on your computer by using WinHTTP settings, any computers that can't connect to the configured proxy will fail to connect to the internet. ++If your organization requires access to the internet via an authenticated outbound proxy, make sure that your Windows 10 or newer computers can successfully authenticate to the outbound proxy. Because Windows 10 or newer computers run device registration by using machine context, configure outbound proxy authentication by using machine context. Follow up with your outbound proxy provider on the configuration requirements. ++Verify devices can access the required Microsoft resources under the system account by using the [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script. ++## Managed domains ++We think most organizations will deploy hybrid Azure AD join with managed domains. Managed domains use [password hash sync (PHS)](../hybrid/whatis-phs.md) or [pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md) with [seamless single sign-on](../hybrid/how-to-connect-sso.md). Managed domain scenarios don't require configuring a federation server. ++Configure hybrid Azure AD join by using Azure AD Connect for a managed domain: ++1. Start Azure AD Connect, and then select **Configure**. +1. In **Additional tasks**, select **Configure device options**, and then select **Next**. +1. In **Overview**, select **Next**. +1. In **Connect to Azure AD**, enter the credentials of a Global Administrator for your Azure AD tenant. +1. In **Device options**, select **Configure Hybrid Azure AD join**, and then select **Next**. +1. In **Device operating systems**, select the operating systems that devices in your Active Directory environment use, and then select **Next**. +1. In **SCP configuration**, for each forest where you want Azure AD Connect to configure the SCP, complete the following steps, and then select **Next**. + 1. Select the **Forest**. + 1. Select an **Authentication Service**. + 1. Select **Add** to enter the enterprise administrator credentials. ++ ![Azure AD Connect SCP configuration managed domain](./media/how-to-hybrid-join/azure-ad-connect-scp-configuration-managed.png) ++1. In **Ready to configure**, select **Configure**. +1. In **Configuration complete**, select **Exit**. ++## Federated domains ++A federated environment should have an identity provider that supports the following requirements. If you have a federated environment using Active Directory Federation Services (AD FS), then the below requirements are already supported. ++- **WIAORMULTIAUTHN claim:** This claim is required to do hybrid Azure AD join for Windows down-level devices. +- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD. 
When you're using AD FS, you need to enable the following WS-Trust endpoints: + - `/adfs/services/trust/2005/windowstransport` + - `/adfs/services/trust/13/windowstransport` + - `/adfs/services/trust/2005/usernamemixed` + - `/adfs/services/trust/13/usernamemixed` + - `/adfs/services/trust/2005/certificatemixed` + - `/adfs/services/trust/13/certificatemixed` ++> [!WARNING] +> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**. ++Configure hybrid Azure AD join by using Azure AD Connect for a federated environment: ++1. Start Azure AD Connect, and then select **Configure**. +1. On the **Additional tasks** page, select **Configure device options**, and then select **Next**. +1. On the **Overview** page, select **Next**. +1. On the **Connect to Azure AD** page, enter the credentials of a Global Administrator for your Azure AD tenant, and then select **Next**. +1. On the **Device options** page, select **Configure Hybrid Azure AD join**, and then select **Next**. +1. On the **SCP** page, complete the following steps, and then select **Next**: + 1. Select the forest. + 1. Select the authentication service. You must select **AD FS server** unless your organization has exclusively Windows 10 or newer clients and you have configured computer/device sync, or your organization uses seamless SSO. + 1. Select **Add** to enter the enterprise administrator credentials. + + ![Azure AD Connect SCP configuration federated domain](./media/how-to-hybrid-join/azure-ad-connect-scp-configuration-federated.png) ++1. On the **Device operating systems** page, select the operating systems that the devices in your Active Directory environment use, and then select **Next**. +1. On the **Federation configuration** page, enter the credentials of your AD FS administrator, and then select **Next**. +1. On the **Ready to configure** page, select **Configure**. +1. On the **Configuration complete** page, select **Exit**. ++### Federation caveats ++With Windows 10 1803 or newer, if instantaneous hybrid Azure AD join for a federated environment using AD FS fails, we rely on Azure AD Connect to sync the computer object in Azure AD that's then used to complete the device registration for hybrid Azure AD join. ++## Other scenarios ++Organizations can test hybrid Azure AD join on a subset of their environment before a full rollout. The steps to complete a targeted deployment can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). Organizations should include a sample of users from varying roles and profiles in this pilot group. A targeted rollout will help identify any issues your plan may not have addressed before you enable for the entire organization. ++Some organizations may not be able to use Azure AD Connect to configure AD FS. The steps to configure the claims manually can be found in the article [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md). 
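Whether Azure AD Connect or the manual steps configure it, the service connection point (SCP) ends up in the configuration partition of your forest. The following is a hedged sketch for reading it back with the Active Directory PowerShell module; the container name shown is the well-known device registration GUID, but verify the path and attributes in your own forest:

```azurepowershell
Import-Module ActiveDirectory

# The SCP lives under the Device Registration Configuration container
# in the configuration naming context of the forest.
$configNC = (Get-ADRootDSE).configurationNamingContext
$scpPath  = "CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,$configNC"

# The keywords attribute holds the tenant name and tenant ID used for hybrid join.
Get-ADObject -Identity $scpPath -Properties keywords | Select-Object -ExpandProperty keywords
```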
++### US Government cloud (inclusive of GCCHigh and DoD) ++For organizations in [Azure Government](https://azure.microsoft.com/global-infrastructure/government/), hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network: ++- `https://enterpriseregistration.windows.net` **and** `https://enterpriseregistration.microsoftonline.us` +- `https://login.microsoftonline.us` +- `https://device.login.microsoftonline.us` +- `https://autologon.microsoft.us` (If you use or plan to use seamless SSO) ++## Troubleshoot hybrid Azure AD join ++If you experience issues with completing hybrid Azure AD join for domain-joined Windows devices, see: ++- [Troubleshooting devices using dsregcmd command](./troubleshoot-device-dsregcmd.md) +- [Troubleshoot hybrid Azure AD join for Windows current devices](troubleshoot-hybrid-join-windows-current.md) +- [Troubleshoot hybrid Azure AD join for Windows downlevel devices](troubleshoot-hybrid-join-windows-legacy.md) +- [Troubleshoot pending device state](/troubleshoot/azure/active-directory/pending-devices) ++## Next steps ++- [Downlevel device enablement](how-to-hybrid-join-downlevel.md) +- [Hybrid Azure AD join verification](how-to-hybrid-join-verify.md) +- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
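As a quick spot check that the endpoints listed in this article are reachable from inside your network (the full validation remains the Test Device Registration Connectivity script referenced earlier, which runs under the system account), you can probe them over port 443. A minimal sketch; swap in the Azure Government URLs above if that's your cloud:

```azurepowershell
$endpoints = @(
    'enterpriseregistration.windows.net',
    'login.microsoftonline.com',
    'device.login.microsoftonline.com',
    'autologon.microsoftazuread-sso.com'   # only if you use or plan to use seamless SSO
)

foreach ($endpoint in $endpoints) {
    # TcpTestSucceeded should be True for each required endpoint.
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```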
active-directory | Howto Manage Local Admin Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md | Conditional Access policies can be scoped to the built-in roles like Cloud Devic ### Is Windows LAPS with Azure AD management configuration supported using Group Policy Objects (GPO)? -Yes, for [hybrid Azure AD joined](concept-azure-ad-join-hybrid.md) devices only. See see [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy). +Yes, for [hybrid Azure AD joined](concept-hybrid-join.md) devices only. See [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy). ### Is Windows LAPS with Azure AD management configuration supported using MDM? -Yes, for [Azure AD join](concept-azure-ad-join.md)/[hybrid Azure AD join](concept-azure-ad-join-hybrid.md) ([co-managed](/mem/configmgr/comanage/overview)) devices. Customers can use [Microsoft Intune](/mem/intune/protect/windows-laps-overview) or any other third party MDM of their choice. +Yes, for [Azure AD join](concept-azure-ad-join.md)/[hybrid Azure AD join](concept-hybrid-join.md) ([co-managed](/mem/configmgr/comanage/overview)) devices. Customers can use [Microsoft Intune](/mem/intune/protect/windows-laps-overview) or any other third-party MDM of their choice. ### What happens when a device is deleted in Azure AD? |
active-directory | Hybrid Azuread Join Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-control.md | After you verify that everything works as expected, you can automatically regist ## Next steps - [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)-- [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)+- [Configure hybrid Azure AD join](how-to-hybrid-join.md) - [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md) |
active-directory | Hybrid Azuread Join Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-manual.md | -If using Azure AD Connect is an option for you, see the guidance in [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md). Using the automation in Azure AD Connect, will significantly simplify the configuration of hybrid Azure AD join. +If using Azure AD Connect is an option for you, see the guidance in [Configure hybrid Azure AD join](how-to-hybrid-join.md). Using the automation in Azure AD Connect will significantly simplify the configuration of hybrid Azure AD join. This article covers the manual configuration of requirements for hybrid Azure AD join including steps for managed and federated domains. You can configure hybrid Azure AD joined devices for various types of Windows de - For managed and federated domains, you must [configure a service connection point or SCP](#configure-a-service-connection-point). - For federated domains, you must ensure that your [federation service is configured to issue the appropriate claims](#set-up-issuance-of-claims). -After these configurations are complete, follow the guidance to [verify registration](howto-hybrid-join-verify.md) and [enable downlevel operating systems](howto-hybrid-join-downlevel.md) where necessary. +After these configurations are complete, follow the guidance to [verify registration](how-to-hybrid-join-verify.md) and [enable downlevel operating systems](how-to-hybrid-join-downlevel.md) where necessary. ### Configure a service connection point If you experience issues completing hybrid Azure AD join for domain-joined Windo ## Next steps -- [Hybrid Azure AD join verification](howto-hybrid-join-verify.md)-- [Downlevel device enablement](howto-hybrid-join-downlevel.md)+- [Hybrid Azure AD join verification](how-to-hybrid-join-verify.md) +- [Downlevel device enablement](how-to-hybrid-join-downlevel.md) - [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md) - [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
active-directory | Hybrid Azuread Join Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md | -> SSO access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md). +> SSO access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](device-sso-to-on-premises-resources.md). ## Prerequisites The following table provides details on support for these on-premises AD UPNs in ## Next steps -- [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)+- [Configure hybrid Azure AD join](how-to-hybrid-join.md) |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/overview.md | The modern device scenario focuses on two of these methods: - Windows 11 and Windows 10 devices owned by your organization - [Windows Server 2019 and newer servers in your organization running as VMs in Azure](howto-vm-sign-in-azure-ad-windows.md) -[Hybrid Azure AD join](concept-azure-ad-join-hybrid.md) is seen as an interim step on the road to Azure AD join. Hybrid Azure AD join provides organizations support for downlevel Windows versions back to Windows 7 and Server 2008. All three scenarios can coexist in a single organization. +[Hybrid Azure AD join](concept-hybrid-join.md) is seen as an interim step on the road to Azure AD join. Hybrid Azure AD join provides organizations support for downlevel Windows versions back to Windows 7 and Server 2008. All three scenarios can coexist in a single organization. ## Resource access Registering and joining devices to Azure AD gives users Seamless Sign-on (SSO) to cloud-based resources. -Devices that are Azure AD joined benefit from [SSO to your organization's on-premises resources](azuread-join-sso.md). +Devices that are Azure AD joined benefit from [SSO to your organization's on-premises resources](device-sso-to-on-premises-resources.md). ## Provisioning Getting devices in to Azure AD can be done in a self-service manner or a control - Learn more about [Azure AD registered devices](concept-azure-ad-register.md) - Learn more about [Azure AD joined devices](concept-azure-ad-join.md)-- Learn more about [hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md)+- Learn more about [hybrid Azure AD joined devices](concept-hybrid-join.md) - To get an overview of how to manage device identities in the Azure portal, see [Managing device identities using the Azure portal](device-management-azure-portal.md). - To learn more about device-based Conditional Access, see [Configure Azure Active Directory device-based Conditional Access policies](../conditional-access/require-managed-devices.md). |
active-directory | Plan Device Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md | There are multiple methods to integrate your devices into Azure AD, they can wor * You can [register devices](concept-azure-ad-register.md) with Azure AD. * [Join devices](concept-azure-ad-join.md) to Azure AD (cloud-only).-* [Hybrid Azure AD join](concept-azure-ad-join-hybrid.md) devices to your on-premises Active Directory domain and Azure AD. +* [Hybrid Azure AD join](concept-hybrid-join.md) devices to your on-premises Active Directory domain and Azure AD. ## Learn Before you begin, make sure that you're familiar with the [device identity manag The key benefits of giving your devices an Azure AD identity: -* Increase productivity ΓÇô Users can do [seamless sign-on (SSO)](./azuread-join-sso.md) to your on-premises and cloud resources, enabling productivity wherever they are. +* Increase productivity ΓÇô Users can do [seamless sign-on (SSO)](./device-sso-to-on-premises-resources.md) to your on-premises and cloud resources, enabling productivity wherever they are. * Increase security ΓÇô Apply [Conditional Access policies](../conditional-access/overview.md) to resources based on the identity of the device or user. Joining a device to Azure AD is a prerequisite for increasing your security with a [Passwordless](../authentication/concept-authentication-passwordless.md) strategy. If registering your devices is the best option for your organization, see the fo Azure AD join enables you to transition towards a cloud-first model with Windows. It provides a great foundation if you're planning to modernize your device management and reduce device-related IT costs. Azure AD join works with Windows 10 or newer devices only. Consider it as the first choice for new devices. -[Azure AD joined devices can SSO to on-premises resources](azuread-join-sso.md) when they are on the organization's network, can authenticate to on-premises servers like file, print, and other applications. +[Azure AD joined devices can SSO to on-premises resources](device-sso-to-on-premises-resources.md) when they are on the organization's network, can authenticate to on-premises servers like file, print, and other applications. If this option is best for your organization, see the following resources: * This overview of [Azure AD joined devices](concept-azure-ad-join.md).-* Familiarize yourself with the [Azure AD join implementation plan](azureadjoin-plan.md). +* Familiarize yourself with the [Azure AD join implementation plan](device-join-plan.md). ### Provisioning Azure AD Joined devices To provision devices to Azure AD join, you have the following approaches: -* Self-Service: [Windows 10 first-run experience](azuread-joined-devices-frx.md) +* Self-Service: [Windows 10 first-run experience](device-join-out-of-box.md) If you have either Windows 10 Professional or Windows 10 Enterprise installed on a device, the experience defaults to the setup process for company-owned devices. If you have either Windows 10 Professional or Windows 10 Enterprise installed on * [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot) * [Bulk Enrollment](/mem/intune/enrollment/windows-bulk-enroll) -Choose your deployment procedure after careful [comparison of these approaches](azureadjoin-plan.md). +Choose your deployment procedure after careful [comparison of these approaches](device-join-plan.md). 
You may determine that Azure AD join is the best solution for a device in a different state. The following table shows how to change the state of a device. Most organizations already have domain joined devices and manage them via Group If hybrid Azure AD join is the best option for your organization, see the following resources: -* This overview of [hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md). +* This overview of [hybrid Azure AD joined devices](concept-hybrid-join.md). * Familiarize yourself with the [hybrid Azure AD join implementation](hybrid-azuread-join-plan.md) plan. ### Provisioning hybrid Azure AD join to your devices [Review your identity infrastructure](hybrid-azuread-join-plan.md). Azure AD Connect provides you with a wizard to configure hybrid Azure AD join for: -* [Managed domains](howto-hybrid-azure-ad-join.md#managed-domains) -* [Federated domains](howto-hybrid-azure-ad-join.md#federated-domains) +* [Managed domains](how-to-hybrid-join.md#managed-domains) +* [Federated domains](how-to-hybrid-join.md#federated-domains) If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure hybrid Azure AD join](hybrid-azuread-join-manual.md). Administrators can also [deploy virtual desktop infrastructure (VDI) platforms]( ## Next steps -* [Plan your Azure AD join implementation](azureadjoin-plan.md) +* [Plan your Azure AD join implementation](device-join-plan.md) * [Plan your hybrid Azure AD join implementation](hybrid-azuread-join-plan.md) * [Manage device identities](device-management-azure-portal.md) |
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic - **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]->This information last updated on July 6th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). +>This information last updated on July 28th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). ><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic | Dynamics 365 Commerce Trial | DYN365_RETAIL_TRIAL | 1508ad2d-5802-44e6-bfe8-6fb65de63d28 | DYN365_RETAIL_TRIAL (874d6da5-2a67-45c1-8635-96e8b3e300ea)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Retail Trial (874d6da5-2a67-45c1-8635-96e8b3e300ea)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service 
(fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | Dynamics 365 Customer Insights Attach | DYN365_CUSTOMER_INSIGHTS_ATTACH | a3d0cd86-8068-4071-ad40-4dc5b5908c4b | CDS_CUSTOMER_INSIGHTS_BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>CDS_CUSTOMER_INSIGHTS (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>DYN365_CUSTOMER_INSIGHTS_BASE (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>Customer_Voice_Customer_Insights (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dataverse for Customer Insights BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>Common Data Service for Customer Insights (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>Dynamics 365 Customer Insights (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights App (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |+| Dynamics 365 Customer Insights Standalone | DYN365_CUSTOMER_INSIGHTS_BASE | 0c250654-c7f7-461f-871a-7222f6592cf2 | CDS_CUSTOMER_INSIGHTS_BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>CDS_CUSTOMER_INSIGHTS (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>DYN365_CUSTOMER_INSIGHTS_BASE (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE (b3c26516-3b8d-492f-a5a3-64d70ad3f8d0)<br/>Customer_Voice_Customer_Insights (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dataverse for Cust Insights BASE (d04ca659-b119-4a92-b8fc-3ede584a9d65)<br/>Common Data Service for Customer Insights (ca00cff5-2568-4d03-bb6c-a653a8f360ca)<br/>Dynamics 365 Customer Insights (ee85d528-c4b4-4a99-9b07-fb9a1365dc93)<br/>Dynamics 365 Customer Insights Engagement Insights (b3c26516-3b8d-492f-a5a3-64d70ad3f8d0)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights App (46c5ea0a-2343-49d9-ae4f-1c268b232d53)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Customer Insights vTrial | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | | Dynamics 365 Customer Service Enterprise Viral Trial | Dynamics_365_Customer_Service_Enterprise_viral_trial | 1e615a51-59db-4807-9957-aa83c3657351 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>DYN365_CS_MESSAGING_VIRAL_TRIAL (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>DYN365_CS_ENTERPRISE_VIRAL_TRIAL (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>DYNB365_CSI_VIRAL_TRIAL (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>DYN365_CS_VOICE_VIRAL_TRIAL (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL 
(54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Dynamics 365 Customer Service Digital Messaging vTrial (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>Dynamics 365 Customer Service Enterprise vTrial (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>Dynamics 365 Customer Service Insights vTrial (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>Dynamics 365 Customer Service Voice vTrial (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) | | Dynamics 365 for Customer Service Enterprise Attach to Qualifying Dynamics 365 Base Offer A | D365_CUSTOMER_SERVICE_ENT_ATTACH | eb18b715-ea9d-4290-9994-2ebf4b5042d2 | D365_CUSTOMER_SERVICE_ENT_ATTACH (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power_Pages_Internal_User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Customer Service Enterprise Attach (61a2665f-1873-488c-9199-c3d0bc213fdf)<br/>Power Pages Internal User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic | Dynamics 365 for Marketing USL | D365_MARKETING_USER | 4b32a493-9a67-4649-8eb9-9fc5a5f75c12 | DYN365_MARKETING_MSE_USER (2824c69a-1ac5-4397-8592-eae51cb8b581)<br/>DYN365_MARKETING_USER (5d7a6abc-eebd-46ab-96e1-e4a2f54a2248)<br/>Forms_Pro_Marketing (76366ba0-d230-47aa-8087-b6d55dae454f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Dynamics 365 for Marketing MSE User (2824c69a-1ac5-4397-8592-eae51cb8b581)<br/>Dynamics 365 for Marketing USL (5d7a6abc-eebd-46ab-96e1-e4a2f54a2248)<br/>Microsoft Dynamics 365 Customer Voice for Marketing (76366ba0-d230-47aa-8087-b6d55dae454f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) | | Dynamics 365 for Sales and Customer Service Enterprise Edition | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | Dynamics 365 for Sales Enterprise Edition | DYN365_ENTERPRISE_SALES | 
1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |+| Dynamics 365 Sales, Field Service and Customer Service Partner Sandbox | Dynamics_365_Sales_Field_Service_and_Customer_Service_Partner_Sandbox | 494721b8-1f30-4315-aba6-70ca169358d9 | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Forms_Pro_Service (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda) | Dynamics 365 Customer Engagement Plan (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Microsoft Dynamics 365 Customer Voice for Customer Service Enterprise (67bf4812-f90b-4db9-97e7-c0bbbf7b2d09)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda) | | Dynamics 365 Sales Premium | DYN365_SALES_PREMIUM | 2edaa1dc-966d-4475-93d6-8ee8dfd96877 | DYN365_SALES_INSIGHTS (fedc185f-0711-4cc0-80ed-0a92da1a8384)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Microsoft_Viva_Sales_PowerAutomate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>Microsoft_Viva_Sales_PremiumTrial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power_Pages_Internal_User (60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Forms_Pro_SalesEnt (8839ef0e-91f1-4085-b485-62e06e7c7987)<br/>DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7) | Dynamics 365 AI for Sales (Embedded) (fedc185f-0711-4cc0-80ed-0a92da1a8384)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Microsoft Viva Sales Premium with Power Automate (a933a62f-c3fb-48e5-a0b7-ac92b94b4420)<br/>Microsoft Viva Sales Premium & Trial (8ba1ff15-7bf6-4620-b65c-ecedb6942766)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Pages Internal User 
(60bf28f9-2b70-4522-96f7-335f5e06c941)<br/>Microsoft Dynamics 365 Customer Voice for Sales Enterprise (8839ef0e-91f1-4085-b485-62e06e7c7987)<br/>Dynamics 365 for Sales (2da8e897-7791-486b-b08f-cc63c8129df7) | | Dynamics 365 for Sales Professional | D365_SALES_PRO | be9f9771-1c64-4618-9907-244325141096 | DYN365_SALES_PRO (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_SALES_PRO (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>FLOW_SALES_PRO (f944d685-f762-4371-806d-a1f48e5bea13)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Sales Professional (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Sales Pro (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>Power Automate for Sales Pro (f944d685-f762-4371-806d-a1f48e5bea13)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | Dynamics 365 for Sales Professional Trial | D365_SALES_PRO_IW | 9c7bff7a-3715-4da7-88d3-07f57f8d0fb6 | D365_SALES_PRO_IW (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>D365_SALES_PRO_IW_Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) | Dynamics 365 for Sales Professional Trial (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>Dynamics 365 for Sales Professional Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) | |
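The string IDs and GUIDs listed in the row above can also be read programmatically rather than from the downloadable CSV. The following minimal sketch is not part of the original article; it lists a tenant's subscribed SKUs and their service plans with the Microsoft Graph .NET SDK and Azure.Identity, assuming an app registration holding a permission such as Organization.Read.All. The tenant ID, client ID, and client secret are placeholders.

```csharp
using System;
using Azure.Identity;
using Microsoft.Graph;

// Placeholder credentials for an app registration with application permission to read org data.
var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");
var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

// Each subscribed SKU maps to a "String ID" (SkuPartNumber) and "GUID" (SkuId) in the table above.
var skus = await graphClient.SubscribedSkus.GetAsync();
foreach (var sku in skus?.Value ?? new())
{
    Console.WriteLine($"{sku.SkuPartNumber} ({sku.SkuId})");
    foreach (var plan in sku.ServicePlans ?? new())
    {
        // Service plan name and GUID, matching the "Service plans included" columns.
        Console.WriteLine($"  {plan.ServicePlanName} ({plan.ServicePlanId})");
    }
}
```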
active-directory | B2b Sponsors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-sponsors.md | When you invite a guest user, you become their sponsor by default. If you need t ## Next steps - [Add and invite guest users](add-users-administrator.md)-- [Crete a new access package](/azure/active-directory/governance/entitlement-management-access-package-create#approval)+- [Create a new access package](/azure/active-directory/governance/entitlement-management-access-package-create#approval) - [Manage user profile info](/azure/active-directory/fundamentals/how-to-manage-user-profile-info) |
active-directory | B2b Tutorial Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md | To complete the scenario in this tutorial, you need: 1. In the left menu, under **Manage**, select **Users**. 1. Select **New user**, and then select **Invite external user**. - :::image type="content" source="media/tutorial-mfa/tutorial-mfa-new-user.png" alt-text="Screenshot showing where to select the new guest user option."::: + :::image type="content" source="media/tutorial-mfa/tutorial-mfa-new-user.png" alt-text="Screenshot showing where to select the new guest user option." lightbox="media/tutorial-mfa/tutorial-mfa-new-user.png"::: -1. Under **Identity**, enter the email address of the external user. Optionally, include a name and welcome message. +1. Under **Identity** on the **Basics** tab, enter the email address of the external user. Optionally, include a display name and welcome message. :::image type="content" source="media/tutorial-mfa/tutorial-mfa-new-user-identity.png" alt-text="Screenshot showing where to enter the guest email."::: -1. Select **Invite** to automatically send the invitation to the guest user. A **Successfully invited user** message appears. +1. Optionally, you can add further details to the user under the **Properties** and **Assignments** tabs. +1. Select **Review + invite** to automatically send the invitation to the guest user. A **Successfully invited user** message appears. 1. After you send the invitation, the user account is automatically added to the directory as a guest. ## Test the sign-in experience before MFA setup To complete the scenario in this tutorial, you need: 1. On the **Conditional Access** page, in the toolbar on the top, select **New policy**. 1. On the **New** page, in the **Name** textbox, type **Require MFA for B2B portal access**. 1. In the **Assignments** section, choose the link under **Users and groups**.-1. On the **Users and groups** page, choose **Select users and groups**, and then choose **Guest or external users**. You can assign the policy to different [external user types](authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types), built-in [directory roles](../conditional-access/concept-conditional-access-users-groups.md#include-users), or users and groups. +1. On the **Users and groups** page, choose **Select users and groups**, and then choose **Guest or external users**. You can assign the policy to different [external user types](authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types), built-in directory roles, or users and groups. :::image type="content" source="media/tutorial-mfa/tutorial-mfa-user-access.png" alt-text="Screenshot showing selecting all guest users."::: |
active-directory | Cross Tenant Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md | By default, B2B collaboration with other Azure AD organizations is enabled, and - **Inbound access settings** control whether users from external Azure AD organizations can access resources in your organization. You can apply these settings to everyone, or specify individual users, groups, and applications. -- **Trust settings** (inbound) determine whether your Conditional Access policies will trust the multi-factor authentication (MFA), compliant device, and [hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md) claims from an external organization if their users have already satisfied these requirements in their home tenants. For example, when you configure your trust settings to trust MFA, your MFA policies are still applied to external users, but users who have already completed MFA in their home tenants won't have to complete MFA again in your tenant.+- **Trust settings** (inbound) determine whether your Conditional Access policies will trust the multi-factor authentication (MFA), compliant device, and [hybrid Azure AD joined device](../devices/concept-hybrid-join.md) claims from an external organization if their users have already satisfied these requirements in their home tenants. For example, when you configure your trust settings to trust MFA, your MFA policies are still applied to external users, but users who have already completed MFA in their home tenants won't have to complete MFA again in your tenant. ## Default settings |
active-directory | How To Desktop App Electron Sample Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-desktop-app-electron-sample-sign-in.md | -# Sign in users in a sample Electron desktop application by using +# Sign in users in a sample Electron desktop application This how-to guide uses a sample Electron desktop application to show how to add authentication to a desktop application. The sample application enables users to sign in and sign out. The sample web application uses [Microsoft Authentication Library (MSAL)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) for Node to handle authentication. You can now test the sample Electron desktop app. After you run the app, the des - [Configure sign-in with Google](how-to-google-federation-customers.md). -- [Explore the Electron desktop app sample code](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/tree/main/1-Authentication/3-sign-in-electron#about-the-code).+- [Explore the Electron desktop app sample code](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/tree/main/1-Authentication/3-sign-in-electron#about-the-code). |
active-directory | How To Protect Web Api Dotnet Core Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-overview.md | - Title: Secure an ASP.NET web API using Microsoft Entra -description: Learn how to secure a web API registered in customer's tenant using Microoft Entra --------- Previously updated : 05/10/2023---#Customer intent: As a dev, I want to secure my web API registered in the customer's tenant using Microsoft Entra. ---# Secure an ASP.NET web API by using Microsoft Entra --Web APIs may contain sensitive information that requires user authentication and authorization. Microsoft identity platform provides capabilities for you to protect your web API against unauthorized access. Applications can use delegated access, acting on behalf of a signed-in user, or app-only access, acting only as the application's own identity when calling protected web APIs. --## Prerequisites --- [An API registration](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) that exposes at least one scope (delegated permissions) and one app role (application permission) such as *ToDoList.Read*. If you haven't already, register an API in the Microsoft Entra admin center by following the registration steps.--## Protecting a web API --The following are the steps you complete to protect your web API: --1. [Register your web API](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) in the Microsoft Entra admin center. -1. [Configure your web API](how-to-protect-web-api-dotnet-core-prepare-api.md). -1. [Protect your web API endpoints](how-to-protect-web-api-dotnet-core-protect-endpoints.md). -1. [Test your protected web API](how-to-protect-web-api-dotnet-core-test-api.md). --## Next steps --> [!div class="nextstepaction"] -> [Configure your web API >](how-to-protect-web-api-dotnet-core-prepare-api.md) |
active-directory | How To Protect Web Api Dotnet Core Prepare Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-prepare-api.md | - Title: Configure web API for protection using Microsoft Entra -description: Learn how to configure web API settings so as to protect it using Microsoft Entra. --------- Previously updated : 05/10/2023--#Customer intent: As a dev, I want to configure my web API settings so as to protect it using Microsoft Entra. ---# Secure an ASP.NET web API by using Microsoft Entra - configure your web API --In this how-to article, we go through the steps you take to configure your web API before securing its endpoints. When using the Microsoft identity platform to secure your web API, you first need to have it registered before configuring your API. --## Prerequisites --Go through the [overview of creating a protected web API](how-to-protect-web-api-dotnet-core-overview.md) before proceeding further with this tutorial. --## Create an ASP.NET Core web API --In this how to guide, we use Visual Studio Code and .NET 7.0. If you're using Visual Studio to create the API, see the [create a Create a web API with ASP.NET Core](/aspnet/core/tutorials/first-web-api). --1. Open the [integrated terminal](https://code.visualstudio.com/docs/editor/integrated-terminal). -1. Navigate to the folder where you want your project to live. -1. Run the following commands: -- ```dotnetcli - dotnet new webapi -o ToDoListAPI - cd ToDoListAPI - ``` --1. When a dialog box asks if you want to add required assets to the project, select **Yes**. --## Add packages --Install the following packages: --- `Microsoft.EntityFrameworkCore.InMemory` that allows Entity Framework Core to be used with an in-memory database. It's not designed for production use.-- `Microsoft.Identity.Web` simplifies adding authentication and authorization support to web apps and web APIs integrating with the Microsoft identity platform.-- ```dotnetcli - dotnet add package Microsoft.EntityFrameworkCore.InMemory - dotnet add package Microsoft.Identity.Web - ``` --## Configure app registration details --To protect your web API, you need to have the Application (Client) ID, Directory / Tenant ID and Directory / Tenant name that you have obtained during registration on the Microsoft Entra admin center. If you haven't registered your web API yet, kindly follow the [web API registration instructions](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) before proceeding. --Open the *appsettings.json* file in your app folder and add in the app registration details. --```json -{ - "AzureAd": { - "Instance": "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", - "TenantId": "Enter_the_Tenant_Id_Here", - "ClientId": "Enter_the_Application_Id_Here", - }, - "Logging": {...}, - "AllowedHosts": "*" -} -``` --Replace the following placeholders as shown: --- Replace `Enter_the_Application_Id_Here` with your application (client) ID.-- Replace `Enter_the_Tenant_Id_Here` with your Directory (tenant) ID.-- Replace `Enter_the_Tenant_Subdomain_Here` with your Directory (tenant) subdomain. For example, if your primary domain is *contoso.onmicrosoft.com*, replace `Enter_the_Tenant_Subdomain_Here` with *contoso*. 
--If you don't have these values, learn how to [read tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details) --## Add app role and scope --All APIs must publish a minimum of one scope, also called [Delegated Permission](/azure/active-directory/develop/permissions-consent-overview#types-of-permissions), for the client apps to obtain an access token for a user successfully. In a similar sense, APIs should also publish a minimum of one app role for applications, also called [Application Permission](/azure/active-directory/develop/permissions-consent-overview#types-of-permissions), for the client apps to obtain an access token as themselves, that is, when they aren't signing-in a user. --We specify these permissions in the *appsettings.json* file as configuration parameters. These permissions are registered via the Microsoft Entra admin center. For the purposes of this tutorial, we have registered four permissions. *ToDoList.ReadWrite* and *ToDoList.Read* as the delegated permissions, and *ToDoList.ReadWrite.All* and *ToDoList.Read.All* as the application permissions. --```json -{ - "AzureAd": { - "Instance": "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", - "TenantId": "Enter_the_Tenant_Id_Here", - "ClientId": "Enter_the_Application_Id_Here", - "Scopes": { - "Read": ["ToDoList.Read", "ToDoList.ReadWrite"], - "Write": ["ToDoList.ReadWrite"] - }, - "AppPermissions": { - "Read": ["ToDoList.Read.All", "ToDoList.ReadWrite.All"], - "Write": ["ToDoList.ReadWrite.All"] - } - }, - "Logging": {...}, - "AllowedHosts": "*" -} -``` ---## Add authentication scheme --An [authentication scheme](/aspnet/core/security/authorization/limitingidentitybyscheme) is named when the authentication service is configured during authentication. In this article, we use the JWT bearer authentication scheme. --Add the following code in the *Programs.cs* file to add authentication scheme. --```csharp -// Add the following to your imports -using Microsoft.AspNetCore.Authentication.JwtBearer; -using Microsoft.Identity.Web; --// Add authentication scheme -builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) - .AddMicrosoftIdentityWebApi(builder.Configuration); -``` --## Create your models --Create a folder called *Models* in the root folder of your project. Navigate to the folder and create a file called *ToDo.cs* and add the following code. This code creates a model called *ToDo*. --```csharp -using System; --namespace ToDoListAPI.Models; --public class ToDo -{ - public int Id { get; set; } - public Guid Owner { get; set; } - public string Description { get; set; } = string.Empty; -} -``` --## Add a database context --The database context is the main class that coordinates Entity Framework functionality for a data model. This class is created by deriving from the [*Microsoft.EntityFrameworkCore.DbContext*](/dotnet/api/microsoft.entityframeworkcore.dbcontext) class. In this article, we use an in-memory database for testing purposes. --Create a folder called *DbContext* in the root folder of the project. Navigate into that folder and create a file called *ToDoContext.cs*. Add the following contents to that file: --```csharp -using Microsoft.EntityFrameworkCore; -using ToDoListAPI.Models; --namespace ToDoListAPI.Context; --public class ToDoContext : DbContext -{ - public ToDoContext(DbContextOptions<ToDoContext> options) : base(options) - { - } -- public DbSet<ToDo> ToDos { get; set; } -} -``` --Add the following code in the *Program.cs* file. 
--```csharp -// Add the following to your imports -using ToDoListAPI.Context; -using Microsoft.EntityFrameworkCore; --builder.Services.AddDbContext<ToDoContext>(opt => - opt.UseInMemoryDatabase("ToDos")); -``` --## Next steps --> [!div class="nextstepaction"] -> [Protect your web API endpoints >](how-to-protect-web-api-dotnet-core-protect-endpoints.md) |
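The removed article above introduces the Program.cs additions piece by piece (the JWT bearer authentication scheme and the in-memory database context) but never shows the assembled file. The following is a minimal consolidated Program.cs sketch for the same ToDoListAPI project; the registrations come from the article's own snippets, while the middleware wiring (UseAuthentication, UseAuthorization, MapControllers) is standard ASP.NET Core boilerplate assumed here rather than quoted from the original file.

```csharp
// Consolidated Program.cs sketch for the ToDoListAPI project described above.
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.EntityFrameworkCore;
using Microsoft.Identity.Web;
using ToDoListAPI.Context;

var builder = WebApplication.CreateBuilder(args);

// Validate bearer tokens issued by the tenant configured in the "AzureAd" section of appsettings.json.
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration);

// In-memory database for testing only, as noted in the article.
builder.Services.AddDbContext<ToDoContext>(opt => opt.UseInMemoryDatabase("ToDos"));

builder.Services.AddControllers();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();
```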
active-directory | How To Protect Web Api Dotnet Core Protect Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-protect-endpoints.md | - Title: Secure web API endpoints using Microsoft Entra -description: Learn how to secure endpoints of a web API registered in customer's tenant using Microoft Entra --------- Previously updated : 05/10/2023--#Customer intent: As a dev, I want to secure endpoints of my web API registered in the customer's tenant using Microsoft Entra. ---# Secure an ASP.NET web API by using Microsoft Entra - secure web API endpoints --Controllers handle requests that come in through the API endpoints. Controllers are made of Action methods. To protect our resources, we protect the API endpoints by adding security features to our controllers. Create a folder called *Controllers* in the project root folder. Navigate into this folder and create a file called *ToDoListController.cs*. --## Prerequisites --[Configure your web API](how-to-protect-web-api-dotnet-core-prepare-api.md) before going through this article. --## Add the code --We begin adding controller actions to our controller. In most cases, the controller would have more than one action. Typically Create, Read, Update, and Delete (CRUD) actions. For more information, see the article on [how to create a .NET web API doc](/aspnet/core/tutorials/first-web-api?view=aspnetcore-7.0&tabs=visual-studio-code&preserve-view=true#scaffold-a-controller). For the purposes of this article, we demonstrate using two action items, a read all action item and a create action item, how to protect your endpoints. For a full example, see the [samples file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/2-Authorization/3-call-own-api-dotnet-core-daemon/ToDoListAPI/Controllers/ToDoListController.cs). --Our boiler plate code for the controller looks as follows: --```csharp -using Microsoft.AspNetCore.Authorization; -using Microsoft.AspNetCore.Mvc; -using Microsoft.EntityFrameworkCore; -using Microsoft.Identity.Web; -using Microsoft.Identity.Web.Resource; -using ToDoListAPI.Models; -using ToDoListAPI.Context; --namespace ToDoListAPI.Controllers; --[Authorize] -[Route("api/[controller]")] -[ApiController] -public class ToDoListController : ControllerBase -{ - private readonly ToDoContext _toDoContext; -- public ToDoListController(ToDoContext toDoContext) - { - _toDoContext = toDoContext; - } -- [HttpGet()] - [RequiredScopeOrAppPermission()] - public async Task<IActionResult> GetAsync(){...} - - [HttpPost] - [RequiredScopeOrAppPermission()] - public async Task<IActionResult> PostAsync([FromBody] ToDo toDo){...} -- private bool RequestCanAccessToDo(Guid userId){...} -- private Guid GetUserId(){...} -- private bool IsAppMakingRequest(){...} -} -``` --## Explanation for the code snippets --In this section, we go through the code to see we protect our API by adding code into the placeholders we created. The focus here isn't on building the API, but rather protecting it. --1. Import the necessary packages. The [*Microsoft.Identity.Web*](/azure/active-directory/develop/microsoft-identity-web) package is an MSAL wrapper that helps us easily handle authentication logic, for example, by handling token validation. We also use the permissions definitions that we defined in the *appsettings.json* configuration file. 
To ensure that our endpoints require authorization, we use the inbuilt [*Microsoft.AspNetCore.Authorization*](/dotnet/api/microsoft.aspnetcore.authorization) package. --1. Since we granted permissions for this API to be called either using delegated permissions on behalf of the user or application permissions where the client calls as itself and not on the user's behalf, it's important to know whether the call is being made by the app on its own behalf. To do this, we check the claims to find whether the access token contains the `idtyp` optional claim. This claim is the most accurate way for the API to determine whether a token is an app token or an app + user token. Configure your API to use this [optional claim](/azure/active-directory/develop/active-directory-optional-claims) via your API app registration if you haven't. -- ```csharp - private bool IsAppMakingRequest() - { - // Add in the optional 'idtyp' claim to check if the access token is coming from an application or user. - // See: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims - if (HttpContext.User.Claims.Any(c => c.Type == "idtyp")) - { - return HttpContext.User.Claims.Any(c => c.Type == "idtyp" && c.Value == "app"); - } - else - { - // alternatively, if an AT contains the roles claim but no scp claim, that indicates it's an app token - return HttpContext.User.Claims.Any(c => c.Type == "roles") && !HttpContext.User.Claims.Any(c => c.Type == "scp"); - } - } - ``` --1. Add a helper function that determines whether the request being made contains enough permissions to carry out the intended action. To do this, we check whether it's the app making the request on its own behalf or whether the app is making the call on behalf of a user who owns the given resource by validating the user ID. -- ```csharp - private bool RequestCanAccessToDo(Guid userId) - { - return IsAppMakingRequest() || (userId == GetUserId()); - } -- private Guid GetUserId() - { - Guid userId; -- if (!Guid.TryParse(HttpContext.User.GetObjectId(), out userId)) - { - throw new Exception("User ID is not valid."); - } -- return userId; - } - ``` --1. Plug in our permission definitions to protect our routes. We protect our API by adding the `[Authorize]` attribute to the controller class. This ensures the controller actions can be called only if the API is called with an authorized identity. The permission definitions define what kinds of permissions are needed to perform these actions. -- ```csharp - [Authorize] - [Route("api/[controller]")] - [ApiController] - public class ToDoListController: ControllerBase{...} - ``` -- Here, we add permissions to the GET all endpoint and the POST endpoint. We do this by using the [*RequiredScopeOrAppPermission*](/dotnet/api/microsoft.identity.web.resource.requiredscopeorapppermissionattribute) method that is part of the *Microsoft.Identity.Web.Resource* namespace. We then pass our scopes and permissions to this method via the *RequiredScopesConfigurationKey* and *RequiredAppPermissionsConfigurationKey* attributes. -- ```csharp - [HttpGet] - [RequiredScopeOrAppPermission( - RequiredScopesConfigurationKey = "AzureAD:Scopes:Read", - RequiredAppPermissionsConfigurationKey = "AzureAD:AppPermissions:Read" - )] - public async Task<IActionResult> GetAsync() - { - var toDos = await _toDoContext.ToDos! 
- .Where(td => RequestCanAccessToDo(td.Owner)) - .ToListAsync(); -- return Ok(toDos); - } -- [HttpPost] - [RequiredScopeOrAppPermission( - RequiredScopesConfigurationKey = "AzureAD:Scopes:Write", - RequiredAppPermissionsConfigurationKey = "AzureAD:AppPermissions:Write" - )] - public async Task<IActionResult> PostAsync([FromBody] ToDo toDo) - { - // Only let applications with global to-do access set the user ID or to-do's - var ownerIdOfTodo = IsAppMakingRequest() ? toDo.Owner : GetUserId(); -- var newToDo = new ToDo() - { - Owner = ownerIdOfTodo, - Description = toDo.Description - }; -- await _toDoContext.ToDos!.AddAsync(newToDo); - await _toDoContext.SaveChangesAsync(); -- return Created($"/todo/{newToDo!.Id}", newToDo); - } - ``` --## Run your API --Run your API to ensure that it's running well without any errors using the command `dotnet run`. If you intend to use https protocol even during testing, you need to [trust .NET's development certificate](/aspnet/core/tutorials/first-web-api#test-the-project). --## Next steps --> [!div class="nextstepaction"] -> [Test your protected web API >](how-to-protect-web-api-dotnet-core-test-api.md) |
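The article above ends by pointing to testing the protected web API. As a rough illustration of that step, not taken from the article, the sketch below acquires an app-only token through the MSAL.NET client credentials flow and calls the protected GET endpoint. The client ID, client secret, tenant subdomain and ID, app ID URI, and port are all placeholders that depend on your own registrations.

```csharp
// Sketch only: call the protected GET endpoint with an app-only token (client credentials flow).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class TestClient
{
    static async Task Main()
    {
        var app = ConfidentialClientApplicationBuilder
            .Create("<client-app-client-id>")
            .WithClientSecret("<client-secret>")
            .WithAuthority("https://<tenant-subdomain>.ciamlogin.com/<tenant-id>")
            .Build();

        // ".default" requests the application permissions (app roles) granted to this client.
        var result = await app.AcquireTokenForClient(
            new[] { "api://<api-client-id>/.default" }).ExecuteAsync();

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);

        // Port is a placeholder; use the one your API listens on.
        var response = await http.GetAsync("https://localhost:7001/api/todolist");
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```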
active-directory | How To Register Ciam App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-register-ciam-app.md | During app registration, you specify the redirect URI. The redirect URI is the e Azure AD for customers supports authentication for various modern application architectures, for example web app or single-page app. The interaction of each application type with the customer tenant is different, therefore, you must specify the type of application you want to register. -In this article, you’ll learn how to register an application in your customer tenant. +In this article, you learn how to register an application in your customer tenant. ## Prerequisites In this article, you’ll learn how to register an application in your customer ## Choose your app type # [Single-page app (SPA)](#tab/spa)-## How to register your Single-page app? +## Register your Single-page app -The following steps show you how to register your app in the admin center: +Azure AD for customers supports authentication for Single-page apps (SPAs). ++The following steps show you how to register your SPA in the Microsoft Entra admin center: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). The following steps show you how to register your app in the admin center: 1. In the **Register an application page** that appears, enter your application's registration information: - 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example *ciam-client-app*. + 1. In the **Name** section, enter a meaningful application name that is displayed to users of the app, for example *ciam-client-app*. 1. Under **Supported account types**, select **Accounts in this organizational directory only**. The following steps show you how to register your app in the admin center: [!INCLUDE [add about redirect URI](../customers/includes/register-app/about-redirect-url.md)] -### Add delegated permissions +### Grant delegated permissions This app signs in users. You can add delegated permissions to it, by following the steps below: [!INCLUDE [grant permision for signing in users](../customers/includes/register-app/grant-api-permission-sign-in.md)] -### To call an API follow the steps below (optional): +### Grant API permissions (optional): ++If your SPA needs to call an API, you must grant your SPA API permissions so it can call the API. You must also [register the web API](how-to-register-ciam-app.md?tabs=webapi) that you need to call. + [!INCLUDE [grant permisions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] If you'd like to learn how to expose the permissions by adding a link, go to the [Web API](how-to-register-ciam-app.md?tabs=webapi) section. If you'd like to learn how to expose the permissions by adding a link, go to the - [Sign in users in a sample vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-sample-sign-in.md) # [Web app](#tab/webapp)-## How to register your Web app? +## Register your Web app ++Azure AD for customers supports authentication for web apps. -The following steps show you how to register your app in the admin center: +The following steps show you how to register your web app in the Microsoft Entra admin center: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). The following steps show you how to register your app in the admin center: 1. 
In the **Register an application page** that appears, enter your application's registration information: - 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example *ciam-client-app*. + 1. In the **Name** section, enter a meaningful application name that is displayed to users of the app, for example *ciam-client-app*. 1. Under **Supported account types**, select **Accounts in this organizational directory only**. - 1. Under **Redirect URI (optional)**, select **Web** and then, in the URL box, enter `http://localhost:3000/`. + 1. Under **Redirect URI (optional)**, select **Web** and then, in the URL box, enter a URL such as, `http://localhost:3000/`. 1. Select **Register**. This app signs in users. You can add delegated permissions to it, by following t ### Create a client secret  [!INCLUDE [add a client secret](../customers/includes/register-app/add-app-client-secret.md)] -### To call an API follow the steps below (optional): +### Grant API permissions (optional) ++If your web app needs to call an API, you must grant your web app API permissions so it can call the API. You must also [register the web API](how-to-register-ciam-app.md?tabs=webapi) that you need to call. + [!INCLUDE [grant permissions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] ## Next steps This app signs in users. You can add delegated permissions to it, by following t - [Sign in users in a sample Node.js web app](how-to-web-app-node-sample-sign-in.md) # [Web API](#tab/webapi)-## How to register your Web API? +## Register your Web API + [!INCLUDE [register app](../customers/includes/register-app/register-api-app.md)] This app signs in users. You can add delegated permissions to it, by following t [!INCLUDE [expose permissions](../customers/includes/register-app/add-api-scopes.md)] -### To add app roles follow the steps below (optional): +### Add app roles [!INCLUDE [configure app roles](../customers/includes/register-app/add-app-role.md)] This app signs in users. You can add delegated permissions to it, by following t - [Create a sign-up and sign-in user flow](how-to-user-flow-sign-up-sign-in-customers.md) # [Desktop or Mobile app](#tab/desktopmobileapp)-## How to register your Desktop or Mobile app? +## Register your Desktop or Mobile app -The following steps show you how to register your app in the admin center: +The following steps show you how to register your app in the Microsoft Entra admin center: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). The following steps show you how to register your app in the admin center: 1. In the **Register an application page** that appears, enter your application's registration information: - 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example *ciam-client-app*. + 1. In the **Name** section, enter a meaningful application name that is displayed to users of the app, for example *ciam-client-app*. 1. Under **Supported account types**, select **Accounts in this organizational directory only**. 
The following steps show you how to register your app in the admin center: ### Add delegated permissions [!INCLUDE [grant permission for signing in users](../customers/includes/register-app/grant-api-permission-sign-in.md)] -### To call an API follow the steps below (optional): +### Grant API permissions (optional) ++If your mobile app needs to call an API, you must grant your mobile app API permissions so it can call the API. You must also [register the web API](how-to-register-ciam-app.md?tabs=webapi) that you need to call. [!INCLUDE [grant permissions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] ## Next steps The following steps show you how to register your app in the admin center: - [Sign in users in a sample Electron desktop app](how-to-desktop-app-electron-sample-sign-in.md) # [Daemon app](#tab/daemonapp)-## How to register your Daemon app? +## Register your Daemon app [!INCLUDE [register daemon app](../customers/includes/register-app/register-daemon-app.md)] -### To call an API follow the steps below (optional) -A daemon app signs-in as itself using the [OAuth 2.0 client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow), you add application permissions, which is required by apps that authenticate as themselves: +### Grant API permissions ++A daemon app signs-in as itself using the [OAuth 2.0 client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow). You grant application permissions (app roles), which is required by apps that authenticate as themselves. You must also [register the web API](how-to-register-ciam-app.md?tabs=webapi) that your daemon app needs to call. [!INCLUDE [register daemon app](../customers/includes/register-app/grant-api-permissions-app-permissions.md)] A daemon app signs-in as itself using the [OAuth 2.0 client credentials flow](/a - [Create a sign-up and sign-in user flow](how-to-user-flow-sign-up-sign-in-customers.md) # [Microsoft Graph API](#tab/graphapi)-## How to register a Microsoft Graph API application? +## Register a Microsoft Graph API application [!INCLUDE [register client app](../customers/includes/register-app/register-client-app-common.md)] ### Grant API Access to your application A daemon app signs-in as itself using the [OAuth 2.0 client credentials flow](/a [!INCLUDE [add app client secret](../customers/includes/register-app/add-app-client-secret.md)] ## Next steps-- Learn more how to manage [Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md)+- Learn more how to manage [Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md) |
active-directory | How To Web App Node Sign In Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-call-api-call-api.md | You add code in *routes/todos.js*, *controller/todolistController.js* and *fetc At this point, you're ready to test your client web app and web API. -1. Use the steps you learned in [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) article to start your web API. Your web API is now ready to serve client requests. +1. Use the steps you learned in [Secure an ASP.NET web API](./tutorial-protect-web-api-dotnet-core-build-app.md) article to start your web API. Your web API is now ready to serve client requests. 1. In your terminal, make sure you're in the project folder that contains your client web app such as `ciam-sign-in-call-api-node-express-web-app`, then run the following command: You may want to: - [Configure sign-in with Google](how-to-google-federation-customers.md) -- [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md)+- [Sign in users in your own Node.js web application](tutorial-web-app-node-sign-in-prepare-tenant.md) |
active-directory | How To Web App Node Sign In Call Api Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-call-api-prepare-app.md | In this article, you create app projects for both the client web app and web API ## Build ASP.NET web API -You must first create a protected web API, which the client web calls by presenting a valid token. To do so, complete the steps in [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) article. In this article, you learn how to create and protect ASP.NET API endpoints, and run and test the API. +You must first create a protected web API, which the client web calls by presenting a valid token. To do so, complete the steps in [Secure an ASP.NET web API](./tutorial-protect-web-api-dotnet-core-build-app.md) article. In this article, you learn how to create and protect ASP.NET API endpoints, and run and test the API. Before you proceed to this article, make sure you've [registered a web API app in Microsoft Entra admin center](how-to-web-app-node-sign-in-call-api-prepare-tenant.md#register-a-web-application-and-a-web-api). |
active-directory | How To Web App Node Sign In Call Api Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-call-api-prepare-tenant.md | |
active-directory | How To Web App Node Sign In Call Api Sign In Acquire Access Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-call-api-sign-in-acquire-access-token.md | The `/` route is the entry point to the application. It renders the `views/index 1. In your code editor, open *auth/AuthProvider.js* file, then add the code from [AuthProvider.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/2-Authorization/4-call-api-express/App/auth/AuthProvider.js) to it. -The `/signin`, `/signout` and `/redirect` routes are defined in the *routes/auth.js* file, but their logic live in *auth/AuthProvider.js* file. +The `/signin`, `/signout` and `/redirect` routes are defined in the *routes/auth.js* file, but you implement their logic in *auth/AuthProvider.js* file. - The `login` method handles`/signin` route: In your code editor, open *routes/users.js* file, then add the following code: module.exports = router; ```-If the user is authenticated, the `/id` route displays ID token claims by using the `views/id.hbs` view. You added this view earlier in [Build app UI components](how-to-web-app-node-sign-in-prepare-app.md#build-app-ui-components). +If the user is authenticated, the `/id` route displays ID token claims by using the `views/id.hbs` view. You added this view earlier in [Build app UI components](tutorial-web-app-node-sign-in-prepare-app.md#build-app-ui-components). To extract a specific ID token claim, such as *given name*: ```javascript const givenName = req.session.account.idTokenClaims.given_name |
active-directory | How To Web App Node Sign In Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-overview.md | - Title: Sign in users in your own Node.js web application -description: Learn about how to Sign in users in your own Node.js web application. --------- Previously updated : 05/22/2023--#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my own Node.js web app with Azure Active Directory (Azure AD) for customers tenant ---# Sign in users in your own Node.js web application --In this article, you learn how to sign in users in your own Node.js web application that you build. You add authentication to your web application against your Azure Active Directory (Azure AD) for customers tenant. --We've organized the content into three separate articles so it's easy for you to follow: --- [Prepare your Azure AD for customers tenant](how-to-web-app-node-sign-in-prepare-tenant.md) tenant guides you how to register your app and configure user flows in the Microsoft Entra admin center.--- [Prepare your web application](how-to-web-app-node-sign-in-prepare-app.md) guides you how to set up your Node.js app structure.--- [Add sign-in and sign-out](how-to-web-app-node-sign-in-sign-in-out.md) guides you how to add authentication to your application by using MSAL.--## Overview --OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use OIDC to securely sign users in to an application. The application you build uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to simplify adding authentication to your node web application. --The sign-in flow involves the following steps: --1. Users go to the web app and initiate a sign-in flow. --1. The app initiates an authentication request and redirects users to Azure AD for customers. --1. Users sign up, sign in or reset the password. Users cal also sign in with a social account it's configured. --1. After users sign in successfully, Azure AD for customers returns an ID token to the web app. --1. The web app reads the ID token claims, and then displays a secure page to users. --## Prerequisites --- [Node.js](https://nodejs.org).--- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.--- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. ---If you want to run a sample Node.js web application to get a feel of how things work, complete the steps in [Sign in users in a sample Node.js web application](how-to-web-app-node-sample-sign-in.md) --## Next steps --Next, learn how to prepare your Azure AD for customers tenant. --> [!div class="nextstepaction"] -> [Prepare your Azure AD for customers tenant for authentication >](how-to-web-app-node-sign-in-prepare-tenant.md) |
active-directory | How To Web App Node Use Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-use-certificate.md | Once you associate your app registration with the certificate, you need to updat }); //... ```-1. Use the steps in [Run and test the web app](how-to-web-app-node-sign-in-sign-in-out.md#run-and-test-the-web-app) to test your app. +1. Use the steps in [Run and test the web app](tutorial-web-app-node-sign-in-sign-out.md#run-and-test-the-web-app) to test your app. ## Use a self-signed certificate directly from Azure Key Vault You can use your existing certificate directly from Azure Key Vault: } ``` -1. Use the steps in [Run and test the web app](how-to-web-app-node-sign-in-sign-in-out.md#run-and-test-the-web-app) to test your app. +1. Use the steps in [Run and test the web app](tutorial-web-app-node-sign-in-sign-out.md#run-and-test-the-web-app) to test your app. ## Next steps |
active-directory | How To Web App Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-role-based-access-control.md | In this article, you learn how to receive user roles or group membership or both - If you've not done so, complete the steps in [Using role-based access control for applications](how-to-use-app-roles-customers.md) article. This article shows you how to create roles for your application, how to assign users and groups to those roles, how to add members to a group and how to add a group claim to a to security token. Learn more about [ID tokens](../../develop/id-tokens.md) and [access tokens](../../develop/access-tokens.md). -- If you've not done so, complete the steps in [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md)+- If you've not done so, complete the steps in [Sign in users in your own Node.js web application](tutorial-web-app-node-sign-in-prepare-tenant.md) ## Receive groups and roles claims in your Node.js web app |
active-directory | Sample Daemon Dotnet Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-daemon-dotnet-call-api.md | |
active-directory | Sample Desktop Wpf Dotnet Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-desktop-wpf-dotnet-sign-in.md | + + Title: Sign in users in a sample WPF desktop application +description: Learn how to configure a sample WPF desktop to sign in and sign out users. +++++++++ Last updated : 07/26/2023+++#Customer intent: As a dev, devops, I want to learn about how to configure a sample WPF desktop app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant +++# Sign in users in a sample WPF desktop application ++This article uses a sample Windows Presentation Foundation (WPF) application to show you how to add authentication to a WPF desktop application. The sample application enables users to sign in and sign out. The sample desktop application uses [Microsoft Authentication Library for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) for .NET to handle authentication. ++## Prerequisites ++- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. ++- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). ++## Register the desktop app +++## Specify your app platform +++## Grant API permissions ++Since this app signs-in users, add delegated permissions: +++## Create a user flow +++## Associate the WPF application with the user flow +++## Clone or download sample WPF application ++To get the WPF desktop app sample code, you can do either of the following tasks: ++- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/archive/refs/heads/main.zip) or clone the sample desktop application from GitHub by running the following command: ++ ```console + git clone https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial.git + ``` ++If you choose to download the *.zip* file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters. ++## Configure the sample WPF app ++1. Open the project in your IDE (like Visual Studio or Visual Studio Code) to configure the code. ++1. In your code editor, open the *appsettings.json* file in the **ms-identity-ciam-dotnet-tutorial** > **1-Authentication** > **5-sign-in-dotnet-wpf** folder. ++1. Replace `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier. + +1. Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. For example, if your primary domain is *contoso.onmicrosoft.com*, replace `Enter_the_Tenant_Subdomain_Here` with *contoso*. If you don't have your primary domain, learn how to [read tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). ++## Run and test sample WPF desktop app ++1. Open a console window, and change to the directory that contains the WPF desktop sample app: ++ ```console + cd 1-Authentication\5-sign-in-dotnet-wpf + ``` ++1. In your terminal, run the app by running the following command: ++ ```console + dotnet run + ``` ++1. After you launch the sample you should see a window with a **Sign-In** button. Select the **Sign-In** button. 
++ :::image type="content" source="./media/sample-wpf-dotnet-sign-in/wpf-sign-in-screen.png" alt-text="Screenshot of sign-in screen for a WPF desktop application."::: ++1. On the sign-in page, enter your account email address. If you don't have an account, select **No account? Create one**, which starts the sign-up flow. Follow through this flow to create a new account and sign in. +1. Once you sign in, you'll see a screen displaying successful sign-in and basic information about your user account stored in the retrieved token. ++ :::image type="content" source="./media/sample-wpf-dotnet-sign-in/wpf-successful-sign-in.png" alt-text="Screenshot of successful sign-in for desktop WPF app."::: ++### How it works ++The main configuration for the public client application is handled within the *App.xaml.cs* file. A `PublicClientApplication` is initialized along with a cache for storing access tokens. The application will first check whether there's a cached token that can be used to sign the user in. If there's no cached token, the user will be prompted to provide credentials and sign-in. Upon signing-out, the cache is cleared of all accounts and all corresponding access tokens. ++## Next steps ++See the tutorial on how to [build your own WPF desktop app that authenticates users](tutorial-desktop-wpf-dotnet-sign-in-prepare-tenant.md) |
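The "How it works" note above describes the cached-token-first sign-in pattern used by the WPF sample. The condensed MSAL.NET sketch below illustrates that pattern; it is not the sample's actual code, the client ID and tenant subdomain are placeholders, and the real sample additionally persists its token cache to disk.

```csharp
// Condensed sketch of the silent-then-interactive sign-in pattern described above.
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class SignInHelper
{
    private static readonly IPublicClientApplication App =
        PublicClientApplicationBuilder
            .Create("<client-id>")
            .WithAuthority("https://<tenant-subdomain>.ciamlogin.com/")
            .WithRedirectUri("http://localhost")
            .Build();

    public static async Task<AuthenticationResult> SignInAsync(string[] scopes)
    {
        var account = (await App.GetAccountsAsync()).FirstOrDefault();
        try
        {
            // Use a cached token when one is available for the account.
            return await App.AcquireTokenSilent(scopes, account).ExecuteAsync();
        }
        catch (MsalUiRequiredException)
        {
            // No usable cached token: prompt the user to sign in interactively.
            return await App.AcquireTokenInteractive(scopes).ExecuteAsync();
        }
    }
}
```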
active-directory | Sample Single Page App Angular Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-angular-sign-in.md | + + Title: Sign in users in a sample Angular single-page application. +description: Learn how to configure a sample Angular Single Page Application (SPA) using Azure Active Directory for Customers ++++++++ Last updated : 06/23/2023+++#Customer intent: As a dev, devops, I want to learn about how to configure a sample Angular Single Page Application to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant +++# Sign in users in a sample Angular single-page application ++This how-to guide uses a sample Angular single-page application (SPA) to demonstrate how to add authentication users into a SPA. The SPA enables users to sign in and sign out by using your Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication. ++## Prerequisites ++* Although any IDE that supports vanilla JS applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. +* [Node.js](https://nodejs.org/en/download/). +* Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. +++## Register the SPA in the Microsoft Entra admin center +++## Grant API permissions +++## Create a user flow +++## Associate the SPA with the user flow +++## Clone or download sample SPA ++To get the sample SPA, you can choose one of the following options: ++* Clone the repository using Git: ++ ```powershell + git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git + ``` ++* [Download the sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip) ++If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters. ++## Install project dependencies ++1. Open a terminal window in the root directory of the sample project, and enter the following snippet to navigate to the project folder: ++ ```powershell + cd 1-Authentication\2-sign-in-angular\SPA + ``` ++1. Install the project dependencies: ++ ```powershell + npm install + ``` ++## Configure the sample SPA ++1. Open `SPA\src\authConfig.js` and replace the following with the values obtained from the Microsoft Entra admin center + * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application. + * `authority` - The identity provider instance and sign-in audience for the app. Replace `Enter_the_Tenant_Name_Here` with the name of your CIAM tenant. + * The *Tenant ID* is the identifier of the tenant where the application is registered. Replace the `_Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application. +1. Save the file. 
++## Run your project and sign in ++All the required code snippets have been added, so the application can now be called and tested in a web browser. ++1. Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following commands to start your web server. ++ ```powershell + cd 1-Authentication\2-sign-in-angular\SPA + npm start + ``` ++1. Open a web browser and navigate to `http://localhost:4200/`. ++1. Sign in with an account registered to the Azure AD for customers tenant. ++1. Once you successfully sign in, the display name is shown next to the **Sign out** button. ++## Next steps ++Learn how to use the Microsoft Authentication Library (MSAL) for JavaScript to sign in users and acquire tokens to call Microsoft Graph. |
active-directory | Sample Single Page App React Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-react-sign-in.md | + + Title: Sign in users in a sample React single-page application +description: Learn how to configure a sample React single-page app (SPA) to sign in and sign out users. +++++++ Last updated : 06/23/2023++#Customer intent: As a dev, devops, I want to learn about how to configure a sample React single-page app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant +++# Sign in users in a sample React single-page app (SPA) ++This guide uses a sample React single-page application (SPA) to demonstrate how to add authentication to a SPA. This SPA enables users to sign in and sign out by using you Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication. ++## Prerequisites +* Although any IDE that supports React applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. +* [Node.js](https://nodejs.org/en/download/). +* Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. +++## Register the SPA in the Microsoft Entra admin center +++## Grant API permissions +++## Create a user flow +++## Associate the SPA with the user flow +++## Clone or download sample SPA ++To get the sample SPA, you can choose one of the following options: ++* Clone the repository using Git: ++ ```powershell + git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git + ``` ++* [Download the sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip) ++If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters. ++## Install project dependencies ++1. Open a terminal window in the root directory of the sample project, and enter the following snippet to navigate to the project folder: ++ ```powershell + cd 1-Authentication\1-sign-in-react\SPA + ``` ++1. Install the project dependencies: ++ ```powershell + npm install + ``` ++## Configure the sample SPA ++1. Open _SPA\src\authConfig.js_ and replace the following with the values obtained from the Microsoft Entra admin center + * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application. + * `authority` - The identity provider instance and sign-in audience for the app. Replace `Enter_the_Tenant_Name_Here` with the name of your Azure AD customer tenant. + * The *Tenant ID* is the identifier of the tenant where the application is registered. Replace the `_Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application. +1. Save the file. ++## Run your project and sign in +All the required code snippets have been added, so the application can now be called and tested in a web browser. ++1. 
Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following commands to start your web server. ++ ```powershell + cd 1-Authentication\1-sign-in-react\SPA + npm start + ``` ++1. Open a web browser and navigate to `http://localhost:3000/`. ++1. Sign in with an account registered to the Azure AD customer tenant. ++1. Once signed in, the display name is shown next to the **Sign out** button. ++## Next steps +> [!div class="nextstepaction"] +> [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
active-directory | Sample Web App Node Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-web-app-node-sign-in.md | When users select the **Sign in** link, the app initiates an authentication requ When users select the **Sign out** link, the app clears its session, then redirects the user to the Azure AD for customers sign-out endpoint to notify it that the user has signed out. -If you want to build an app similar to the sample you've run, complete the steps in [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md) article. +If you want to build an app similar to the sample you've run, complete the steps in [Sign in users in your own Node.js web application](tutorial-web-app-node-sign-in-prepare-tenant.md) article. ## Next steps You may want to: - [Configure sign-in with Google](how-to-google-federation-customers.md) -- [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md)+- [Sign in users in your Node.js web application](tutorial-web-app-node-sign-in-prepare-tenant.md) |
active-directory | Samples Ciam All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md | These samples and how-to guides demonstrate how to write a web application that > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - | -> | JavaScript, Node.js (Express) | • [Sign in users](how-to-web-app-node-sample-sign-in.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sample-sign-in-call-api.md) | • [Sign in users](how-to-web-app-node-sign-in-overview.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sign-in-call-api-overview.md) | +> | JavaScript, Node.js (Express) | • [Sign in users](how-to-web-app-node-sample-sign-in.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sample-sign-in-call-api.md) | • [Sign in users](tutorial-web-app-node-sign-in-prepare-tenant.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sign-in-call-api-overview.md) | > | ASP.NET Core | • [Sign in users](how-to-web-app-dotnet-sample-sign-in.md) | • [Sign in users](how-to-web-app-dotnet-sign-in-prepare-tenant.md) | ### Web API These samples and how-to guides demonstrate how to protect a web API with the Mi > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - |-> | ASP.NET Core | | • [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) | +> | ASP.NET Core | | • [Secure an ASP.NET web API](tutorial-protect-web-api-dotnet-core-build-app.md) | ### Browserless These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - | -> | Web API| | • [Secure an ASP.NET web API](how-to-protect-web-api-dotnet-core-overview.md) | +> | Web API| | • [Secure an ASP.NET web API](tutorial-protect-web-api-dotnet-core-build-app.md) | > | Web app | • [Sign in users](how-to-web-app-dotnet-sample-sign-in.md) | • [Sign in users](how-to-web-app-dotnet-sign-in-prepare-tenant.md) | ### .NET (MAUI) These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |-> | Web app |• [Sign in users](how-to-web-app-node-sample-sign-in.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sample-sign-in-call-api.md) | • [Sign in users](how-to-web-app-node-sign-in-overview.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sign-in-call-api-overview.md) | +> | Web app |• [Sign in users](how-to-web-app-node-sample-sign-in.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sample-sign-in-call-api.md) | • [Sign in users](tutorial-web-app-node-sign-in-prepare-tenant.md)<br/> • [Sign in users and call an API](how-to-web-app-node-sign-in-call-api-overview.md) | ### JavaScript, Electron These samples and how-to guides demonstrate how to write a daemon application th > | - | -- | - | > | Desktop | • [Sign in users](how-to-desktop-app-electron-sample-sign-in.md) | | -+ |
active-directory | Tutorial Browserless App Dotnet Sign In Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-build-app.md | |
active-directory | Tutorial Browserless App Dotnet Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md | In this tutorial, you learn how to: ## Prerequisites -- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. --- [Visual Studio 2022](https://code.visualstudio.com/download) or another code editor.--- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl).+Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). ## Register the browserless app |
active-directory | Tutorial Daemon Dotnet Call Api Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-dotnet-call-api-build-app.md | Before continuing with this tutorial, ensure you have all of the following items - The secret value for the daemon app you created. - The Application (client) ID of the web API app you registered. -- A protected *ToDoList* web API that is running and ready to accept requests. If you haven't created one, see the [create a protected web API tutorial](how-to-protect-web-api-dotnet-core-overview.md). Ensure this web API is using the app registration details you created in the [prepare app tutorial](tutorial-daemon-dotnet-call-api-prepare-tenant.md).+- A protected *ToDoList* web API that is running and ready to accept requests. If you haven't created one, see the [create a protected web API tutorial](./tutorial-protect-web-api-dotnet-core-build-app.md). Ensure this web API is using the app registration details you created in the [prepare tenant tutorial](tutorial-daemon-dotnet-call-api-prepare-tenant.md). - The base url and port on which the web API is running. For example, 44351. Ensure the API exposes the following endpoints via https:- - `GET /api/todolist` to get all todos. - `POST /api/todolist` to add a todo.+ - [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. - [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. |
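Before continuing with the daemon itself, it can help to confirm that the protected API listed in these prerequisites is actually reachable on the port you noted. The following is a quick, hedged sanity check (not part of the tutorial code) that calls the to-do list endpoint without a token and expects a 401; it assumes the API is listening on `https://localhost:44351`.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

internal static class ApiSmokeTest
{
    public static async Task Main()
    {
        using var client = new HttpClient();

        // No bearer token is attached, so the protected endpoint should reject the call.
        HttpResponseMessage response =
            await client.GetAsync("https://localhost:44351/api/todolist");

        Console.WriteLine(response.StatusCode == HttpStatusCode.Unauthorized
            ? "The API is running and rejecting anonymous requests as expected."
            : $"Unexpected status code: {response.StatusCode}");
    }
}
```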
active-directory | Tutorial Daemon Dotnet Call Api Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-dotnet-call-api-prepare-tenant.md | Title: "Tutorial: Register and configure .NET daemon app authentication details in a customer tenant" + Title: "Tutorial: Prepare your customer tenant to authorize a .NET daemon application" description: Learn about how to prepare your Azure Active Directory (Azure AD) for customers tenant to acquire an access token using client credentials flow in your .NET daemon application -# Tutorial: Register and configure .NET daemon app authentication details in a customer tenant +# Tutorial: Prepare your customer tenant to authorize a .NET daemon application The first step in securing your applications is to register them. In this tutorial, you prepare your Azure Active Directory (Azure AD) for customers tenant for authorization. This tutorial is part of a series that guides you to develop a .NET daemon app that calls your own custom protected web API using Azure AD for customers. In this tutorial, you learn how to: > - Register a client daemon application and grant it app permissions in the Microsoft Entra admin center > - Create a client secret for your daemon application in the Microsoft Entra admin center. +## Prerequisites ++Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). + ## 1. Register a web API application [!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/register-api-app.md)] In this tutorial, you learn how to: [!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/grant-api-permissions-app-permissions.md)] -## 6. Pick your registration details +## 6. Record your app registration details The next step after this tutorial is to build a daemon app that calls your web API. Ensure you have the following details: - The Application (client) ID of the client daemon app that you registered.-- The Directory (tenant) subdomain where you registered your daemon app.-- The secret value for the daemon app you created.+- The Directory (tenant) subdomain where you registered your daemon app. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +- The application secret value for the daemon app you created. - The Application (client) ID of the web API app you registered. ## Next steps The next step after this tutorial is to build a daemon app that calls your web A In the next tutorial, you configure your daemon and web API applications. > [!div class="nextstepaction"]-> [Prepare your daemon application >](tutorial-daemon-dotnet-call-api-build-app.md) +> [Build your daemon application >](tutorial-daemon-dotnet-call-api-build-app.md) |
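As a preview of where these recorded values end up, the following is a minimal sketch of the client credentials flow that the next tutorial builds out. The client ID, client secret, tenant subdomain, and the web API's application ID (used to form the `/.default` scope) are all placeholders for the values you recorded above.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

internal static class DaemonTokenSketch
{
    public static async Task Main()
    {
        // Placeholders for the registration details recorded in this tutorial.
        const string clientId = "<daemon-app-client-id>";
        const string clientSecret = "<daemon-app-client-secret>";
        const string tenantSubdomain = "<tenant-subdomain>";
        string[] scopes = { "api://<web-api-application-id>/.default" };

        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create(clientId)
            .WithClientSecret(clientSecret)
            .WithAuthority($"https://{tenantSubdomain}.ciamlogin.com/")
            .Build();

        // App-only token: the daemon authenticates as itself, no user is signed in.
        AuthenticationResult result = await app.AcquireTokenForClient(scopes).ExecuteAsync();
        Console.WriteLine($"Acquired an app-only token that expires on {result.ExpiresOn}.");
    }
}
```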
active-directory | Tutorial Daemon Node Call Api Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-node-call-api-build-app.md | In this tutorial, you'll: - [Node.js](https://nodejs.org). - [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. - Registration details for the Node.js daemon app and web API you created in the [prepare tenant tutorial](tutorial-daemon-node-call-api-prepare-tenant.md).-- A protected web API that is running and ready to accept requests. If you haven't created one, see the [create a protected web API tutorial](how-to-protect-web-api-dotnet-core-overview.md). Ensure this web API is using the app registration details you created in the [prepare tenant tutorial](tutorial-daemon-node-call-api-prepare-tenant.md). Make sure your web API exposes the following endpoints via https:+- A protected web API that is running and ready to accept requests. If you haven't created one, see the [create a protected web API tutorial](./tutorial-protect-web-api-dotnet-core-build-app.md). Ensure this web API is using the app registration details you created in the [prepare tenant tutorial](tutorial-daemon-node-call-api-prepare-tenant.md). Make sure your web API exposes the following endpoints via https: - `GET /api/todolist` to get all todos. - `POST /api/todolist` to add a todo. Create a folder to host your Node.js daemon application, such as `ciam-call-api- 1. In your terminal, change directory into your Node daemon app folder, such as `cd ciam-call-api-node-daemon`, then run `npm init -y`. This command creates a default `package.json` file for your Node.js project. -1. Create more folders and files to achieve the following project structure: +1. Create additional folders and files to achieve the following project structure: ``` ciam-call-api-node-daemon/ |
active-directory | Tutorial Daemon Node Call Api Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-node-call-api-prepare-tenant.md | If you've already registered a client daemon application and a web API in the Mi [!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/grant-api-permissions-app-permissions.md)] -## Record your app registration details +## Collect your app registration details In the next step, you prepare your daemon application. Make sure you have the following details: |
active-directory | Tutorial Desktop Wpf Dotnet Sign In Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-wpf-dotnet-sign-in-build-app.md | + + Title: "Tutorial: Authenticate users to your WPF desktop application" +description: Learn how to sign in and sign out users in your WPF desktop app. ++++++++ Last updated : 07/26/2023+++# Tutorial: Authenticate users to your WPF desktop application ++In this tutorial, you build your Windows Presentation Foundation (WPF) desktop app and sign in and sign out a user using Azure Active Directory (Azure AD) for customers. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Configure a WPF desktop app to use its app registration details. +> - Build a desktop app that signs in a user and acquires a token on behalf of the user. ++## Prerequisites ++- Registration details for the WPF desktop app you created in the [prepare tenant tutorial](./tutorial-desktop-wpf-dotnet-sign-in-prepare-tenant.md). You need the following details: + - The Application (client) ID of the WPF desktop app that you registered. + - The Directory (tenant) subdomain where you registered your WPF desktop app. +- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. +- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++## Create a WPF desktop application ++1. Open your terminal and navigate to the folder where you want your project to live. +1. Initialize a WPF desktop app and navigate to its root folder. ++ ```dotnetcli + dotnet new wpf --language "C#" --name sign-in-dotnet-wpf + cd sign-in-dotnet-wpf + ``` ++## Install packages ++Install the configuration providers that help your app read configuration data from key-value pairs in the app settings file. These configuration abstractions provide the ability to bind configuration values to instances of .NET objects. ++```dotnetcli +dotnet add package Microsoft.Extensions.Configuration +dotnet add package Microsoft.Extensions.Configuration.Json +dotnet add package Microsoft.Extensions.Configuration.Binder +``` ++Install the Microsoft Authentication Library (MSAL) that contains all the key components that you need to acquire a token. You also install the MSAL broker library that handles interactions with desktop authentication brokers. ++```dotnetcli +dotnet add package Microsoft.Identity.Client +dotnet add package Microsoft.Identity.Client.Broker +``` ++## Create appsettings.json file and add registration configs ++1. Create an *appsettings.json* file in the root folder of the app. +1. Add app registration details to the *appsettings.json* file. ++ ```json + { + "AzureAd": { + "Authority": "https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/", + "ClientId": "<Enter_the_Application_Id_Here>" + } + } + ``` ++ - Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. + - Replace `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier. ++1. After creating the app settings file, create another file called *AzureAdConfig.cs* in the root folder of the app. This file helps you read the configuration values from the app settings file. +1. In the *AzureAdConfig.cs* file, define the getters and setters for the `ClientId` and `Authority` properties. 
Add the following code: ++ ```csharp + namespace sign_in_dotnet_wpf + { + public class AzureAdConfig + { + public string Authority { get; set; } + public string ClientId { get; set; } + } + } + ``` ++## Modify the project file ++1. Navigate to the *sign-in-dotnet-wpf.csproj* file in the root folder of the app. +1. In this file, take the following two steps: + + 1. Modify the *sign-in-dotnet-wpf.csproj* file to instruct your app to copy the *appsettings.json* file to the output directory when the project is compiled. Add the following piece of code to the *sign-in-dotnet-wpf.csproj* file: + 1. Set the target framework to target *windows10.0.19041.0* build to help with reading cached token from the token cache as you'll see in the token cache helper class. ++ ```xml + <Project Sdk="Microsoft.NET.Sdk"> ++ ... ++ <!-- Set target framework to target windows10.0.19041.0 build --> + <PropertyGroup> + <OutputType>WinExe</OutputType> + <TargetFramework>net7.0-windows10.0.19041.0</TargetFramework> <!-- target framework --> + <RootNamespace>sign_in_dotnet_wpf</RootNamespace> + <Nullable>enable</Nullable> + <UseWPF>true</UseWPF> + </PropertyGroup> ++ <!-- Copy appsettings.json file to output folder. --> + <ItemGroup> + <None Remove="appsettings.json" /> + </ItemGroup> + + <ItemGroup> + <EmbeddedResource Include="appsettings.json"> + <CopyToOutputDirectory>Always</CopyToOutputDirectory> + </EmbeddedResource> + </ItemGroup> + </Project> + ``` ++## Create a token cache helper class ++Create a token cache helper class that initializes a token cache. The application attempts to read the token from the cache before it attempts to acquire a new token. If the token isn't found in the cache, the application acquires a new token. Upon signing-out, the cache is cleared of all accounts and all corresponding access tokens. ++1. Create a *TokenCacheHelper.cs* file in the root folder of the app. +1. Open the *TokenCacheHelper.cs* file. Add the packages and namespaces to the file. In the following steps, you populate this file with the code logic by adding the relevant logic to the `TokenCacheHelper` class. ++ ```csharp + using System.IO; + using System.Security.Cryptography; + using Microsoft.Identity.Client; ++ namespace sign_in_dotnet_wpf + { + static class TokenCacheHelper{} + } + ``` ++1. Add constructor to the `TokenCacheHelper` class that defines the cache file path. For packaged desktop apps (MSIX packages, also called desktop bridge) the executing assembly folder is read-only. In that case we need to use `Windows.Storage.ApplicationData.Current.LocalCacheFolder.Path + "\msalcache.bin"` that is a per-app read/write folder for packaged apps. ++ ```csharp + namespace sign_in_dotnet_wpf + { + static class TokenCacheHelper + { + static TokenCacheHelper() + { + try + { + CacheFilePath = Path.Combine(Windows.Storage.ApplicationData.Current.LocalCacheFolder.Path, ".msalcache.bin3"); + } + catch (System.InvalidOperationException) + { + CacheFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location + ".msalcache.bin3"; + } + } + public static string CacheFilePath { get; private set; } + private static readonly object FileLock = new object(); + } + } + + ``` ++1. Add code to handle token cache serialization. The `ITokenCache` interface implements the public access to cache operations. 
`ITokenCache` interface contains the methods to subscribe to the cache serialization events, while the interface `ITokenCacheSerializer` exposes the methods that you need to use in the cache serialization events, in order to serialize/deserialize the cache. `TokenCacheNotificationArgs` contains parameters used by`Microsoft.Identity.Client` (MSAL) call accessing the cache. `ITokenCacheSerializer` interface is available in `TokenCacheNotificationArgs` callback. ++ Add the following code to the `TokenCacheHelper` class: ++ ```csharp + static class TokenCacheHelper + { + static TokenCacheHelper() + {...} + public static string CacheFilePath { get; private set; } + private static readonly object FileLock = new object(); ++ public static void BeforeAccessNotification(TokenCacheNotificationArgs args) + { + lock (FileLock) + { + args.TokenCache.DeserializeMsalV3(File.Exists(CacheFilePath) + ? ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath), + null, + DataProtectionScope.CurrentUser) + : null); + } + } ++ public static void AfterAccessNotification(TokenCacheNotificationArgs args) + { + if (args.HasStateChanged) + { + lock (FileLock) + { + File.WriteAllBytes(CacheFilePath, + ProtectedData.Protect(args.TokenCache.SerializeMsalV3(), + null, + DataProtectionScope.CurrentUser) + ); + } + } + } + } ++ internal static void EnableSerialization(ITokenCache tokenCache) + { + tokenCache.SetBeforeAccess(BeforeAccessNotification); + tokenCache.SetAfterAccess(AfterAccessNotification); + } + ``` + + In the `BeforeAccessNotification` method, you read the cache from the file system, and if the cache isn't empty, you deserialize it and load it. The `AfterAccessNotification` method is called after `Microsoft.Identity.Client` (MSAL) accesses the cache. If the cache has changed, you serialize it and persist the changes to the cache. ++ The `EnableSerialization` contains the `ITokenCache.SetBeforeAccess()` and `ITokenCache.SetAfterAccess()` methods: + + - `ITokenCache.SetBeforeAccess()` sets a delegate to be notified before any library method accesses the cache. This gives an option to the delegate to deserialize a cache entry for the application and accounts specified in the `TokenCacheNotificationArgs`. + - `ITokenCache.SetAfterAccess()` sets a delegate to be notified after any library method accesses the cache. This gives an option to the delegate to serialize a cache entry for the application and accounts specified in the `TokenCacheNotificationArgs`. ++## Create the WPF desktop app UI ++Modify the *MainWindow.xaml* file to add the UI elements for the app. Open the *MainWindow.xaml* file in the root folder of the app and add the following piece of code with the `<Grid></Grid>` control section. 
++```xaml + <StackPanel Background="Azure"> + <StackPanel Orientation="Horizontal" HorizontalAlignment="Right"> + <Button x:Name="SignInButton" Content="Sign-In" HorizontalAlignment="Right" Padding="5" Click="SignInButton_Click" Margin="5" FontFamily="Segoe Ui"/> + <Button x:Name="SignOutButton" Content="Sign-Out" HorizontalAlignment="Right" Padding="5" Click="SignOutButton_Click" Margin="5" Visibility="Collapsed" FontFamily="Segoe Ui"/> + </StackPanel> + <Label Content="Authentication Result" Margin="0,0,0,-5" FontFamily="Segoe Ui" /> + <TextBox x:Name="ResultText" TextWrapping="Wrap" MinHeight="120" Margin="5" FontFamily="Segoe Ui"/> + <Label Content="Token Info" Margin="0,0,0,-5" FontFamily="Segoe Ui" /> + <TextBox x:Name="TokenInfoText" TextWrapping="Wrap" MinHeight="70" Margin="5" FontFamily="Segoe Ui"/> + </StackPanel> +``` ++This code adds key UI elements. The methods and objects that handle the functionality of the UI elements are defined in the *MainWindow.xaml.cs* file that we create in the next step. ++- A button that signs in the user. `SignInButton_Click` method is called when the user selects this button. +- A button that signs out the user. `SignOutButton_Click` method is called when the user selects this button. +- A text box that displays the authentication result details after the user attempts to sign in. Information displayed here's returned by the `ResultText` object. +- A text box that displays the token details after the user successfully signs in. Information displayed here's returned by the `TokenInfoText` object. ++## Add code to the MainWindow.xaml.cs file ++The *MainWindow.xaml.cs* file contains the code that provides th runtime logic for the behavior of the UI elements in the *MainWindow.xaml* file. ++1. Open the *MainWindow.xaml.cs* file in the root folder of the app. +1. Add the following code in the file to import the packages, and define placeholders for the methods we create. ++ ```csharp + using Microsoft.Identity.Client; + using System; + using System.Linq; + using System.Windows; + using System.Windows.Interop; + + namespace sign_in_dotnet_wpf + { + public partial class MainWindow : Window + { + string[] scopes = new string[] { }; + + public MainWindow() + { + InitializeComponent(); + } ++ private async void SignInButton_Click(object sender, RoutedEventArgs e){...} ++ private async void SignOutButton_Click(object sender, RoutedEventArgs e){...} ++ private void DisplayBasicTokenInfo(AuthenticationResult authResult){...} + } + } + ``` ++1. Add the following code to the `SignInButton_Click` method. This method is called when the user selects the **Sign-In** button. 
++ ```csharp + private async void SignInButton_Click(object sender, RoutedEventArgs e) + { + AuthenticationResult authResult = null; + var app = App.PublicClientApp; ++ ResultText.Text = string.Empty; + TokenInfoText.Text = string.Empty; ++ IAccount firstAccount; + + var accounts = await app.GetAccountsAsync(); + firstAccount = accounts.FirstOrDefault(); ++ try + { + authResult = await app.AcquireTokenSilent(scopes, firstAccount) + .ExecuteAsync(); + } + catch (MsalUiRequiredException ex) + { + try + { + authResult = await app.AcquireTokenInteractive(scopes) + .WithAccount(firstAccount) + .WithParentActivityOrWindow(new WindowInteropHelper(this).Handle) + .WithPrompt(Prompt.SelectAccount) + .ExecuteAsync(); + } + catch (MsalException msalex) + { + ResultText.Text = $"Error Acquiring Token:{System.Environment.NewLine}{msalex}"; + } + catch (Exception ex) + { + ResultText.Text = $"Error Acquiring Token Silently:{System.Environment.NewLine}{ex}"; + return; + } ++ if (authResult != null) + { + ResultText.Text = "Sign in was successful."; + DisplayBasicTokenInfo(authResult); + this.SignInButton.Visibility = Visibility.Collapsed; + this.SignOutButton.Visibility = Visibility.Visible; + } + } + } + ``` ++ `GetAccountsAsync()` returns all the available accounts in the user token cache for the app. The `IAccount` interface represents information about a single account. ++ To acquire tokens, the app attempts to acquire the token silently using the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. The `AcquireTokenSilent` method may fail, for example, because the user signed out. When MSAL detects that the issue can be resolved by requiring an interactive action, it throws an `MsalUiRequiredException` exception. This exception causes the app to acquire a token interactively. ++ Calling the `AcquireTokenInteractive` method results in a window that prompts users to sign in. Apps usually require users to sign in interactively the first time they need to authenticate. They might also need to sign in when a silent operation to acquire a token. After `AcquireTokenInteractive` is executed for the first time, `AcquireTokenSilent` becomes the usual method to use to obtain tokens + +1. Add the following code to the `SignOutButton_Click` method. This method is called when the user selects the **Sign-Out** button. ++ ```csharp + private async void SignOutButton_Click(object sender, RoutedEventArgs e) + { + var accounts = await App.PublicClientApp.GetAccountsAsync(); + if (accounts.Any()) + { + try + { + await App.PublicClientApp.RemoveAsync(accounts.FirstOrDefault()); + this.ResultText.Text = "User has signed-out"; + this.TokenInfoText.Text = string.Empty; + this.SignInButton.Visibility = Visibility.Visible; + this.SignOutButton.Visibility = Visibility.Collapsed; + } + catch (MsalException ex) + { + ResultText.Text = $"Error signing-out user: {ex.Message}"; + } + } + } + ``` + + The `SignOutButton_Click` method clears the cache of all accounts and all corresponding access tokens. The next time the user attempts to sign in, they'll have to do so interactively. ++1. Add the following code to the `DisplayBasicTokenInfo` method. This method displays basic information about the token. 
++ ```csharp + private void DisplayBasicTokenInfo(AuthenticationResult authResult) + { + TokenInfoText.Text = ""; + if (authResult != null) + { + TokenInfoText.Text += $"Username: {authResult.Account.Username}" + Environment.NewLine; + TokenInfoText.Text += $"{authResult.Account.HomeAccountId}" + Environment.NewLine; + } + } + ``` ++## Add code to the App.xaml.cs file ++*App.xaml* is where you declare resources that are used across the app. It's the entry point for your app. *App.xaml.cs* is the code behind file for *App.xaml*. *App.xaml.cs* also defines the start window for your application. ++Open the *App.xaml.cs* file in the root folder of the app, then add the following code into it. ++```csharp +using System.Windows; +using System.Reflection; +using Microsoft.Identity.Client; +using Microsoft.Identity.Client.Broker; +using Microsoft.Extensions.Configuration; +using Microsoft.Extensions.Configuration.Json; ++namespace sign_in_dotnet_wpf +{ + public partial class App : Application + { + static App() + { + CreateApplication(); + } ++ public static void CreateApplication() + { + var assembly = Assembly.GetExecutingAssembly(); + using var stream = assembly.GetManifestResourceStream("sign_in_dotnet_wpf.appsettings.json"); + AppConfiguration = new ConfigurationBuilder() + .AddJsonStream(stream) + .Build(); ++ AzureAdConfig azureADConfig = AppConfiguration.GetSection("AzureAd").Get<AzureAdConfig>(); ++ var builder = PublicClientApplicationBuilder.Create(azureADConfig.ClientId) + .WithAuthority(azureADConfig.Authority) + .WithDefaultRedirectUri(); ++ _clientApp = builder.Build(); + TokenCacheHelper.EnableSerialization(_clientApp.UserTokenCache); + } + + private static IPublicClientApplication _clientApp; + private static IConfiguration AppConfiguration; + public static IPublicClientApplication PublicClientApp { get { return _clientApp; } } + } +} +``` ++In this step, you load the *appsettings.json* file. The configuration builder helps you read the app configs defined in the *appsettings.json* file. You also define the WPF app as a public client app since it's a desktop app. The `TokenCacheHelper.EnableSerialization` method enables the token cache serialization. ++## Run the app ++Run your app and sign in to test the application ++1. In your terminal, navigate to the root folder of your WPF app and run the app by running the command `dotnet run` in your terminal. +1. After you launch the sample, you should see a window with a **Sign-In** button. Select the **Sign-In** button. ++ :::image type="content" source="./media/tutorial-desktop-wpf-dotnet-sign-in-build-app/wpf-sign-in-screen.png" alt-text="Screenshot of sign-in screen for a WPF desktop application."::: ++1. On the sign-in page, enter your account email address. If you don't have an account, select **No account? Create one**, which starts the sign-up flow. Follow through this flow to create a new account and sign in. +1. Once you sign in, you see a screen displaying successful sign-in and basic information about your user account stored in the retrieved token. 
++ :::image type="content" source="./media/tutorial-desktop-wpf-dotnet-sign-in-build-app/wpf-successful-sign-in.png" alt-text="Screenshot of successful sign-in for desktop WPF app."::: ++## See also ++- [Sign in users in a sample Electron desktop application by using Azure AD for customers](./how-to-desktop-app-electron-sample-sign-in.md) +- [Sign in users in a sample .NET MAUI desktop application by using Azure AD for customers](./how-to-desktop-app-maui-sample-sign-in.md) +- [Customize branding for your sign-in experience](./how-to-customize-branding-customers.md) |
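One loose end worth noting: this tutorial installs the *Microsoft.Identity.Client.Broker* package, but the `PublicClientApplicationBuilder` configuration shown above never opts into the Windows broker (WAM). If you want the broker-based sign-in experience, the following hedged sketch shows the kind of change typically made inside `App.CreateApplication()`; `BrokerOptions` comes from the broker package, the other names (`azureADConfig`, `_clientApp`, `TokenCacheHelper`) are the ones already defined in the tutorial, and broker scenarios usually have their own redirect URI requirements that you should verify against the MSAL.NET documentation.

```csharp
using Microsoft.Identity.Client;
using Microsoft.Identity.Client.Broker;

// Inside App.CreateApplication(), extend the existing builder before calling Build().
var builder = PublicClientApplicationBuilder.Create(azureADConfig.ClientId)
    .WithAuthority(azureADConfig.Authority)
    .WithDefaultRedirectUri()
    // Opt into the Windows Web Account Manager (WAM) broker.
    // Note: the app registration may need the broker redirect URI configured.
    .WithBroker(new BrokerOptions(BrokerOptions.OperatingSystems.Windows));

_clientApp = builder.Build();
TokenCacheHelper.EnableSerialization(_clientApp.UserTokenCache);
```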
active-directory | Tutorial Desktop Wpf Dotnet Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-wpf-dotnet-sign-in-prepare-tenant.md | + + Title: "Tutorial: Prepare your customer tenant to sign in users in a .NET WPF application" +description: Learn how to prepare your Azure Active Directory (Azure AD) for customers tenant to sign in users in your .NET WPF application ++++++++ Last updated : 07/26/2023+++# Tutorial: Prepare your customer tenant to sign in users in a .NET WPF application ++The first step in securing your applications is to register them. In this tutorial, you prepare your Azure Active Directory (Azure AD) for customers tenant for authentication. This tutorial is part of a series that guides you through adding authentication to a .NET Windows Presentation Foundation (WPF) app that signs in and signs out users using Azure AD for customers. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Register a WPF desktop application in the Microsoft Entra admin center +> - Create a sign-in and sign-out user flow in your customer tenant. +> - Associate your WPF desktop app with the user flow. ++## Prerequisites ++- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). + +## Register the desktop app +++## Specify your app platform +++## Grant API permissions ++Since this app signs in users, add delegated permissions: +++## Create a user flow +++## Associate the WPF application with the user flow +++## Record your registration details ++The next step after this tutorial is to build a WPF desktop app that authenticates users. Ensure you have the following details: ++- The Application (client) ID of the WPF desktop app that you registered. +- The Directory (tenant) subdomain where you registered your WPF desktop app. ++## Next steps ++In the next tutorial, you configure your WPF desktop app. ++> [!div class="nextstepaction"] +> [Build your WPF desktop app >](./tutorial-desktop-wpf-dotnet-sign-in-build-app.md) |
active-directory | Tutorial Protect Web Api Dotnet Core Build App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-protect-web-api-dotnet-core-build-app.md | + + Title: "Tutorial: Secure an ASP.NET web API registered in the Azure AD for customer's tenant" +description: Learn how to secure a ASP.NET web API registered in the Azure AD for customer's tenant +++++++++ Last updated : 07/27/2023+++#Customer intent: As a dev, I want to secure my ASP.NET Core web API registered in the Azure AD customer's tenant. +++# Tutorial: Secure an ASP.NET web API registered in the Azure AD for customer's tenant ++Web APIs may contain information that requires user authentication and authorization. Applications can use delegated access, acting on behalf of a signed-in user, or app-only access, acting only as the application's own identity when calling protected web APIs. ++In this tutorial, we build a web API that publishes both delegated permissions (scopes) and application permissions (app roles). Client apps such as web apps that acquire tokens on behalf of a signed-in user use the delegated permissions. Client apps such as daemon apps that acquire tokens for themselves use the application permissions. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> Configure your web API tp use it's app registration details +> Configure your web API to use delegated and application permissions registered in the app registration +> Protect your web API endpoints ++## Prerequisites ++- An API registration that exposes at least one scope (delegated permissions) and one app role (application permission) such as *ToDoList.Read*. If you haven't already, [register an API in the Microsoft Entra admin center](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) by following the registration steps. Ensure you have the following: + - Application (client) ID of the Web API + - Directory (tenant) ID of the Web API is registered + - Directory (tenant) subdomain of where the Web API is registered. For example, if your [primary domain](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details) is *contoso.onmicrosoft.com*, your Directory (tenant) subdomain is *contoso*. + - *ToDoList.Read* and *ToDoList.ReadWrite* as the [delegated permissions (scopes) exposed by the Web API](./how-to-register-ciam-app.md?tabs=webapi&preserve-view=true#expose-permissions). + - *ToDoList.Read.All* and *ToDoList.ReadWrite.All* as the [application permissions (app roles) exposed by the Web API](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true#add-app-roles). ++- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later. +- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++## Create an ASP.NET Core web API ++1. Open your terminal, then navigate to the folder where you want your project to live. +1. Run the following commands: ++ ```dotnetcli + dotnet new webapi -o ToDoListAPI + cd ToDoListAPI + ``` ++1. When a dialog box asks if you want to add required assets to the project, select **Yes**. ++## Install packages ++Install the following packages: ++- `Microsoft.EntityFrameworkCore.InMemory` that allows Entity Framework Core to be used with an in-memory database. It's not designed for production use. +- `Microsoft.Identity.Web` simplifies adding authentication and authorization support to web apps and web APIs integrating with the Microsoft identity platform. 
++```dotnetcli +dotnet add package Microsoft.EntityFrameworkCore.InMemory +dotnet add package Microsoft.Identity.Web +``` ++## Configure app registration details ++Open the *appsettings.json* file in your app folder and add in the app registration details you recorded after registering your web API. ++```json +{ + "AzureAd": { + "Instance": "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", + "TenantId": "Enter_the_Tenant_Id_Here", + "ClientId": "Enter_the_Application_Id_Here", + }, + "Logging": {...}, + "AllowedHosts": "*" +} +``` ++Replace the following placeholders as shown: ++- Replace `Enter_the_Application_Id_Here` with your application (client) ID. +- Replace `Enter_the_Tenant_Id_Here` with your Directory (tenant) ID. +- Replace `Enter_the_Tenant_Subdomain_Here` with your Directory (tenant) subdomain. ++## Add app role and scope ++All APIs must publish a minimum of one scope, also called delegated permission, for the client apps to obtain an access token for a user successfully. APIs should also publish a minimum of one app role for applications, also called application permission, for the client apps to obtain an access token as themselves, that is, when they aren't signing-in a user. ++We specify these permissions in the *appsettings.json* file. In this tutorial, we have registered four permissions. *ToDoList.ReadWrite* and *ToDoList.Read* as the delegated permissions, and *ToDoList.ReadWrite.All* and *ToDoList.Read.All* as the application permissions. ++```json +{ + "AzureAd": { + "Instance": "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", + "TenantId": "Enter_the_Tenant_Id_Here", + "ClientId": "Enter_the_Application_Id_Here", + "Scopes": { + "Read": ["ToDoList.Read", "ToDoList.ReadWrite"], + "Write": ["ToDoList.ReadWrite"] + }, + "AppPermissions": { + "Read": ["ToDoList.Read.All", "ToDoList.ReadWrite.All"], + "Write": ["ToDoList.ReadWrite.All"] + } + }, + "Logging": {...}, + "AllowedHosts": "*" +} +``` ++## Add authentication scheme ++An authentication scheme is named when the authentication service is configured during authentication. In this article, we use the JWT bearer authentication scheme. Add the following code in the *Programs.cs* file to add an authentication scheme. ++```csharp +// Add the following to your imports +using Microsoft.AspNetCore.Authentication.JwtBearer; +using Microsoft.Identity.Web; ++// Add authentication scheme +builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) + .AddMicrosoftIdentityWebApi(builder.Configuration); +``` ++## Create your models ++Create a folder called *Models* in the root folder of your project. Navigate to the folder and create a file called *ToDo.cs* then add the following code. This code creates a model called *ToDo*. ++```csharp +using System; ++namespace ToDoListAPI.Models; ++public class ToDo +{ + public int Id { get; set; } + public Guid Owner { get; set; } + public string Description { get; set; } = string.Empty; +} +``` ++## Add a database context ++The database context is the main class that coordinates Entity Framework functionality for a data model. This class is created by deriving from the *Microsoft.EntityFrameworkCore.DbContext* class. In this tutorial, we use an in-memory database for testing purposes. ++1. Create a folder called *DbContext* in the root folder of the project. +1. 
Navigate into that folder and create a file called *ToDoContext.cs* then add the following contents to that file: ++ ```csharp + using Microsoft.EntityFrameworkCore; + using ToDoListAPI.Models; + + namespace ToDoListAPI.Context; + + public class ToDoContext : DbContext + { + public ToDoContext(DbContextOptions<ToDoContext> options) : base(options) + { + } + + public DbSet<ToDo> ToDos { get; set; } + } + ``` ++1. Open the *Program.cs* file in the root folder of your app, then add the following code in the file. This code registers a `DbContext` subclass called `ToDoContext` as a scoped service in the ASP.NET Core application service provider (also known as, the dependency injection container). The context is configured to use the in-memory database. ++ ```csharp + // Add the following to your imports + using ToDoListAPI.Context; + using Microsoft.EntityFrameworkCore; + + builder.Services.AddDbContext<ToDoContext>(opt => + opt.UseInMemoryDatabase("ToDos")); + ``` ++## Add controllers ++In most cases, a controller would have more than one action. Typically *Create*, *Read*, *Update*, and *Delete* (CRUD) actions. In this tutorial, we create only two action items. A read all action item and a create action item to demonstrate how to protect your endpoints. ++1. Navigate to the *Controllers* folder in the root folder of your project. +1. Create a file called *ToDoListController.cs* inside this folder. Open the file then add the following boiler plate code: ++ ```csharp + using Microsoft.AspNetCore.Authorization; + using Microsoft.AspNetCore.Mvc; + using Microsoft.EntityFrameworkCore; + using Microsoft.Identity.Web; + using Microsoft.Identity.Web.Resource; + using ToDoListAPI.Models; + using ToDoListAPI.Context; + + namespace ToDoListAPI.Controllers; + + [Authorize] + [Route("api/[controller]")] + [ApiController] + public class ToDoListController : ControllerBase + { + private readonly ToDoContext _toDoContext; + + public ToDoListController(ToDoContext toDoContext) + { + _toDoContext = toDoContext; + } + + [HttpGet()] + [RequiredScopeOrAppPermission()] + public async Task<IActionResult> GetAsync(){...} + + [HttpPost] + [RequiredScopeOrAppPermission()] + public async Task<IActionResult> PostAsync([FromBody] ToDo toDo){...} + + private bool RequestCanAccessToDo(Guid userId){...} + + private Guid GetUserId(){...} + + private bool IsAppMakingRequest(){...} + } + ``` ++## Add code to the controller ++In this section, we add code to the placeholders we created. The focus here isn't on building the API, but rather protecting it. ++1. Import the necessary packages. The *Microsoft.Identity.Web* package is an MSAL wrapper that helps us easily handle authentication logic, for example, by handling token validation. To ensure that our endpoints require authorization, we use the inbuilt *Microsoft.AspNetCore.Authorization* package. ++1. Since we granted permissions for this API to be called either using delegated permissions on behalf of the user or application permissions where the client calls as itself and not on the user's behalf, it's important to know whether the call is being made by the app on its own behalf. The easiest way to do this is the claims to find whether the access token contains the `idtyp` optional claim. This `idtyp` claim is the easiest way for the API to determine whether a token is an app token or an app + user token. We recommend enabling the `idtyp` optional claim. 
++ If the `idtyp` claim isn't enabled, you can use the `roles` and `scp` claims to determine whether the access token is an app token or an app + user token. An access token issued by Azure AD has at least one of the two claims. Access tokens issued to a user have the `scp` claim. Access tokens issued to an application have the `roles` claim. Access tokens that contain both claims are issued only to users, where the `scp` claim designates the delegated permissions, while the `roles` claim designates the user's role. Access tokens that have neither aren't to be honored. ++ ```csharp + private bool IsAppMakingRequest() + { + if (HttpContext.User.Claims.Any(c => c.Type == "idtyp")) + { + return HttpContext.User.Claims.Any(c => c.Type == "idtyp" && c.Value == "app"); + } + else + { + return HttpContext.User.Claims.Any(c => c.Type == "roles") && !HttpContext.User.Claims.Any(c => c.Type == "scp"); + } + } + ``` ++1. Add a helper function that determines whether the request being made contains enough permissions to carry out the intended action. Check whether it's the app making the request on its own behalf or whether the app is making the call on behalf of a user who owns the given resource by validating the user ID. ++ ```csharp + private bool RequestCanAccessToDo(Guid userId) + { + return IsAppMakingRequest() || (userId == GetUserId()); + } ++ private Guid GetUserId() + { + Guid userId; + if (!Guid.TryParse(HttpContext.User.GetObjectId(), out userId)) + { + throw new Exception("User ID is not valid."); + } + return userId; + } + ``` ++1. Plug in your permission definitions to protect routes. Protect your API by adding the `[Authorize]` attribute to the controller class. This ensures the controller actions can be called only if the API is called with an authorized identity. The permission definitions define what kinds of permissions are needed to perform these actions. ++ ```csharp + [Authorize] + [Route("api/[controller]")] + [ApiController] + public class ToDoListController: ControllerBase{...} + ``` ++ Add permissions to the GET all endpoint and the POST endpoint. Do this using the *RequiredScopeOrAppPermission* method that is part of the *Microsoft.Identity.Web.Resource* namespace. You then pass scopes and permissions to this method via the *RequiredScopesConfigurationKey* and *RequiredAppPermissionsConfigurationKey* attributes. ++ ```csharp + [HttpGet] + [RequiredScopeOrAppPermission( + RequiredScopesConfigurationKey = "AzureAD:Scopes:Read", + RequiredAppPermissionsConfigurationKey = "AzureAD:AppPermissions:Read" + )] + public async Task<IActionResult> GetAsync() + { + var toDos = await _toDoContext.ToDos! + .Where(td => RequestCanAccessToDo(td.Owner)) + .ToListAsync(); ++ return Ok(toDos); + } ++ [HttpPost] + [RequiredScopeOrAppPermission( + RequiredScopesConfigurationKey = "AzureAD:Scopes:Write", + RequiredAppPermissionsConfigurationKey = "AzureAD:AppPermissions:Write" + )] + public async Task<IActionResult> PostAsync([FromBody] ToDo toDo) + { + // Only let applications with global to-do access set the user ID or to-do's + var ownerIdOfTodo = IsAppMakingRequest() ? toDo.Owner : GetUserId(); ++ var newToDo = new ToDo() + { + Owner = ownerIdOfTodo, + Description = toDo.Description + }; ++ await _toDoContext.ToDos!.AddAsync(newToDo); + await _toDoContext.SaveChangesAsync(); ++ return Created($"/todo/{newToDo!.Id}", newToDo); + } + ``` ++## Run your API ++Run your API to ensure that it's running well without any errors using the command `dotnet run`. 
If you intend to use https protocol even during testing, you need to [trust .NET's development certificate](/aspnet/core/tutorials/first-web-api#test-the-project). ++For a full example of this API code, see the [samples file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/2-Authorization/3-call-own-api-dotnet-core-daemon/ToDoListAPI). ++## Next steps ++> [!div class="nextstepaction"] +> [Test your protected web API >](./tutorial-protect-web-api-dotnet-core-test-api.md) |
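One detail the walkthrough above doesn't show explicitly is the middleware wiring in *Program.cs*: authentication and authorization must be added to the request pipeline for the `[Authorize]` and `RequiredScopeOrAppPermission` attributes to take effect. The following is a hedged sketch of how the file commonly ends up when starting from the default `webapi` template; adjust it to match your own project rather than treating it as the tutorial's exact file.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.EntityFrameworkCore;
using Microsoft.Identity.Web;
using ToDoListAPI.Context;

var builder = WebApplication.CreateBuilder(args);

// Validate incoming bearer tokens using the AzureAd section of appsettings.json.
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration);

// In-memory store used by the tutorial's ToDoListController.
builder.Services.AddDbContext<ToDoContext>(opt => opt.UseInMemoryDatabase("ToDos"));

builder.Services.AddControllers();

var app = builder.Build();

app.UseHttpsRedirection();

// Order matters: authenticate the caller before evaluating authorization policies.
app.UseAuthentication();
app.UseAuthorization();

app.MapControllers();
app.Run();
```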
active-directory | Tutorial Protect Web Api Dotnet Core Test Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-protect-web-api-dotnet-core-test-api.md | + + Title: Test a protected web API +description: Learn how to test a protected web API registered in an Azure AD for customers tenant +++++++++ Last updated : 07/27/2023++#Customer intent: As a dev, I want to learn how to test a protected web API registered in the Azure AD for customers tenant. +++# Test your protected API ++This tutorial is part of a series that helps you build and test a protected web API that is registered in an Azure Active Directory (Azure AD) for customers tenant. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Test a protected web API using a lightweight daemon app that calls the web API ++## Prerequisites ++Before going through this article, ensure you have a [protected web API](./tutorial-protect-web-api-dotnet-core-build-app.md) to use for testing purposes. ++## Register the daemon app ++++## Assign app role to your daemon app ++Apps authenticating by themselves require app permissions. +++## Write code ++1. Initialize a .NET console app and navigate to its root folder ++ ```dotnetcli + dotnet new console -o MyTestApp + cd MyTestApp + ``` +1. Install MSAL to help you with handling authentication by running the following command: + + ```dotnetcli + dotnet add package Microsoft.Identity.Client + ``` +1. Run your API project and note the port on which it's running. +1. Open the *Program.cs* file and replace the "Hello world" code with the following code. ++ ```csharp + using System; + using System.Net.Http; + using System.Net.Http.Headers; ++ HttpClient client = new HttpClient(); ++ var response = await client.GetAsync("https://localhost:<your-api-port>/api/todolist"); + Console.WriteLine("Your response is: " + response.StatusCode); + ``` ++ Navigate to the daemon app root directory and run app using the command `dotnet run`. This code sends a request without an access token. You should see the string: *Your response is: Unauthorized* printed in your console. +1. Remove the code in step 4 and replace with the following to test your API by sending a request with a valid access token. ++ ```csharp + using Microsoft.Identity.Client; + using System; + using System.Net.Http; + using System.Net.Http.Headers; ++ HttpClient client = new HttpClient(); ++ var clientId = "<your-daemon-app-client-id>"; + var clientSecret = "<your-daemon-app-secret>"; + var scopes = new[] {"api://<your-web-api-application-id>/.default"}; + var tenantName= "<your-tenant-name>"; + var authority = $"https://{tenantName}.ciamlogin.com/"; ++ var app = ConfidentialClientApplicationBuilder + .Create(clientId) + .WithAuthority(authority) + .WithClientSecret(clientSecret) + .Build(); ++ var result = await app.AcquireTokenForClient(scopes).ExecuteAsync(); ++ client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken); + var response = await client.GetAsync("https://localhost:44351/api/todolist"); + Console.WriteLine("Your response is: " + response.StatusCode); + ``` ++ Navigate to the daemon app root directory and run app using the command `dotnet run`. This code sends a request with a valid access token. You should see the string: *Your response is: OK* printed in your console. |
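To also exercise the write permission (*ToDoList.ReadWrite.All*), you can extend the test app with a POST call. This is a hedged sketch that reuses the `client` variable from the previous snippet (which already carries the bearer token) and assumes the same base address and the `ToDo` shape from the web API tutorial; a successful call should print *Created*.

```csharp
// Add these usings at the top of Program.cs alongside the existing ones.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Keep the token acquisition code from the previous step; 'client' already has
// the Authorization header set with the app-only access token.

var newToDo = new
{
    // With an app-only token, the API lets the caller set the owner explicitly.
    Owner = Guid.NewGuid(),
    Description = "Sample to-do created by the test daemon"
};

var postResponse = await client.PostAsync(
    "https://localhost:44351/api/todolist",
    new StringContent(JsonSerializer.Serialize(newToDo), Encoding.UTF8, "application/json"));

Console.WriteLine("POST response: " + postResponse.StatusCode);
```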
active-directory | Tutorial Single Page App React Sign In Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-configure-authentication.md | + + Title: Tutorial - Handle authentication flows in a React single-page app +description: Learn how to configure authentication for a React single-page app (SPA) with your Azure Active Directory (AD) for customers tenant. ++++++++ Last updated : 06/09/2023++#Customer intent: As a developer, I want to learn how to configure a React single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Handle authentication flows in a React single-page app ++In the [previous article](./tutorial-single-page-app-react-sign-in-prepare-app.md), you created a React single-page app (SPA) and prepared it for authentication with your Azure Active Directory (Azure AD) for customers tenant. In this article, you'll learn how to handle authentication flows in your app by adding components. ++In this tutorial; ++> [!div class="checklist"] +> * Add a *DataDisplay* component to the app +> * Add a *ProfileContent* component to the app +> * Add a *PageLayout* component to the app ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare an single-page app for authentication](./tutorial-single-page-app-react-sign-in-prepare-app.md). ++## Add components to the application ++Functional components are the building blocks of React apps, and are used to build the sign-in and sign-out experiences in your React SPA. ++### Add the DataDisplay component ++1. Open *src/components/DataDisplay.jsx* and add the following code snippet ++ ```jsx + import { Table } from 'react-bootstrap'; + import { createClaimsTable } from '../utils/claimUtils'; + + import '../styles/App.css'; + + export const IdTokenData = (props) => { + const tokenClaims = createClaimsTable(props.idTokenClaims); + + const tableRow = Object.keys(tokenClaims).map((key, index) => { + return ( + <tr key={key}> + {tokenClaims[key].map((claimItem) => ( + <td key={claimItem}>{claimItem}</td> + ))} + </tr> + ); + }); + return ( + <> + <div className="data-area-div"> + <p> + See below the claims in your <strong> ID token </strong>. For more information, visit:{' '} + <span> + <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/id-tokens#claims-in-an-id-token"> + docs.microsoft.com + </a> + </span> + </p> + <div className="data-area-div"> + <Table responsive striped bordered hover> + <thead> + <tr> + <th>Claim</th> + <th>Value</th> + <th>Description</th> + </tr> + </thead> + <tbody>{tableRow}</tbody> + </Table> + </div> + </div> + </> + ); + }; + ``` ++1. Save the file. ++### Add the NavigationBar component ++1. 
Open *src/components/NavigationBar.jsx* and add the following code snippet ++ ```jsx + import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from '@azure/msal-react'; + import { Navbar, Button } from 'react-bootstrap'; + import { loginRequest } from '../authConfig'; + + export const NavigationBar = () => { + const { instance } = useMsal(); + + const handleLoginRedirect = () => { + instance.loginRedirect(loginRequest).catch((error) => console.log(error)); + }; + + const handleLogoutRedirect = () => { + instance.logoutRedirect().catch((error) => console.log(error)); + }; + + /** + * Most applications will need to conditionally render certain components based on whether a user is signed in or not. + * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will + * only render their children if a user is authenticated or unauthenticated, respectively. + */ + return ( + <> + <Navbar bg="primary" variant="dark" className="navbarStyle"> + <a className="navbar-brand" href="/"> + Microsoft identity platform + </a> + <AuthenticatedTemplate> + <div className="collapse navbar-collapse justify-content-end"> + <Button variant="warning" onClick={handleLogoutRedirect}> + Sign out + </Button> + </div> + </AuthenticatedTemplate> + <UnauthenticatedTemplate> + <div className="collapse navbar-collapse justify-content-end"> + <Button onClick={handleLoginRedirect}>Sign in</Button> + </div> + </UnauthenticatedTemplate> + </Navbar> + </> + ); + }; + ``` ++1. Save the file. ++### Add the PageLayout component ++1. Open *src/components/PageLayout.jsx* and add the following code snippet ++ ```jsx + import { AuthenticatedTemplate } from '@azure/msal-react'; + + import { NavigationBar } from './NavigationBar.jsx'; + + export const PageLayout = (props) => { + /** + * Most applications will need to conditionally render certain components based on whether a user is signed in or not. + * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will + * only render their children if a user is authenticated or unauthenticated, respectively. + */ + return ( + <> + <NavigationBar /> + <br /> + <h5> + <center>Welcome to the Microsoft Authentication Library For React Tutorial</center> + </h5> + <br /> + {props.children} + <br /> + <AuthenticatedTemplate> + <footer> + <center> + How did we do? + <a + href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_ivMYEeUKlEq8CxnMPgdNZUNDlUTTk2NVNYQkZSSjdaTk5KT1o4V1VVNS4u" + rel="noopener noreferrer" + target="_blank" + > + {' '} + Share your experience! + </a> + </center> + </footer> + </AuthenticatedTemplate> + </> + ); + } + ``` ++1. Save the file. ++## Next steps ++> [!div class="nextstepaction"] +> [Sign in and sign out of the React SPA](./tutorial-single-page-app-react-sign-in-sign-out.md) |
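The `IdTokenData` component imports `createClaimsTable` from *src/utils/claimUtils.js*, which this article doesn't show. Purely as an illustration of the shape that helper needs to return (an object whose values are `[claim, value, description]` rows for the three-column table), a minimal hypothetical version might look like the following; it isn't the sample's implementation.

```javascript
// Hypothetical minimal sketch of src/utils/claimUtils.js (not the tutorial's version).
// It turns the ID token claims object into rows of [claim, value, description],
// which is the shape IdTokenData renders into the three-column table.
export const createClaimsTable = (idTokenClaims) => {
    const claimsTable = {};

    Object.entries(idTokenClaims || {}).forEach(([claim, value], index) => {
        claimsTable[index] = [
            claim, // claim name, for example "aud" or "name"
            Array.isArray(value) ? value.join(', ') : String(value), // claim value
            describeClaim(claim), // short human-readable description
        ];
    });

    return claimsTable;
};

// Placeholder descriptions only; the helper in the actual sample is more complete.
const describeClaim = (claim) => {
    const descriptions = {
        aud: "Identifies the intended recipient of the token (the app's client ID).",
        iss: 'Identifies the security token service that issued the token.',
        name: "The user's display name.",
        preferred_username: 'The primary username of the user.',
    };
    return descriptions[claim] || 'See the ID token claims reference for details.';
};
```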
active-directory | Tutorial Single Page App React Sign In Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-prepare-app.md | + + Title: Tutorial - Prepare a React single-page app (SPA) for authentication in a customer tenant +description: Learn how to prepare a React single-page app (SPA) for authentication with your Azure Active Directory (AD) for customers tenant. ++++++ Last updated : 05/23/2023++#Customer intent: As a dev, devops, or IT admin, I want to learn how to enable authentication in my own React single-page app +++# Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant ++In the [previous article](./tutorial-single-page-app-react-sign-in-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This tutorial demonstrates how to create a React single-page app using `npm` and create the files needed for authentication and authorization. ++In this tutorial: ++> [!div class="checklist"] +> * Create a React project in Visual Studio Code +> * Install identity and bootstrap packages +> * Configure the settings for the application ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate users in a React single-page app (SPA)](./tutorial-single-page-app-react-sign-in-prepare-tenant.md). +* Although any integrated development environment (IDE) that supports React applications can be used, this tutorial uses **Visual Studio Code**. You can download it [here](https://visualstudio.microsoft.com/downloads/). +* [Node.js](https://nodejs.org/en/download/). ++## Create a React project ++1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. +1. Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following commands to create a new React project with the name *reactspalocal*, change to the new directory, and start the React project. A web browser will open with the address `http://localhost:3000/` by default. The browser remains open and re-renders for every saved change. ++ ```powershell + npx create-react-app reactspalocal + cd reactspalocal + npm start + ``` +1. Create additional folders and files to achieve the following folder structure: ++ ```text + reactspalocal + ├─── public + │ └─── index.html + └─── src + ├─── components + │ └─── DataDisplay.jsx + │ └─── NavigationBar.jsx + │ └─── PageLayout.jsx + └─── styles + │ └─── App.css + │ └─── index.css + └─── utils + │ └─── claimUtils.js + └── App.jsx + └── authConfig.js + └── index.js + ``` ++## Install app dependencies ++Identity-related **npm** packages must be installed in the project to enable user authentication. For project styling, **Bootstrap** is used. ++1. In the **Terminal** bar, select the **+** icon to create a new terminal. A new terminal window will open, enabling the other terminal to continue running in the background. +1. If necessary, navigate to *reactspalocal* and enter the following commands into the terminal to install the `msal` and `bootstrap` packages. 
++ ```powershell + npm install @azure/msal-browser @azure/msal-react + npm install react-bootstrap bootstrap + ``` ++## Create the authentication configuration file, *authConfig.js* ++1. In the *src* folder, open *authConfig.js* and add the following code snippet: ++ ```javascript + /* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. + */ + + import { LogLevel } from '@azure/msal-browser'; + + /** + * Configuration object to be passed to MSAL instance on creation. + * For a full list of MSAL.js configuration parameters, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md + */ + + export const msalConfig = { + auth: { + clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply. + authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace the placeholder with your tenant subdomain + redirectUri: '/', // Points to window.location.origin. You must register this URI on Azure Portal/App Registration. + postLogoutRedirectUri: '/', // Indicates the page to navigate after logout. + navigateToLoginRequestUrl: false, // If "true", will navigate back to the original request location before processing the auth code response. + }, + cache: { + cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO between tabs. + storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge + }, + system: { + loggerOptions: { + loggerCallback: (level, message, containsPii) => { + if (containsPii) { + return; + } + switch (level) { + case LogLevel.Error: + console.error(message); + return; + case LogLevel.Info: + console.info(message); + return; + case LogLevel.Verbose: + console.debug(message); + return; + case LogLevel.Warning: + console.warn(message); + return; + default: + return; + } + }, + }, + }, + }; + + /** + * Scopes you add here will be prompted for user consent during sign-in. + * By default, MSAL.js will add OIDC scopes (openid, profile, email) to any login request. + * For more information about OIDC scopes, visit: + * https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#openid-connect-scopes + */ + export const loginRequest = { + scopes: [], + }; + + /** + * An optional silentRequest object can be used to achieve silent SSO + * between applications by providing a "login_hint" property. + */ + // export const silentRequest = { + // scopes: ["openid", "profile"], + // loginHint: "example@domain.net" + // }; + ``` ++1. Replace the following values with the values from the Azure portal: + - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center. + - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +2. Save the file. ++## Modify *index.js* to include the authentication provider ++All parts of the app that require authentication must be wrapped in the [`MsalProvider`](/javascript/api/@azure/msal-react/#@azure-msal-react-msalprovider) component. 
You instantiate a [PublicClientApplication](/javascript/api/@azure/msal-browser/publicclientapplication) and then pass it to `MsalProvider`. ++1. In the *src* folder, open *index.js* and replace the contents of the file with the following code snippet to use the `msal` packages and bootstrap styling: ++ ```javascript + import React from 'react'; + import ReactDOM from 'react-dom/client'; + import App from './App'; + import { PublicClientApplication, EventType } from '@azure/msal-browser'; + import { msalConfig } from './authConfig'; + + import 'bootstrap/dist/css/bootstrap.min.css'; + import './styles/index.css'; + + /** + * MSAL should be instantiated outside of the component tree to prevent it from being re-instantiated on re-renders. + * For more, visit: https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md + */ + const msalInstance = new PublicClientApplication(msalConfig); + + // Default to using the first account if no account is active on page load + if (!msalInstance.getActiveAccount() && msalInstance.getAllAccounts().length > 0) { + // Account selection logic is app dependent. Adjust as needed for different use cases. + msalInstance.setActiveAccount(msalInstance.getAllAccounts()[0]); + } + + // Listen for sign-in event and set active account + msalInstance.addEventCallback((event) => { + if (event.eventType === EventType.LOGIN_SUCCESS && event.payload.account) { + const account = event.payload.account; + msalInstance.setActiveAccount(account); + } + }); + + const root = ReactDOM.createRoot(document.getElementById('root')); + root.render( + <App instance={msalInstance}/> + ); + ``` ++## Next steps ++> [!div class="nextstepaction"] +> [Configure SPA for authentication](./tutorial-single-page-app-react-sign-in-configure-authentication.md) |
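The event callback in *index.js* reacts only to successful sign-ins. If you also want the active account to stay consistent after a sign-out when several accounts are cached, one possible extension of the same `addEventCallback` pattern is sketched below; it assumes the `msalInstance` from the snippet above and is an illustration, not part of the tutorial.

```javascript
// Illustrative extension (not from the tutorial) of the event callback in index.js:
// also react to sign-out events so the active account doesn't point at a
// signed-out session when several accounts are cached.
import { EventType } from '@azure/msal-browser';

msalInstance.addEventCallback((event) => {
    if (event.eventType === EventType.LOGIN_SUCCESS && event.payload.account) {
        msalInstance.setActiveAccount(event.payload.account);
    }

    if (event.eventType === EventType.LOGOUT_SUCCESS) {
        // Fall back to any remaining cached account, or clear the selection.
        const remaining = msalInstance.getAllAccounts();
        msalInstance.setActiveAccount(remaining.length > 0 ? remaining[0] : null);
    }
});
```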
active-directory | Tutorial Single Page App React Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-prepare-tenant.md | + + Title: Tutorial - Prepare your customer tenant to authenticate users in a React single-page app (SPA) +description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a React single-page app (SPA). +++++++ Last updated : 05/23/2023+++#Customer intent: As a dev I want to prepare my customer tenant for building a single-page app (SPA) with React +++# Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA) ++This tutorial series demonstrates how to build a React single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences. ++In this tutorial: ++> [!div class="checklist"] +> * Register a SPA in the Microsoft Entra admin center, and record its identifiers +> * Define the platform and URLs +> * Grant permissions to the web application to access the Microsoft Graph API +> * Create a sign-in and sign-up user flow in the Microsoft Entra admin center +> * Associate your SPA with the user flow ++## Prerequisites ++- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions: ++ * Application administrator + * Application developer + * Cloud application administrator ++- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one. ++## Register the SPA and record identifiers +++## Add a platform redirect URL +++## Grant sign-in permissions +++## Create a sign-in and sign-up user flow +++## Associate the application with your user flow +++## Next steps ++> [!div class="nextstepaction"] +> [Prepare React SPA](./tutorial-single-page-app-react-sign-in-prepare-app.md) |
active-directory | Tutorial Single Page App React Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-sign-out.md | + + Title: Tutorial - Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant +description: Learn how to configure a React single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant. +++++++ Last updated : 05/23/2023+++#Customer intent: As a developer I want to add sign-in and sign-out functionality to my React single-page app +++# Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant ++In the [previous article](./tutorial-single-page-app-react-sign-in-configure-authentication.md), you created a React single-page app (SPA) in Visual Studio Code and configured it for authentication. This tutorial shows you how to add sign-in and sign-out functionality to the app. ++In this tutorial: ++> [!div class="checklist"] +> * Create a page layout and add the sign-in and sign-out experience +> * Replace the default function to render authenticated information +> * Sign in and sign out of the application using the user flow ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare a single-page app for authentication](./tutorial-single-page-app-react-sign-in-prepare-app.md). +++## Change filename and add function to render authenticated information ++By default, the application runs via a JavaScript file called *App.js*. It needs to be changed to a *.jsx* file, which is an extension that allows a developer to write HTML in React. ++1. Rename *App.js* to *App.jsx*. +1. Replace the existing code with the following snippet: ++ ```javascript + import { MsalProvider, AuthenticatedTemplate, useMsal, UnauthenticatedTemplate } from '@azure/msal-react'; + import { Container, Button } from 'react-bootstrap'; + import { PageLayout } from './components/PageLayout'; + import { IdTokenData } from './components/DataDisplay'; + import { loginRequest } from './authConfig'; + + import './styles/App.css'; + + /** + * Most applications will need to conditionally render certain components based on whether a user is signed in or not. + * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will + * only render their children if a user is authenticated or unauthenticated, respectively. For more, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md + */ + const MainContent = () => { + /** + * useMsal is a hook that returns the PublicClientApplication instance, + * that tells you what msal is currently doing. For more, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md + */ + const { instance } = useMsal(); + const activeAccount = instance.getActiveAccount(); + + const handleRedirect = () => { + instance + .loginRedirect({ + ...loginRequest, + prompt: 'create', + }) + .catch((error) => console.log(error)); + }; + return ( + <div className="App"> + <AuthenticatedTemplate> + {activeAccount ? 
( + <Container> + <IdTokenData idTokenClaims={activeAccount.idTokenClaims} /> + </Container> + ) : null} + </AuthenticatedTemplate> + <UnauthenticatedTemplate> + <Button className="signInButton" onClick={handleRedirect} variant="primary"> + Sign up + </Button> + </UnauthenticatedTemplate> + </div> + ); + }; + + + /** + * msal-react is built on the React context API and all parts of your app that require authentication must be + * wrapped in the MsalProvider component. You will first need to initialize an instance of PublicClientApplication + * then pass this to MsalProvider as a prop. All components underneath MsalProvider will have access to the + * PublicClientApplication instance via context as well as all hooks and components provided by msal-react. For more, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md + */ + const App = ({ instance }) => { + return ( + <MsalProvider instance={instance}> + <PageLayout> + <MainContent /> + </PageLayout> + </MsalProvider> + ); + }; + + export default App; + ``` ++## Run your project and sign in ++All the required code snippets have been added, so the application can now be tested in a web browser. ++1. The application should already be running in your terminal. If not, run the following command to start your app. ++ ```powershell + npm start + ``` ++1. Open a web browser and navigate to `http://localhost:3000/` if you are not automatically redirected. +1. For the purposes of this tutorial, choose the **Sign in using Popup** option. +1. After the popup window appears with the sign-in options, select the account with which to sign in. +1. A second window may appear indicating that a code will be sent to your email address. If this happens, select **Send code**. Open the email from the Microsoft account team, and enter the 7-digit single-use code. Once entered, select **Sign in**. +1. For **Stay signed in**, you can select either **No** or **Yes**. +1. The app will now ask for permission to sign in and access data. Select **Accept** to continue. ++## Sign out of the application ++1. To sign out of the application, select **Sign out** in the navigation bar. +1. A window appears asking which account to sign out of. +1. Upon successful sign out, a final window appears advising you to close all browser windows. ++## Next steps ++> [!div class="nextstepaction"] +> [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
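The test steps above mention a **Sign in using Popup** option, while the components in this series use the redirect APIs (`loginRedirect` and `logoutRedirect`). If you'd like to offer a popup flow as well, a minimal illustrative handler that reuses the same `loginRequest` from *authConfig.js* might look like the following; the `SignInPopupButton` component is hypothetical and not part of the tutorial's code.

```jsx
// Hypothetical component (not part of the tutorial): sign in with a popup window
// instead of a full-page redirect, reusing loginRequest from authConfig.js.
// Assumes it lives in src/components/ like the other components.
import { useMsal } from '@azure/msal-react';
import { loginRequest } from '../authConfig';

export const SignInPopupButton = () => {
    const { instance } = useMsal();

    const handleLoginPopup = () => {
        // loginPopup opens the sign-up/sign-in user flow in a popup window
        // and resolves with the authentication result when it completes.
        instance.loginPopup(loginRequest).catch((error) => console.log(error));
    };

    return <button onClick={handleLoginPopup}>Sign in using Popup</button>;
};
```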
active-directory | Tutorial Web App Node Sign In Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-web-app-node-sign-in-prepare-app.md | + + Title: 'Tutorial: Prepare a Node.js web application for authentication' +description: Learn how to create a Node web app project, then prepare it for authentication +++++++++ Last updated : 07/27/2023++#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my own Node.js web app with Azure Active Directory (Azure AD) for customers tenant +++# Tutorial: Prepare a Node.js web application for authentication ++In the [Tutorial: Prepare your customer tenant to sign in users in a Node.js web app](tutorial-web-app-node-sign-in-prepare-tenant.md) tutorial, you prepared your customer tenant to sign in users. In this tutorial, you create a Node.js (Express) project and organize all the folders and files you require. You enable sign-in for the application you prepare here in the next tutorial. This Node.js (Express) web application's views use [Handlebars](https://handlebarsjs.com). ++In this tutorial, you'll: ++> [!div class="checklist"] +> +> - Create a Node.js project +> - Install dependencies +> - Add app views and UI components ++## Prerequisites ++- [Node.js](https://nodejs.org). ++- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor. ++- You've completed the steps in [Tutorial: Prepare your customer tenant to sign in users in a Node.js web app](tutorial-web-app-node-sign-in-prepare-tenant.md). ++## Create the Node.js project ++1. In a location of your choice on your computer, create a folder to host your node application, such as *ciam-sign-in-node-express-web-app*. ++1. In your terminal, change directory into your Node web app folder, such as `cd ciam-sign-in-node-express-web-app`, then run the following command to create a new Node.js project: ++ ```powershell + npm init -y + ``` + The `init -y` command creates a default *package.json* file for your Node.js project. ++1. Create additional folders and files to achieve the following project structure: ++ ``` + ciam-sign-in-node-express-web-app/ + ├── server.js + └── app.js + └── authConfig.js + └── package.json + └── .env + └── auth/ + └── AuthProvider.js + └── controller/ + └── authController.js + └── routes/ + └── auth.js + └── index.js + └── users.js + └── views/ + └── layout.hbs + └── error.hbs + └── id.hbs + └── index.hbs + └── public/stylesheets/ + └── style.css + ``` ++## Install app dependencies ++To install the required identity and Node.js-related npm packages, run the following command in your terminal: ++```powershell +npm install express dotenv hbs express-session axios cookie-parser http-errors morgan @azure/msal-node +``` ++## Build app UI components ++1. In your code editor, open the *views/index.hbs* file, then add the following code: ++ ```html + <h1>{{title}}</h1> + {{#if isAuthenticated }} + <p>Hi {{username}}!</p> + <a href="/users/id">View ID token claims</a> + <br> + <a href="/auth/signout">Sign out</a> + {{else}} + <p>Welcome to {{title}}</p> + <a href="/auth/signin">Sign in</a> + {{/if}} + ``` + In this view, if the user is authenticated, we show their username and links to visit the `/auth/signout` and `/users/id` endpoints; otherwise, the user needs to visit the `/auth/signin` endpoint to sign in. 
We define the Express routes for these endpoints later in this series. ++1. In your code editor, open the *views/id.hbs* file, then add the following code: ++ ```html + <h1>Azure AD for customers</h1> + <h3>ID Token</h3> + <table> + <tbody> + {{#each idTokenClaims}} + <tr> + <td>{{@key}}</td> + <td>{{this}}</td> + </tr> + {{/each}} + </tbody> + </table> + <a href="/">Go back</a> + ``` + We use this view to display the ID token claims that Azure AD for customers returns to this app after a user successfully signs in. ++1. In your code editor, open the *views/error.hbs* file, then add the following code: ++ ```html + <h1>{{message}}</h1> + <h2>{{error.status}}</h2> + <pre>{{error.stack}}</pre> + ``` ++ We use this view to display any errors that occur when the app runs. ++1. In your code editor, open the *views/layout.hbs* file, then add the following code: ++ ```html + <!DOCTYPE html> + <html> + <head> + <title>{{title}}</title> + <link rel='stylesheet' href='/stylesheets/style.css' /> + </head> + <body> + {{{content}}} + </body> + </html> + ``` + + The *layout.hbs* file is the layout file. It contains the HTML code that we require throughout the application views. ++1. In your code editor, open the *public/stylesheets/style.css* file, then add the following code: ++ ```css + body { + padding: 50px; + font: 14px "Lucida Grande", Helvetica, Arial, sans-serif; + } + + a { + color: #00B7FF; + } + ``` ++## Next steps ++> [!div class="nextstepaction"] +> [Add sign in and sign out >](tutorial-web-app-node-sign-in-sign-out.md) |
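The views and stylesheet above are wired together by *app.js*, which you add from the sample repository in the next tutorial. Purely as orientation, and not the sample's actual code, the kind of wiring involved looks roughly like the sketch below. It assumes the dependencies installed earlier (`express`, `hbs`, `express-session`) and the route modules created in the next tutorial; the session secret is a placeholder.

```javascript
// Rough sketch of the kind of wiring the sample's app.js performs (illustrative only;
// use the linked sample file in the next tutorial for the real implementation).
const path = require('path');
const express = require('express');
const session = require('express-session');

const app = express();

// Handlebars (hbs) renders the views in ./views.
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'hbs');

// Serve /public/stylesheets/style.css and other static assets, and parse form posts.
app.use(express.static(path.join(__dirname, 'public')));
app.use(express.urlencoded({ extended: false }));

// Session support is needed because the auth handlers store tokens and
// account info in req.session. The secret below is a placeholder value.
app.use(
    session({
        secret: 'Enter_a_Session_Secret_Here', // placeholder, not a real value
        resave: false,
        saveUninitialized: false,
        cookie: { httpOnly: true, secure: false }, // set secure: true behind HTTPS
    })
);

// Route modules are created in the next tutorial.
app.use('/', require('./routes/index'));
app.use('/users', require('./routes/users'));
app.use('/auth', require('./routes/auth'));

module.exports = app;
```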
active-directory | Tutorial Web App Node Sign In Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-web-app-node-sign-in-prepare-tenant.md | + + Title: 'Tutorial: Prepare your customer tenant to sign in users in a Node.js web app' +description: Learn how to prepare your Azure Active Directory (Azure AD) tenant for customers to sign in users in your Node.js web application. +++++++++ Last updated : 07/27/2023++#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my own Node.js web app with Azure Active Directory (Azure AD) for customers tenant +++# Tutorial: Prepare your customer tenant to sign in users in a Node.js web app ++This tutorial demonstrates how to prepare your Azure Active Directory (Azure AD) for customers tenant to sign in users in a Node.js web application. +++In this tutorial, you'll: ++> [!div class="checklist"] +> +> - Register a web application in the Microsoft Entra admin center. +> - Create a sign-in and sign-out user flow in the Microsoft Entra admin center. +> - Associate your web application with the user flow. +++If you've already registered a web application in the Microsoft Entra admin center, and associated it with a user flow, you can skip the steps in this article and move to [Prepare your Node.js web app](tutorial-web-app-node-sign-in-prepare-app.md). ++## Prerequisites ++- An Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>. ++## Register the web app +++## Add app client secret +++## Grant API permissions +++## Create a user flow +++## Associate the web application with the user flow +++## Collect your app registration details ++Make sure you record the following details for use in later steps: ++- The *Application (client) ID* of the client web app that you registered. +- The *Directory (tenant) subdomain* where you registered your web app. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +- The *Client secret* value for the web app you created. ++## Next steps ++> [!div class="nextstepaction"] +> [Start building your Node.js web app >](tutorial-web-app-node-sign-in-prepare-app.md) |
active-directory | Tutorial Web App Node Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-web-app-node-sign-in-sign-out.md | + + Title: 'Tutorial: Add sign-in and sign-out in your Node.js web application' +description: Learn how to add sign-in, sign-up and sign-out in your Node.js web application. +++++++++ Last updated : 07/27/2023++#Customer intent: As a dev, devops, I want to learn about how to enable authentication in my own Node.js web app with Azure Active Directory (Azure AD) for customers tenant +++# Tutorial: Add sign-in and sign-out in your Node.js web application ++In the [Tutorial: Prepare a Node.js web application for authentication](tutorial-web-app-node-sign-in-prepare-app.md) tutorial, you created a Node.js web app. In this tutorial, you add sign-in, sign-up, and sign-out to the Node.js web app. To simplify adding authentication to the Node.js web app, you use the [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node). The sign-in flow uses the OpenID Connect (OIDC) authentication protocol, which securely signs in users. ++In this tutorial, you'll: ++> [!div class="checklist"] +> +> - Add sign-in and sign-out logic +> - View ID token claims +> - Run the app and test the sign-in and sign-out experience. ++## Prerequisites ++- You've completed the steps in [Tutorial: Prepare a Node.js web application for authentication](tutorial-web-app-node-sign-in-prepare-app.md). ++## Create MSAL configuration object ++In your code editor, open the *authConfig.js* file, then add the following code: ++```javascript +require('dotenv').config(); ++const TENANT_SUBDOMAIN = process.env.TENANT_SUBDOMAIN || 'Enter_the_Tenant_Subdomain_Here'; +const REDIRECT_URI = process.env.REDIRECT_URI || 'http://localhost:3000/auth/redirect'; +const POST_LOGOUT_REDIRECT_URI = process.env.POST_LOGOUT_REDIRECT_URI || 'http://localhost:3000'; ++/** + * Configuration object to be passed to MSAL instance on creation. + * For a full list of MSAL Node configuration parameters, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md + */ +const msalConfig = { + auth: { + clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID + authority: process.env.AUTHORITY || `https://${TENANT_SUBDOMAIN}.ciamlogin.com/`, // replace "Enter_the_Tenant_Subdomain_Here" with your tenant name + clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app registration in Azure portal + }, + system: { + loggerOptions: { + loggerCallback(loglevel, message, containsPii) { + console.log(message); + }, + piiLoggingEnabled: false, + logLevel: 'Info', + }, + }, +}; ++module.exports = { + msalConfig, + REDIRECT_URI, + POST_LOGOUT_REDIRECT_URI, + TENANT_SUBDOMAIN +}; +``` ++The `msalConfig` object contains a set of configuration options that you use to customize the behavior of your authentication flows. ++In your *authConfig.js* file, replace: ++- `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier. ++- `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. 
If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). + +- `Enter_the_Client_Secret_Here` with the app secret value you copied earlier. ++If you use the *.env* file to store your configuration information: ++1. In your code editor, open the *.env* file, then add the following code. ++ ``` + CLIENT_ID=Enter_the_Application_Id_Here + TENANT_SUBDOMAIN=Enter_the_Tenant_Subdomain_Here + CLIENT_SECRET=Enter_the_Client_Secret_Here + REDIRECT_URI=http://localhost:3000/auth/redirect + POST_LOGOUT_REDIRECT_URI=http://localhost:3000 + ``` ++1. Replace the `Enter_the_Application_Id_Here`, `Enter_the_Tenant_Subdomain_Here`, and `Enter_the_Client_Secret_Here` placeholders as explained earlier. ++You export the `msalConfig`, `REDIRECT_URI`, `TENANT_SUBDOMAIN`, and `POST_LOGOUT_REDIRECT_URI` variables from the *authConfig.js* file, which makes them accessible wherever you require the file. ++## Add Express routes ++The Express routes provide the endpoints that enable us to execute operations such as sign-in, sign-out, and viewing ID token claims. ++### App entry point ++In your code editor, open the *routes/index.js* file, then add the following code: ++```javascript +const express = require('express'); +const router = express.Router(); ++router.get('/', function (req, res, next) { + res.render('index', { + title: 'MSAL Node & Express Web App', + isAuthenticated: req.session.isAuthenticated, + username: req.session.account?.username !== '' ? req.session.account?.username : req.session.account?.name, + }); +}); +module.exports = router; +``` ++The `/` route is the entry point to the application. It renders the *views/index.hbs* view that you created earlier in [Build app UI components](tutorial-web-app-node-sign-in-prepare-app.md#build-app-ui-components). `isAuthenticated` is a boolean variable that determines what you see in the view. ++### Sign in and sign out ++1. In your code editor, open the *routes/auth.js* file, then add the code from [auth.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/1-Authentication/5-sign-in-express/App/routes/auth.js) to it. ++1. In your code editor, open the *controller/authController.js* file, then add the code from [authController.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/1-Authentication/5-sign-in-express/App/controller/authController.js) to it. ++1. In your code editor, open the *auth/AuthProvider.js* file, then add the code from [AuthProvider.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/1-Authentication/5-sign-in-express/App/auth/AuthProvider.js) to it. ++ The `/signin`, `/signout` and `/redirect` routes are defined in the *routes/auth.js* file, but you implement their logic in the *auth/AuthProvider.js* class. ++- The `login` method handles the `/signin` route: + + - It initiates the sign-in flow by triggering the first leg of the auth code flow. + + - It initializes a [confidential client application](../../../active-directory/develop/msal-client-applications.md) instance by using the MSAL configuration object, `msalConfig`, that you created earlier. 
+ + ```javascript + const msalInstance = this.getMsalInstance(this.config.msalConfig); + ``` + + The `getMsalInstance` method is defined as: ++ ```javascript + getMsalInstance(msalConfig) { + return new msal.ConfidentialClientApplication(msalConfig); + } + ``` + - The first leg of the auth code flow generates an authorization code request URL, then redirects to that URL to obtain the authorization code. This first leg is implemented in the `redirectToAuthCodeUrl` method. Notice how we use MSAL's [getAuthCodeUrl](/javascript/api/@azure/msal-node/confidentialclientapplication#@azure-msal-node-confidentialclientapplication-getauthcodeurl) method to generate the authorization code URL: ++ ```javascript + //... + const authCodeUrlResponse = await msalInstance.getAuthCodeUrl(req.session.authCodeUrlRequest); + //... + ``` + + We then redirect to the authorization code URL itself. ++ ```javascript + //... + res.redirect(authCodeUrlResponse); + //... + ``` + ++- The `handleRedirect` method handles the `/redirect` route: + + - You set this URL as the Redirect URI for the web app in the Microsoft Entra admin center earlier in [Register the web app](sample-web-app-node-sign-in.md#register-the-web-app). + + - This endpoint implements the second leg of the auth code flow. It uses the authorization code to request an ID token by using MSAL's [acquireTokenByCode](/javascript/api/@azure/msal-node/confidentialclientapplication#@azure-msal-node-confidentialclientapplication-acquiretokenbycode) method. + + ```javascript + //... + const tokenResponse = await msalInstance.acquireTokenByCode(authCodeRequest, req.body); + //... + ``` + + - After you receive a response, you can create an Express session and store whatever information you want in it. You need to include `isAuthenticated` and set it to `true`: + + ```javascript + //... + req.session.idToken = tokenResponse.idToken; + req.session.account = tokenResponse.account; + req.session.isAuthenticated = true; + //... + ``` ++- The `logout` method handles the `/signout` route: + + ```javascript + async logout(req, res, next) { + /** + * Construct a logout URI and redirect the user to end the + * session with Azure AD. For more information, visit: + * https://docs.microsoft.com/azure/active-directory/develop/v2-protocols-oidc#send-a-sign-out-request + */ + const logoutUri = `${this.config.msalConfig.auth.authority}${TENANT_SUBDOMAIN}.onmicrosoft.com/oauth2/v2.0/logout?post_logout_redirect_uri=${this.config.postLogoutRedirectUri}`; ++ req.session.destroy(() => { + res.redirect(logoutUri); + }); + } + ``` + - It initiates the sign-out request. + + - When you want to sign the user out of the application, it isn't enough to end the user's session. You must redirect the user to the *logoutUri*. Otherwise, the user might be able to reauthenticate to your applications without reentering their credentials. If the name of your tenant is *contoso*, then the *logoutUri* looks similar to `https://contoso.ciamlogin.com/contoso.onmicrosoft.com/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. 
+ + +### View ID token claims ++In your code editor, open the *routes/users.js* file, then add the following code: ++```javascript +const express = require('express'); +const router = express.Router(); ++// custom middleware to check auth state +function isAuthenticated(req, res, next) { + if (!req.session.isAuthenticated) { + return res.redirect('/auth/signin'); // redirect to sign-in route + } ++ next(); +}; ++router.get('/id', + isAuthenticated, // check if user is authenticated + async function (req, res, next) { + res.render('id', { idTokenClaims: req.session.account.idTokenClaims }); + } +); +module.exports = router; +``` ++If the user is authenticated, the `/id` route displays ID token claims by using the *views/id.hbs* view. You added this view earlier in [Build app UI components](tutorial-web-app-node-sign-in-prepare-app.md#build-app-ui-components). ++To extract a specific ID token claim, such as *given name*: ++```javascript +const givenName = req.session.account.idTokenClaims.given_name +``` ++## Finalize your web app ++1. In your code editor, open the *app.js* file, then add the code from [app.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/1-Authentication/5-sign-in-express/App/app.js) to it. ++1. In your code editor, open the *server.js* file, then add the code from [server.js](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/blob/main/1-Authentication/5-sign-in-express/App/server.js) to it. ++1. In your code editor, open the *package.json* file, then update the `scripts` property to: ++ ```json + "scripts": { + "start": "node server.js" + } + ``` ++## Run and test the web app ++1. In your terminal, make sure you're in the project folder that contains your web app, such as `ciam-sign-in-node-express-web-app`. ++1. In your terminal, run the following command: ++ ```powershell + npm start + ``` ++1. Open your browser, then go to `http://localhost:3000`. You should see a page similar to the following screenshot: ++ :::image type="content" source="media/how-to-web-app-node-sample-sign-in/web-app-node-sign-in.png" alt-text="Screenshot of sign in into a node web app."::: ++1. After the page finishes loading, select the **Sign in** link. You're prompted to sign in. ++1. On the sign-in page, type your **Email address**, select **Next**, type your **Password**, then select **Sign in**. If you don't have an account, select the **No account? Create one** link, which starts the sign-up flow. ++1. If you choose the sign-up option, after filling in your email, one-time passcode, new password, and more account details, you complete the whole sign-up flow. You see a page similar to the following screenshot. You see a similar page if you choose the sign-in option. ++ :::image type="content" source="media/how-to-web-app-node-sample-sign-in/web-app-node-view-claims.png" alt-text="Screenshot of view ID token claims."::: ++1. Select **Sign out** to sign the user out of the web app or select **View ID token claims** to view all ID token claims. ++## Next steps ++Learn how to: ++- [Enable password reset](how-to-enable-password-reset-customers.md). ++- [Customize the default branding](how-to-customize-branding-customers.md). + +- [Configure sign-in with Google](how-to-google-federation-customers.md). ++- [Use client certificate for authentication in your Node.js web app instead of a client secret](how-to-web-app-node-use-certificate.md). |
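The same `isAuthenticated` middleware pattern from *routes/users.js* can guard any additional route you add later. The following is an illustrative sketch only; the `/users/profile` route and the claims it selects are hypothetical and not part of the tutorial.

```javascript
// Illustrative only: reuse the isAuthenticated middleware pattern from
// routes/users.js to guard a hypothetical /users/profile route that renders
// a few selected ID token claims instead of the full table.
const express = require('express');
const router = express.Router();

function isAuthenticated(req, res, next) {
    if (!req.session.isAuthenticated) {
        return res.redirect('/auth/signin'); // send unauthenticated users to sign in
    }
    next();
}

router.get('/profile', isAuthenticated, (req, res) => {
    const claims = req.session.account.idTokenClaims;
    res.render('id', {
        idTokenClaims: {
            name: claims.name,
            preferred_username: claims.preferred_username,
            oid: claims.oid,
        },
    });
});

module.exports = router;
```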
active-directory | Invite Internal Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md | Sending an invitation to an existing internal account lets you retain that user ## Things to consider -- **Access to on-premises resources**: After the user is invited to B2B collaboration, they can still use their internal credentials to access on-premises resources. You can prevent this by resetting or changing the password on the internal account. The exception is [email one-time passcode authentication](one-time-passcode.md); if the user's authentication method is changed to one-time passcode, they won't be able to use their internal credentials anymore.+- **Access to on-premises resources**: After the user is invited to B2B collaboration, they can still use their internal credentials to access on-premises resources. You can prevent this by resetting or changing the password on the internal account. The exception is email one-time passcode authentication; if the user's authentication method is changed to one-time passcode, they won't be able to use their internal credentials anymore. - **Billing**: This feature doesn't change the UserType for the user, so it doesn't automatically switch the user's billing model to [External Identities monthly active user (MAU) pricing](external-identities-pricing.md). To activate MAU pricing for the user, change the UserType for the user to `guest`. Also note that your Azure AD tenant must be linked to an Azure subscription to activate MAU billing. You can use the Azure portal, PowerShell, or the invitation API to send a B2B in 1. Select the **Azure Active Directory** service. 1. Select **Users**. 1. Find the user in the list or use the search box. Then select the user.-1. In the **Overview** tab, under **My Feed**, select **B2B collaboration**. +1. In the **Overview** tab, under **My Feed**, select **Convert to external user**. - ![Screenshot of user profile Overview tab with B2B collaboration card](media/invite-internal-users/manage-b2b-collaboration-link.png) + :::image type="content" source="media/invite-internal-users/manage-b2b-collaboration-link.png" alt-text="Screenshot of user profile Overview tab with B2B collaboration card."::: > [!NOTE] > If the card says "Resend this B2B user's invitation or reset their redemption status." the user has already been invited to use external credentials for B2B collaboration. -1. Next to **Invite internal user to B2B collaboration?** select **Yes**, and then select **Done**. +1. Add an external email address and select **Send**. - ![Screenshot showing the invite internal user radio button](media/invite-internal-users/invite-internal-user-selector.png) + :::image type="content" source="media/invite-internal-users/invite-internal-user-selector.png" alt-text="Screenshot showing the convert to external user page."::: > [!NOTE] > If the option is unavailable, make sure the user's **Email** property is set to the external email address they should use for B2B collaboration. ContentType: application/json The response to the API is the same response you get when you invite a new guest user to the directory. ## Next steps +- [Add and invite guest users](add-users-administrator.md) +- [Customize invitations using API](customize-invitation-api.md) - [B2B collaboration invitation redemption](redemption-experience.md) |
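For reference, the invitation API mentioned above is the Microsoft Graph `/invitations` endpoint. The following is a hedged sketch of how such a call might be scripted from Node.js; it isn't taken from this article, and the request shape, in particular the `invitedUser` property used to target an existing internal account and the required permission, are assumptions you should verify against the Microsoft Graph invitation API reference before relying on them.

```javascript
// Hypothetical sketch: send a B2B invitation for an existing internal user by
// calling the Microsoft Graph invitations endpoint. Verify the payload against
// the Graph API reference; the invitedUser property here is an assumption.
async function inviteInternalUser(accessToken, internalUserId, externalEmail) {
    const response = await fetch('https://graph.microsoft.com/v1.0/invitations', {
        method: 'POST',
        headers: {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            invitedUserEmailAddress: externalEmail, // external email the user redeems with
            sendInvitationMessage: true,
            inviteRedirectUrl: 'https://myapps.microsoft.com',
            invitedUser: { id: internalUserId }, // assumed: targets the existing internal account
        }),
    });

    if (!response.ok) {
        throw new Error(`Invitation request failed: ${response.status}`);
    }
    return response.json();
}
```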
active-directory | Leave The Organization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md | If your organization allows users to remove themselves from external organizatio 1. Under **Other organizations you collaborate with** (or **Organizations** if you don't have a home organization), find the organization that you want to leave, and then select **Leave**. - ![Screenshot showing Leave organization option in the user interface.](media/leave-the-organization/leave-org.png) + :::image type="content" source="media/leave-the-organization/leave-org.png" alt-text="Screenshot showing Leave organization option in the user interface." lightbox="media/leave-the-organization/leave-org.png"::: 1. When asked to confirm, select **Leave**. 1. If you select **Leave** for an organization but you see the following message, it means you'll need to contact the organization's admin or privacy contact and ask them to remove you from their organization. |
active-directory | Entitlement Management Access Package Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md | Title: Create a new access package in entitlement management -description: Learn how to create a new access package of resources you want to share in Azure Active Directory entitlement management. + Title: Create an access package in entitlement management +description: Learn how to create an access package of resources that you want to share in Azure Active Directory entitlement management. documentationCenter: '' -#Customer intent: As an administrator, I want detailed information about the options available when creating a new access package so that the access package can be managed with minimal effort. +#Customer intent: As an administrator, I want detailed information about the options available when I'm creating a new access package so that the access package can be managed with minimal effort. -# Create a new access package in entitlement management +# Create an access package in entitlement management -An access package enables you to do a one-time setup of resources and policies that automatically administers access for the life of the access package. This article describes how to create a new access package. +An access package enables you to do a one-time setup of resources and policies that automatically administers access for the life of the access package. This article describes how to create an access package. ## Overview -All access packages must be put in a container called a catalog. A catalog defines what resources you can add to your access package. If you don't specify a catalog, your access package will be put into the general catalog. Currently, you can't move an existing access package to a different catalog. +All access packages must be in a container called a catalog. A catalog defines what resources you can add to your access package. If you don't specify a catalog, your access package goes in the general catalog. Currently, you can't move an existing access package to a different catalog. -An access package can be used to assign access to roles of multiple resources that are in the catalog. If you're an administrator or catalog owner, you can add resources to the catalog while creating an access package. -If you're an access package manager, you can't add resources you own to a catalog. You're restricted to using the resources available in the catalog. If you need to add resources to a catalog, you can ask the catalog owner. +An access package can be used to assign access to roles of multiple resources that are in the catalog. If you're an administrator or catalog owner, you can add resources to the catalog while you're creating an access package. -All access packages must have at least one policy for users to be assigned to the access package. Policies specify who can request the access package and also approval and lifecycle settings. When you create a new access package, you can create an initial policy for users in your directory, for users not in your directory, for administrator direct assignments only, or you can choose to create the policy later. +If you're an access package manager, you can't add resources that you own to a catalog. You're restricted to using the resources available in the catalog. If you need to add resources to a catalog, you can ask the catalog owner. 
-![Create an access package](./media/entitlement-management-access-package-create/access-package-create.png) +All access packages must have at least one policy for users to be assigned to them. Policies specify who can request the access package, along with approval and lifecycle settings. When you create an access package, you can create an initial policy for users in your directory, for users not in your directory, or for administrator direct assignments only. Or, you can choose to create the policy later. -Here are the high-level steps to create a new access package. +![Diagram of an example marketing catalog, including its resources and its access package.](./media/entitlement-management-access-package-create/access-package-create.png) -1. In Identity Governance, start the process to create a new access package. +Here are the high-level steps to create an access package: -1. Select the catalog you want to create the access package in. +1. In Identity Governance, start the process to create an access package. ++1. Select the catalog where you want to put the access package. 1. Add resource roles from resources in the catalog to your access package. -1. Specify an initial policy for users that can request access. +1. Specify an initial policy for users who can request access. -1. Specify any approval settings. +1. Specify approval settings. 1. Specify lifecycle settings. -## Start new access package +## Start the creation process [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager +To complete the following steps, you need a role of global administrator, Identity Governance administrator, user administrator, catalog owner, or access package manager. 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. Select **Azure Active Directory**, and then select **Identity Governance**. -1. In the left menu, select **Access packages**. +1. On the left menu, select **Access packages**. 1. Select **New access package**.- - ![Entitlement management in the Azure portal](./media/entitlement-management-shared/access-packages-list.png) -## Basics + ![Screenshot that shows the button for creating a new access package in the Azure portal.](./media/entitlement-management-shared/access-packages-list.png) ++## Configure basics On the **Basics** tab, you give the access package a name and specify which catalog to create the access package in. 1. Enter a display name and description for the access package. Users will see this information when they submit a request for the access package. -1. In the **Catalog** drop-down list, select the catalog you want to create the access package in. For example, you might have a catalog owner that manages all the marketing resources that can be requested. In this case, you could select the marketing catalog. +1. In the **Catalog** dropdown list, select the catalog where you want to put the access package. For example, you might have a catalog owner who manages all the marketing resources that can be requested. In this case, you could select the marketing catalog. - You'll only see catalogs you have permission to create access packages in. 
To create an access package in an existing catalog, you must be either a Global administrator, Identity Governance administrator or User administrator, or you must be a catalog owner or access package manager in that catalog. + You see only catalogs that you have permission to create access packages in. To create an access package in an existing catalog, you must be a global administrator, Identity Governance administrator, or user administrator. Or you must be a catalog owner or access package manager in that catalog. - ![Access package - Basics](./media/entitlement-management-access-package-create/basics.png) + ![Screenshot that shows basic information for a new access package.](./media/entitlement-management-access-package-create/basics.png) - If you're a Global administrator, an Identity Governance administrator, a User administrator, or catalog creator and you would like to create your access package in a new catalog that's not listed, select **Create new catalog**. Enter the Catalog name and description and then select **Create**. + If you're a global administrator, an Identity Governance administrator, a user administrator, or catalog creator, and you want to create your access package in a new catalog that's not listed, select **Create new catalog**. Enter the catalog name and description, and then select **Create**. - The access package you're creating, and any resources included in it, will be added to the new catalog. You can also add additional catalog owners later, and add attributes to the resources you put in the catalog. Read [Add resource attributes in the catalog](entitlement-management-catalog-create.md#add-resource-attributes-in-the-catalog) to learn more about how to edit the attributes list for a specific catalog resource and the prerequisite roles. + The access package that you're creating, and any resources included in it, are added to the new catalog. Later, you can add more catalog owners or add attributes to the resources that you put in the catalog. To learn more about how to edit the attributes list for a specific catalog resource and the prerequisite roles, read [Add resource attributes in the catalog](entitlement-management-catalog-create.md#add-resource-attributes-in-the-catalog). -1. Select **Next**. +1. Select **Next: Resource roles**. -## Resource roles +## Select resource roles On the **Resource roles** tab, you select the resources to include in the access package. Users who request and receive the access package will receive all the resource roles, such as group membership, in the access package. -If you're not sure which resource roles to include, you can skip adding resource roles while creating the access package, and then [add resource roles](entitlement-management-access-package-resources.md) after you've created the access package. +If you're not sure which resource roles to include, you can skip adding them while creating the access package, and then [add them](entitlement-management-access-package-resources.md) later. -1. Select the resource type you want to add (**Groups and Teams**, **Applications**, or **SharePoint sites**). +1. Select the resource type that you want to add (**Groups and Teams**, **Applications**, or **SharePoint sites**). -1. In the Select pane that appears, select one or more resources from the list. +1. In the **Select applications** panel that appears, select one or more resources from the list. 
- ![Access package - Resource roles](./media/entitlement-management-access-package-create/resource-roles.png) + ![Screenshot that shows the panel for selecting applications for resource roles in a new access package.](./media/entitlement-management-access-package-create/resource-roles.png) - If you're creating the access package in the General catalog or a new catalog, you'll be able to pick any resource from the directory that you own. You must be at least a Global administrator, a User administrator, or Catalog creator. + If you're creating the access package in the general catalog or a new catalog, you can choose any resource from the directory that you own. You must be at least a global administrator, a user administrator, or catalog creator. - If you're creating the access package in an existing catalog, you can select any resource that is already in the catalog without owning it. + If you're creating the access package in an existing catalog, you can select any resource that's already in the catalog without owning it. - If you're a Global administrator, a User administrator, or catalog owner, you have the additional option of selecting resources you own that aren't yet in the catalog. If you select resources not currently in the selected catalog, these resources will also be added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, check the **See all** check box at the top of the Select pane. If you only want to select resources that are currently in the selected catalog, leave the check box **See all** unchecked (default state). + If you're a global administrator, a user administrator, or catalog owner, you have the additional option of selecting resources that you own but that aren't yet in the catalog. If you select resources not currently in the selected catalog, these resources are also added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, select the **See all** checkbox at the top of the panel. If you want to select only resources that are currently in the selected catalog, leave the **See all** checkbox cleared (the default state). -1. Once you've selected the resources, in the **Role** list, select the role you want users to be assigned for the resource. For more information on selecting the appropriate roles for a resource, read [add resource roles](entitlement-management-access-package-resources.md#add-resource-roles). +1. In the **Role** list, select the role that you want users to be assigned for the resource. For more information on selecting the appropriate roles for a resource, read [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles). - ![Access package - Resource role selection](./media/entitlement-management-access-package-create/resource-roles-role.png) + ![Screenshot that shows resource role selection for a new access package.](./media/entitlement-management-access-package-create/resource-roles-role.png) -1. Select **Next**. +1. Select **Next: Requests**. >[!NOTE]->You can add dynamic groups to a catalog and to an access package. However, you will be able to select only the Owner role when managing a dynamic group resource in an access package. +>You can add dynamic groups to a catalog and to an access package. However, you can select only the owner role when you're managing a dynamic group resource in an access package. 
-## Requests +## Create request policies -On the **Requests** tab, you create the first policy to specify who can request the access package and also approval settings. Later, you can create more request policies to allow additional groups of users to request the access package with their own approval settings. +On the **Requests** tab, you create the first policy to specify who can request the access package. You also configure approval settings. Later, you can create more request policies to allow additional groups of users to request the access package with their own approval settings. -![Access package - Requests tab](./media/entitlement-management-access-package-create/requests.png) +![Screenshot that shows the Requests tab for a new access package.](./media/entitlement-management-access-package-create/requests.png) -Depending on who you want to be able to request this access package, perform the steps in one of the following sections. +Depending on which users you want to be able to request this access package, perform the steps in one of the following sections. [!INCLUDE [Entitlement management request policy](../../../includes/active-directory-entitlement-management-request-policy.md)] [!INCLUDE [Entitlement management lifecycle policy](../../../includes/active-directory-entitlement-management-lifecycle-policy.md)] -## Review + create +## Review and create the access package On the **Review + create** tab, you can review your settings and check for any validation errors. -1. Review the access package's settings +1. Review the access package's settings. - ![Access package - Enable policy setting](./media/entitlement-management-access-package-create/review-create.png) + ![Screenshot that shows a summary of access package configuration.](./media/entitlement-management-access-package-create/review-create.png) 1. Select **Create** to create the access package. On the **Review + create** tab, you can review your settings and check for any v ## Create an access package programmatically -There are two ways to create an access package programmatically, through Microsoft Graph and through the PowerShell cmdlets for Microsoft Graph. +There are two ways to create an access package programmatically: through Microsoft Graph and through the PowerShell cmdlets for Microsoft Graph. -### Create an access package with Microsoft Graph +### Create an access package by using Microsoft Graph -You can create an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to +You can create an access package by using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to: -1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that aren't yet in the catalog. -1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when later creating an accessPackageResourceRoleScope. -1. 
[Create an accessPackage](/graph/tutorial-access-package-api). -1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package. -1. [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for each policy needed in the access package. +1. [List the accessPackageResources objects in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest object](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that aren't yet in the catalog. +1. [List the accessPackageResourceRoles object](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each `accessPackageResource` object in an `accessPackageCatalog` object. The user will use this list of roles to select a role later on, when creating an `accessPackageResourceRoleScope` object. +1. [Create an accessPackage object](/graph/tutorial-access-package-api). +1. [Create an accessPackageResourceRoleScope object](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package. +1. [Create an accessPackageAssignmentPolicy object](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for each policy needed in the access package. +### Create an access package by using Microsoft PowerShell -### Create an access package with Microsoft PowerShell +You can also create an access package in PowerShell by using the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or later. This script illustrates using the Microsoft Graph `beta` profile. -You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or later. This script illustrates using the Graph `beta` profile. --First, you would retrieve the ID of the catalog, and of the resources and their roles in that catalog that you wish to include in the access package, using a script similar to the following. +First, retrieve the ID of the catalog (and of the resources and their roles in that catalog) that you want to include in the access package. Use a script similar to the following example: ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All" $filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rr = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource" ``` -Then, create the access package. +Then, create the access package: ```powershell $params = @{ $params = @{ $ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params ```-Once the access package has been created, assign the resource roles to the access package. 
For example, if you wished to include the second resource role of the first resource returned earlier as a resource role of the new access package, you would use a script similar to the following. ++After you create the access package, assign the resource roles to it. For example, if you want to include the second resource role of the first resource returned earlier as a resource role of the new access package, you can use a script similar to this one: ```powershell $rparams = @{ $rparams = @{ New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams ``` -Finally, create the policies. In this policy, only the administrator can assign access, and there are no access reviews. See [create an assignment policy through PowerShell](entitlement-management-access-package-request-policy.md#create-an-access-package-assignment-policy-through-powershell) and [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for more examples. +Finally, create the policies. In this policy, only the administrator can assign access, and there are no access reviews. For more examples, see [Create an assignment policy through PowerShell](entitlement-management-access-package-request-policy.md#create-an-access-package-assignment-policy-through-powershell) and [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true). ```powershell New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams ## Next steps -- [Share link to request an access package](entitlement-management-access-package-settings.md)+- [Share a link to request an access package](entitlement-management-access-package-settings.md) - [Change resource roles for an access package](entitlement-management-access-package-resources.md)-- [Directly assign a user to the access package](entitlement-management-access-package-assignments.md)+- [Directly assign a user to an access package](entitlement-management-access-package-assignments.md) - [Create an access review for an access package](entitlement-management-access-reviews-create.md) |
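The PowerShell flow quoted above can be sketched end to end as a minimal example. The sketch below uses only the cmdlets shown in the change; the catalog object, display name, and description are hypothetical placeholders, and the body shape (`catalogId`, `displayName`, `description`) assumes the Microsoft Graph `beta` accessPackage schema that the article references.

```powershell
# Minimal sketch, assuming the beta accessPackage schema; names are placeholders.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# $catalog is assumed to already hold the target catalog object, retrieved as in
# the truncated snippet above.
$params = @{
    catalogId   = $catalog.Id
    displayName = "Sales resources"                        # hypothetical name
    description = "Access to sales applications and sites" # hypothetical description
}
$ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params

# Resource roles and request policies are then attached with
# New-MgEntitlementManagementAccessPackageResourceRoleScope and
# New-MgEntitlementManagementAccessPackageAssignmentPolicy, as shown in the diff.
$ap.Id
```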
active-directory | Entitlement Management External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md | The following diagram and steps provide an overview of how external users are gr 1. You [add a connected organization](entitlement-management-organization.md) for the Azure AD directory or domain you want to collaborate with. -1. You create an access package in your directory that includes a policy [For users not in your directory](entitlement-management-access-package-create.md#for-users-not-in-your-directory). +1. You create an access package in your directory that includes a policy [For users not in your directory](entitlement-management-access-package-create.md#allow-users-in-your-directory-to-request-the-access-package). 1. You send a [My Access portal link](entitlement-management-access-package-settings.md) to your contact at the external organization that they can share with their users to request the access package. |
active-directory | Entitlement Management Group Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-writeback.md | Using group writeback, you can now sync Microsoft 365 groups that are part of ac 1. Set the group to be written back to on-premises Active Directory. For instructions, see [Group writeback in the Azure Active Directory admin center](../enterprise-users/groups-write-back-portal.md). -1. Add the group to an access package as a resource role. See [Create a new access package](entitlement-management-access-package-create.md#resource-roles) for guidance. +1. Add the group to an access package as a resource role. See [Create a new access package](entitlement-management-access-package-create.md#select-resource-roles) for guidance. 1. Assign the user to the access package. See [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md#directly-assign-a-user) for instructions to directly assign a user. |
active-directory | Entitlement Management Logs And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md | Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub 1. Select **Azure Active Directory** then select **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace. -1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md#send-logs-to-azure-monitor) to send the Azure AD audit log to the Azure Monitor workspace. +1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to send the Azure AD audit log to the Azure Monitor workspace. ![Diagnostics settings pane](./media/entitlement-management-logs-and-reporting/audit-log-diagnostics-settings.png) |
active-directory | Entitlement Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md | There are several ways that you can configure entitlement management for your or ### Administrator: Assign employees access automatically (preview) -1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package) -1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#resource-roles) +1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) +1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles) 1. [Add an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md) ### Access package -1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package) -1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#resource-roles) -1. [Add a request policy to allow users in your directory to request access](entitlement-management-access-package-create.md#for-users-in-your-directory) -1. [Specify expiration settings](entitlement-management-access-package-create.md#lifecycle) +1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) +1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles) +1. [Add a request policy to allow users in your directory to request access](entitlement-management-access-package-create.md#allow-users-in-your-directory-to-request-the-access-package) +1. [Specify expiration settings](entitlement-management-access-package-create.md#specify-a-lifecycle) ### Requestor: Request access to resources There are several ways that you can configure entitlement management for your or ### Access package -1. [Create a new access package](entitlement-management-access-package-create.md#start-new-access-package) +1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) 1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-resources.md#add-resource-roles) 1. [Add a request policy to allow users not in your directory to request access](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory)-1. [Specify expiration settings](entitlement-management-access-package-create.md#lifecycle) +1. [Specify expiration settings](entitlement-management-access-package-create.md#specify-a-lifecycle) 1. [Copy the link to request the access package](entitlement-management-access-package-settings.md) 1. Send the link to your external partner contact partner to share with their users There are several ways that you can configure entitlement management for your or 1. [Watch video: Day-to-day management: Things have changed](https://www.microsoft.com/videoplayer/embed/RE3LD4Z) 1. Open the access package 1. [Open the lifecycle settings](entitlement-management-access-package-lifecycle-policy.md#open-lifecycle-settings)-1. [Update the expiration settings](entitlement-management-access-package-lifecycle-policy.md#lifecycle) +1. 
[Update the expiration settings](entitlement-management-access-package-lifecycle-policy.md#specify-a-lifecycle) ### Access package You can also manage access packages, catalogs, policies, requests and assignment ## Next steps - [Delegation and roles](entitlement-management-delegate.md)-- [Request process and email notifications](entitlement-management-process.md)+- [Request process and email notifications](entitlement-management-process.md) |
active-directory | Identity Governance Organizational Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-organizational-roles.md | If you have many resources, you can use a PowerShell script to [add each resourc Each organizational role definition can be represented with an [access package](entitlement-management-access-package-create.md) in that catalog. -You can use a PowerShell script to [create an access package in a catalog](entitlement-management-access-package-create.md#create-an-access-package-with-microsoft-powershell). +You can use a PowerShell script to [create an access package in a catalog](entitlement-management-access-package-create.md#create-an-access-package-by-using-microsoft-powershell). Once you've created an access package, then you link one or more of the roles of the resources in the catalog to the access package. This represents the permissions of the organizational role. |
active-directory | Choose Ad Authn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/choose-ad-authn.md | Details on decision questions: * **Effort**. Password hash synchronization requires the least effort regarding deployment, maintenance, and infrastructure. This level of effort typically applies to organizations that only need their users to sign in to Microsoft 365, SaaS apps, and other Azure AD-based resources. When turned on, password hash synchronization is part of the Azure AD Connect sync process and runs every two minutes. -* **User experience**. To improve users' sign-in experience, use [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md). If you can't join your Windows devices to Azure AD, we recommend deploying seamless SSO with password hash synchronization. Seamless SSO eliminates unnecessary prompts when users are signed in. +* **User experience**. To improve users' sign-in experience, use [Azure AD joined devices](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md). If you can't join your Windows devices to Azure AD, we recommend deploying seamless SSO with password hash synchronization. Seamless SSO eliminates unnecessary prompts when users are signed in. * **Advanced scenarios**. If organizations choose to, it's possible to use insights from identities with Azure AD Identity Protection reports with Azure AD Premium P2. An example is the leaked credentials report. Windows Hello for Business has [specific requirements when you use password hash synchronization](/windows/access-protection/hello-for-business/hello-identity-verification). [Azure AD Domain Services](../../../active-directory-domain-services/tutorial-create-instance.md) requires password hash synchronization to provision users with their corporate credentials in the managed domain. Refer to [implementing password hash synchronization](how-to-connect-password-ha Pass-through Authentication requires unconstrained network access to domain controllers. All network traffic is encrypted and limited to authentication requests. For more information on this process, see the [security deep dive](how-to-connect-pta-security-deep-dive.md) on pass-through authentication. -* **User experience**. To improve users' sign-in experience, use [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md). If you can't join your Windows devices to Azure AD, we recommend deploying seamless SSO with password hash synchronization. Seamless SSO eliminates unnecessary prompts when users are signed in. +* **User experience**. To improve users' sign-in experience, use [Azure AD joined devices](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md). If you can't join your Windows devices to Azure AD, we recommend deploying seamless SSO with password hash synchronization. Seamless SSO eliminates unnecessary prompts when users are signed in. * **Advanced scenarios**. Pass-through Authentication enforces the on-premises account policy at the time of sign-in. 
For example, access is denied when an on-premises user's account state is disabled, locked out, or their [password expires](how-to-connect-pta-faq.yml#what-happens-if-my-user-s-password-has-expired-and-they-try-to-sign-in-by-using-pass-through-authentication-) or the logon attempt falls outside the hours when the user is allowed to sign in. The following diagrams outline the high-level architecture components required f |What are the requirements for on-premises Internet and networking beyond the provisioning system?|None|[Outbound Internet access](how-to-connect-pta-quick-start.md) from the servers running authentication agents|[Inbound Internet access](/windows-server/identity/ad-fs/overview/ad-fs-requirements) to WAP servers in the perimeter<br><br>Inbound network access to AD FS servers from WAP servers in the perimeter<br><br>Network load balancing| |Is there a TLS/SSL certificate requirement?|No|No|Yes| |Is there a health monitoring solution?|Not required|Agent status provided by the [Azure portal](tshoot-connect-pass-through-authentication.md)|[Azure AD Connect Health](how-to-connect-health-adfs.md)|-|Do users get single sign-on to cloud resources from domain-joined devices within the company network?|Yes with [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes with [Azure AD joined devices (AADJ)](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes| -|What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-install-custom.md)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-pta-faq.yml)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices (HAADJ)](../../devices/howto-hybrid-azure-ad-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)| +|Do users get single sign-on to cloud resources from domain-joined devices within the company network?|Yes with [Azure AD joined devices](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes with 
[Azure AD joined devices](../../devices/concept-azure-ad-join.md), [Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md), the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md), or [Seamless SSO](how-to-connect-sso.md)|Yes| +|What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-install-custom.md)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](how-to-connect-sso.md)<br><br>[Alternate login ID](how-to-connect-pta-faq.yml)<br><br>[Azure AD Joined Devices](../../devices/concept-azure-ad-join.md)<br><br>[Hybrid Azure AD joined devices](../../devices/how-to-hybrid-join.md)<br><br>[Certificate and smart card authentication](../../authentication/concept-certificate-based-authentication-smartcard.md)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)| |Is Windows Hello for Business supported?|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>*Both require Windows Server 2016 Domain functional level*|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>[Certificate trust model](/windows/security/identity-protection/hello-for-business/hello-key-trust-adfs)| |What are the multifactor authentication options?|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Third-party MFA](/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs)<br><br>[Custom Controls with Conditional Access*](../../conditional-access/controls.md)| |What user account states are supported?|Disabled accounts<br>(up to 30-minute delay)|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours| |
active-directory | Four Steps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/four-steps.md | Organizations with on-premises Active Directory should extend their directory to The simplest and recommended method to enable cloud authentication for on-premises directory objects in Azure AD is [Password Hash Synchronization](./how-to-connect-password-hash-synchronization.md) (PHS). Alternatively, some organizations may consider enabling [Pass-through Authentication](./how-to-connect-pta-quick-start.md) (PTA). -Whether you choose PHS or PTA, don't forget to consider [SSO](./how-to-connect-sso.md) to allow users to access apps without constantly entering their username and password. SSO can be achieved by using [Hybrid Azure AD joined](../../devices/concept-azure-ad-join-hybrid.md) or [Azure AD joined](../../devices/concept-azure-ad-join.md) devices while keeping access to on-premises resources. For devices that can't be Azure AD joined, [Seamless single sign-on (Seamless SSO)](how-to-connect-sso-quick-start.md) helps provide those capabilities. Without single sign-on, users must remember application-specific passwords and sign into each application. Likewise, IT staff needs to create and update user accounts for each application such as Microsoft 365, Box, and Salesforce. Users need to remember their passwords, plus spend the time to sign into each application. Providing a standardized single sign-on mechanism to the entire enterprise is crucial for best user experience, reduction of risk, ability to report, and governance. +Whether you choose PHS or PTA, don't forget to consider [SSO](./how-to-connect-sso.md) to allow users to access apps without constantly entering their username and password. SSO can be achieved by using [Hybrid Azure AD joined](../../devices/concept-hybrid-join.md) or [Azure AD joined](../../devices/concept-azure-ad-join.md) devices while keeping access to on-premises resources. For devices that can't be Azure AD joined, [Seamless single sign-on (Seamless SSO)](how-to-connect-sso-quick-start.md) helps provide those capabilities. Without single sign-on, users must remember application-specific passwords and sign into each application. Likewise, IT staff needs to create and update user accounts for each application such as Microsoft 365, Box, and Salesforce. Users need to remember their passwords, plus spend the time to sign into each application. Providing a standardized single sign-on mechanism to the entire enterprise is crucial for best user experience, reduction of risk, ability to report, and governance. For organizations already using AD FS or another on-premises authentication provider, moving to Azure AD as your identity provider can reduce complexity and improve availability. Unless you have specific use cases for using federation, we recommend migrating from federated authentication to either PHS or PTA. Doing this you can enjoy the benefits of a reduced on-premises footprint, and the flexibility the cloud offers with improved user experiences. For more information, see [Migrate from federation to password hash synchronization for Azure Active Directory](./migrate-from-federation-to-cloud-authentication.md). |
active-directory | How To Connect Password Hash Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md | The password hash synchronization feature automatically retries failed synchroni The synchronization of a password has no impact on the user who is currently signed in. Your current cloud service session is not immediately affected by a synchronized password change that occurs, while you are signed in, to a cloud service. However, when the cloud service requires you to authenticate again, you need to provide your new password. -A user must enter their corporate credentials a second time to authenticate to Azure AD, regardless of whether they're signed in to their corporate network. This pattern can be minimized, however, if the user selects the Keep me signed in (KMSI) check box at sign-in. This selection sets a session cookie that bypasses authentication for 180 days. KMSI behavior can be enabled or disabled by the Azure AD administrator. In addition, you can reduce password prompts by configuring [Azure AD join](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../../devices/concept-azure-ad-join-hybrid.md), which automatically signs users in when they are on their corporate devices connected to your corporate network. +A user must enter their corporate credentials a second time to authenticate to Azure AD, regardless of whether they're signed in to their corporate network. This pattern can be minimized, however, if the user selects the Keep me signed in (KMSI) check box at sign-in. This selection sets a session cookie that bypasses authentication for 180 days. KMSI behavior can be enabled or disabled by the Azure AD administrator. In addition, you can reduce password prompts by configuring [Azure AD join](../../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../../devices/concept-hybrid-join.md), which automatically signs users in when they are on their corporate devices connected to your corporate network. ### Additional advantages |
active-directory | How To Connect Pta Current Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-current-limitations.md | The following scenarios are _not_ supported: - [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions about the Pass-through Authentication feature. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.-- [Hybrid Azure AD join](../../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.+- [Hybrid Azure AD join](../../devices/how-to-hybrid-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. - [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests. |
active-directory | How To Connect Pta How It Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-how-it-works.md | The following diagram illustrates all the components and the steps involved: - [Frequently Asked Questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security Deep Dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.-- [Hybrid Azure AD join](../../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.    +- [Hybrid Azure AD join](../../devices/how-to-hybrid-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.     - [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests. |
active-directory | How To Connect Pta Quick Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-quick-start.md | Smart Lockout assists in locking out bad actors who are trying to guess your use - [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get technical information on the Pass-through Authentication feature.-- [Hybrid Azure AD join](../../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. +- [Hybrid Azure AD join](../../devices/how-to-hybrid-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. - [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests. |
active-directory | How To Connect Pta | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta.md | This feature is an alternative to [Azure AD Password Hash Synchronization](how-t ![Azure AD Pass-through Authentication](./media/how-to-connect-pta/pta1.png) -You can combine Pass-through Authentication with the [Seamless single sign-on](how-to-connect-sso.md) feature. If you have Windows 10 or later machines, use [Hybrid Azure AD Join (AADJ)](../../devices/howto-hybrid-azure-ad-join.md). This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in. +You can combine Pass-through Authentication with the [Seamless single sign-on](how-to-connect-sso.md) feature. If you have Windows 10 or later machines, use [Hybrid Azure AD Join (AADJ)](../../devices/how-to-hybrid-join.md). This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in. ## Key benefits of using Azure AD Pass-through Authentication You can combine Pass-through Authentication with the [Seamless single sign-on](h - [Quickstart](how-to-connect-pta-quick-start.md) - Get up and running Azure AD Pass-through Authentication. - [Migrate your apps to Azure AD](../../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD. - [Smart Lockout](../../authentication/howto-password-smart-lockout.md) - Configure Smart Lockout capability on your tenant to protect user accounts.-- [Hybrid Azure AD join](../../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. +- [Hybrid Azure AD join](../../devices/how-to-hybrid-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. - [Current limitations](how-to-connect-pta-current-limitations.md) - Learn which scenarios are supported and which ones are not. - [Technical Deep Dive](how-to-connect-pta-how-it-works.md) - Understand how this feature works. - [Frequently Asked Questions](how-to-connect-pta-faq.yml) - Answers to frequently asked questions. |
active-directory | How To Connect Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sso.md | Seamless SSO can be combined with either the [Password Hash Synchronization](how-t ## SSO via primary refresh token vs. Seamless SSO For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via primary refresh token (PRT). For Windows 7 and Windows 8.1, it's recommended to use Seamless SSO.-Seamless SSO needs the user's device to be domain-joined, but it isn't used on Windows 10 [Azure AD joined devices](../../devices/concept-azure-ad-join.md) or [hybrid Azure AD joined devices](../../devices/concept-azure-ad-join-hybrid.md). SSO on Azure AD joined, Hybrid Azure AD joined, and Azure AD registered devices works based on the [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) +Seamless SSO needs the user's device to be domain-joined, but it isn't used on Windows 10 [Azure AD joined devices](../../devices/concept-azure-ad-join.md) or [hybrid Azure AD joined devices](../../devices/concept-hybrid-join.md). SSO on Azure AD joined, Hybrid Azure AD joined, and Azure AD registered devices works based on the [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) SSO via PRT works once devices are registered with Azure AD for hybrid Azure AD joined, Azure AD joined or personal registered devices via Add Work or School Account. For more information on how SSO works with Windows 10 using PRT, see: [Primary Refresh Token (PRT) and Azure AD](../../devices/concept-primary-refresh-token.md) |
active-directory | How To Connect Staged Rollout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-staged-rollout.md | For an overview of the feature, view this "Azure Active Directory: What is Stage For both options, we recommend enabling single sign-on (SSO) to achieve a silent sign-in experience. For Windows 7 or 8.1 domain-joined devices, we recommend using seamless SSO. For more information, see [What is seamless SSO](how-to-connect-sso.md). - For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../../devices/concept-azure-ad-join-hybrid.md) or [personal registered devices](../../devices/concept-azure-ad-register.md) via Add Work or School Account. + For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../../devices/concept-hybrid-join.md) or [personal registered devices](../../devices/concept-azure-ad-register.md) via Add Work or School Account. - You have configured all the appropriate tenant-branding and conditional access policies you need for users who are being migrated to cloud authentication. |
active-directory | How To Upgrade Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-upgrade-previous-version.md | This topic describes the different methods that you can use to upgrade your Azur > It's important that you keep your servers current with the latest releases of Azure AD Connect. We are constantly making upgrades to AADConnect, and these upgrades include fixes to security issues and bugs, as well as serviceability, performance, and scalability improvements. > To see what the latest version is, and to learn what changes have been made between versions, please refer to the [release version history](./reference-connect-version-history.md) -Any versions older than Azure AD Connect 2.x are currently deprecated, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md) for more information. It is currently supported to upgrade from any version of Azure AD Connect to the current version. In-place upgrades of DirSync or ADSync are not supported, and a swing migration is required. If you want to upgrade from DirSync, see [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md) or the [Swing migration](#swing-migration) section. +Any versions older than Azure AD Connect 2.x are currently deprecated, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md) for more information. It's currently supported to upgrade from any version of Azure AD Connect to the current version. In-place upgrades of DirSync or ADSync aren't supported, and a swing migration is required. If you want to upgrade from DirSync, see [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md) or the [Swing migration](#swing-migration) section. -In practice, customers on old versions may encounter problems not directly related to Azure AD Connect. Servers that have been in production for several years typically have had several patches applied to them and not all of these can be accounted for. Customers who have not upgraded in 12-18 months (about 1 and a half years) should consider a swing upgrade instead as this is the most conservative and least risky option. +In practice, customers on old versions may encounter problems not directly related to Azure AD Connect. Servers that have been in production for several years typically have had several patches applied to them and not all of these can be accounted for. Customers who haven't upgraded in 12-18 months (about 1 and a half years) should consider a swing upgrade instead as this is the most conservative and least risky option. There are a few different strategies that you can use to upgrade Azure AD Connect. | Method | Description | Pros | Cons | | | | | | | [Automatic upgrade](how-to-connect-install-automatic-upgrade.md) |This is the easiest method for customers with an express installation |No manual intervention |Auto-upgrade version might not include the latest features |-| [In-place upgrade](#in-place-upgrade) |If you have a single server, you can upgrade the installation in-place on the same server |Does not require another server |If there is an issue while in-place upgrading, you cannot roll-back, and sync will be interrupted | -| [Swing migration](#swing-migration) |With two servers, you can prepare one of the servers with the new release or configuration and change the active server when you are ready |Safest approach and smoother transition to a newer version. 
Supports Windows OS (Operating Systems) upgrade. Sync is not interrupted and does not impose a risk to production |Requires another server| +| [In-place upgrade](#in-place-upgrade) |If you have a single server, you can upgrade the installation in-place on the same server |Doesn't require another server |If there's an issue while in-place upgrading, you can't roll back, and sync will be interrupted | +| [Swing migration](#swing-migration) |With two servers, you can prepare one of the servers with the new release or configuration and change the active server when you are ready |Safest approach and smoother transition to a newer version. Supports Windows OS (Operating Systems) upgrade. Sync is not interrupted and doesn't impose a risk to production |Requires another server| For permissions information, see the [permissions required for an upgrade](reference-connect-accounts-permissions.md#upgrade). For permissions information, see the [permissions required for an upgrade](refer > After you've enabled your new Azure AD Connect server to start synchronizing changes to Azure AD, you must not roll back to using DirSync or Azure AD Sync. Downgrading from Azure AD Connect to legacy clients, including DirSync and Azure AD Sync, is not supported and can lead to issues such as data loss in Azure AD. ## In-place upgrade-An in-place upgrade works for moving from Azure AD Sync or Azure AD Connect. It does not work for moving from DirSync or for a solution with Forefront Identity Manager (FIM) + Azure AD Connector. +An in-place upgrade works for moving from Azure AD Sync or Azure AD Connect. It doesn't work for moving from DirSync or for a solution with Forefront Identity Manager (FIM) + Azure AD Connector. This method is preferred when you have a single server and less than about 100,000 objects. If there are any changes to the out-of-box sync rules, a full import and full synchronization will occur after the upgrade. This method ensures that the new configuration is applied to all existing objects in the system. This run might take a few hours, depending on the number of objects that are in scope of the sync engine. The normal delta synchronization scheduler (which synchronizes every 30 minutes by default) is suspended, but password synchronization continues. You might consider doing the in-place upgrade during the weekend. If there are no changes to the out-of-box configuration with the new Azure AD Connect release, then a normal delta import/sync starts instead. Assembly version in AAD Connector configuration ("X.X.XXX.X") is earlier than th ``` ## Swing migration-For some customers, an in-place upgrade can impose a considerable risk to production in case there's an issue while upgrading and the server can't be rolled back. A single production server might also be impractical as the initial sync cycle might take multiple days, and during this time, no delta changes are processed. The recommended method for these scenarios is to use a swing migration. You can also use this method when you need to upgrade the Windows Server operating system, or you plan to make substantial changes to your environment configuration, which need to be tested before they're pushed to production. -You need (at least) two servers - one active server and one staging server. 
The active server (shown with solid blue lines in the following diagram) is responsible for the active production load. The staging server (shown with dashed purple lines) is prepared with the new release or configuration. When it is fully ready, this server is made active. The previous active server, which now has the outdated version or configuration installed, is made into the staging server and is upgraded. +You need (at least) two servers - one active server and one staging server. The active server (shown with solid blue lines in the following diagram) is responsible for the active production load. The staging server (shown with dashed purple lines) is prepared with the new release or configuration. When it's fully ready, this server is made active. The previous active server, which now has the outdated version or configuration installed, is made into the staging server and is upgraded. -The two servers can use different versions. For example, the active server that you plan to decommission can use Azure AD Sync, and the new staging server can use Azure AD Connect. If you use swing migration to develop a new configuration, it is a good idea to have the same versions on the two servers. +The two servers can use different versions. For example, the active server that you plan to decommission can use Azure AD Sync, and the new staging server can use Azure AD Connect. If you use swing migration to develop a new configuration, it's a good idea to have the same versions on the two servers. ![Diagram of the staging server.](./media/how-to-upgrade-previous-version/stagingserver1.png) The two servers can use different versions. For example, the active server that These steps also work to move from Azure AD Sync or a solution with FIM + Azure AD Connector. These steps don't work for DirSync, but the same swing migration method (also called parallel deployment) with steps for DirSync is in [Upgrade Azure Active Directory sync (DirSync)](how-to-dirsync-upgrade-get-started.md). ### Use a swing migration to upgrade-1. If you only have one Azure AD Connect server, if you are upgrading from AD Sync, or upgrading from an old version, it is a good idea to install the new version on a new Windows Server. If you already have two Azure AD Connect servers, upgrade the staging server first. and promote the staging to active. It is recommended to always keep a pair of active/staging server running the same version, but it is not required. -2. If you have made a custom configuration and your staging server does not have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server). +1. If you only have one Azure AD Connect server, if you are upgrading from AD Sync, or upgrading from an old version, it's a good idea to install the new version on a new Windows Server. If you already have two Azure AD Connect servers, upgrade the staging server first. and promote the staging to active. It's recommended to always keep a pair of active/staging server running the same version, but it's not required. +2. If you have made a custom configuration and your staging server doesn't have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server). 3. Let the sync engine run full import and full synchronization on your staging server. 4. 
Verify that the new configuration did not cause any unexpected changes by using the steps under "Verify" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). If something is not as expected, correct it, run a sync cycle, and verify the data until it looks good. 5. Before upgrading the other server, switch it to staging mode and promote the staging server to be the active server. This is the last step "Switch active server" in the process to [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). 6. Upgrade the server that is now in staging mode to the latest release. Follow the same steps as before to get the data and configuration upgraded. If you upgrade from Azure AD Sync, you can now turn off and decommission your old server. > [!NOTE]-> It's important to fully decommission old Azure AD Connect servers as these may cause synchronization issues, difficult to troubleshoot, when an old sync server is left on the network or is powered up again later by mistake. Such "rogue" servers tend to overwrite Azure AD data with its old information because, they may no longer be able to access on-premises Active Directory (for example, when the computer account is expired, the connector account password has changed, etcetera), but can still connect to Azure AD and cause attribute values to continually revert in every sync cycle (for example, every 30 minutes). To fully decommission an Azure AD Connect server, make sure you completely uninstall the product and its components or permanently delete the server if it is a virtual machine. +> It's important to fully decommission old Azure AD Connect servers as these may cause synchronization issues, difficult to troubleshoot, when an old sync server is left on the network or is powered up again later by mistake. Such "rogue" servers tend to overwrite Azure AD data with its old information because, they may no longer be able to access on-premises Active Directory (for example, when the computer account is expired, the connector account password has changed, etcetera), but can still connect to Azure AD and cause attribute values to continually revert in every sync cycle (for example, every 30 minutes). To fully decommission an Azure AD Connect server, make sure you completely uninstall the product and its components or permanently delete the server if it's a virtual machine. ### Move a custom configuration from the active server to the staging server If you have made configuration changes to the active server, you need to make sure that the same changes are applied to the new staging server. To help with this move, you can use the feature for [exporting and importing synchronization settings](./how-to-connect-import-export-config.md). With this feature you can deploy a new staging server in a few steps, with the exact same settings as another Azure AD Connect server in your network. -For individual custom sync rules that you have created, you can move them by using PowerShell. If you must apply other changes the same way on both systems, and you cannot migrate the changes, then you might have to manually do the following configurations on both servers: ++### Moving individual custom sync rules +For individual custom sync rules that you have created, you can move them by using PowerShell. 
If you must apply other changes the same way on both systems, and you can't migrate the changes, then you might have to manually do the following configurations on both servers: * Connection to the same forests * Any domain and OU filtering To copy custom synchronization rules to another server, do the following: ![Screenshot showing the synchronization rules editor export window.](./media/how-to-upgrade-previous-version/exportrule.png) -3. The Connector GUID (globally-unique identifier) is different on the staging server, and you must change it. To get the GUID, start **Synchronization Rules Editor**, select one of the out-of-box rules that represent the same connected system, and click **Export**. Replace the GUID in your PS1 file with the GUID from the staging server. +3. The Connector GUID (globally unique identifier) is different on the staging server, and you must change it. To get the GUID, start **Synchronization Rules Editor**, select one of the out-of-box rules that represent the same connected system, and click **Export**. Replace the GUID in your PS1 file with the GUID from the staging server. 4. In a PowerShell prompt, run the PS1 file. This creates the custom synchronization rule on the staging server. 5. Repeat this for all your custom rules. ## How to defer full synchronization after upgrade-During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed. For example, connector schema changes require **full import** step and out-of-box synchronization rule changes require **full synchronization** step to be executed on affected connectors. During upgrade, Azure AD Connect determines what synchronization activities are required and records them as *overrides*. In the following synchronization cycle, the synchronization scheduler picks up these overrides and executes them. Once an override is successfully executed, it is removed. +During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed. For example, connector schema changes require **full import** step and out-of-box synchronization rule changes require **full synchronization** step to be executed on affected connectors. During upgrade, Azure AD Connect determines what synchronization activities are required and records them as *overrides*. In the following synchronization cycle, the synchronization scheduler picks up these overrides and executes them. Once an override is successfully executed, it's removed. There may be situations where you do not want these overrides to take place immediately after upgrade. For example, you have numerous synchronized objects, and you would like these synchronization steps to occur after business hours. To remove these overrides: When you upgrade Azure AD Connect from a previous version, you might hit the fol ![Error](./media/how-to-upgrade-previous-version/error1.png) -This error happens because the Azure Active Directory connector with identifier, b891884f-051e-4a83-95af-2544101c9083, does not exist in the current Azure AD Connect configuration. 
To verify this is the case, open a PowerShell window, run Cmdlet `Get-ADSyncConnector -Identifier b891884f-051e-4a83-95af-2544101c9083` +This error happens because the Azure Active Directory connector with identifier, b891884f-051e-4a83-95af-2544101c9083, doesn't exist in the current Azure AD Connect configuration. To verify this is the case, open a PowerShell window, run Cmdlet `Get-ADSyncConnector -Identifier b891884f-051e-4a83-95af-2544101c9083` ``` PS C:\> Get-ADSyncConnector -Identifier b891884f-051e-4a83-95af-2544101c9083 |
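As context for the rule-move and connector-GUID steps recorded above, a minimal PowerShell sketch of the verification on the staging server might look like the following. It assumes the ADSync module installed with Azure AD Connect, and the GUID shown is only the example identifier from the error message above:

```powershell
# Run on the staging server before executing the exported PS1 file.
Import-Module ADSync

# List all connectors so the connector GUID referenced in the exported rule
# can be matched to the corresponding connector on this server.
Get-ADSyncConnector | Select-Object Name, Identifier

# Verify that a specific connector identifier exists. An error here means the GUID
# in the exported PS1 file still points at the other server and must be replaced.
Get-ADSyncConnector -Identifier 'b891884f-051e-4a83-95af-2544101c9083'
```

If the identifier does not resolve, replace the GUID in the exported PS1 file with the one returned on the staging server, as described in the steps above, before running the file.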
active-directory | Howto Troubleshoot Upn Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/howto-troubleshoot-upn-changes.md | Allow enough time for the UPN change to sync to Azure AD. After you verify the n Hybrid Azure AD joined devices are joined to Active Directory and Azure AD. You can implement Hybrid Azure AD join if your environment has an on-premises Active Directory footprint. -Learn more: [Hybrid Azure AD joined devices](../../devices/concept-azure-ad-join-hybrid.md) +Learn more: [Hybrid Azure AD joined devices](../../devices/concept-hybrid-join.md) ### Known issues and resolution |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md | The members in a group are automatically enabled for staged rollout. Nested and The version of SSO that you use is dependent on your device OS and join state. -- **For Windows 10, Windows Server 2016 and later versions**, we recommend using SSO via [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../../devices/concept-azure-ad-join-hybrid.md) and [Azure AD registered devices](../../devices/concept-azure-ad-register.md). +- **For Windows 10, Windows Server 2016 and later versions**, we recommend using SSO via [Primary Refresh Token (PRT)](../../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../../devices/concept-hybrid-join.md) and [Azure AD registered devices](../../devices/concept-azure-ad-register.md). - **For macOS and iOS devices**, we recommend using SSO via the [Microsoft Enterprise SSO plug-in for Apple devices](../../develop/apple-sso-plugin.md). This feature requires that your Apple devices are managed by an MDM. If you use Intune as your MDM then follow the [Microsoft Enterprise SSO plug-in for Apple Intune deployment guide](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-macos). If you use another MDM then follow the [Jamf Pro / generic MDM deployment guide](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-macos-with-jamf-pro). |
active-directory | Plan Connect Performance Factors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-connect-performance-factors.md | The size of your source Active Directory topology will influence your SQL databa - Organizations with more than 100,000 users can reduce network latencies by colocating SQL database and the provisioning engine on the same server.+- SQL Named Pipes protocol is not supported as it introduces significant delays in the sync cycle and should be disabled in the SQL Server Configuration Manager under SQL Native Clients and SQL Server Network. Please note that changing Named Pipes configuration only takes effect after restarting database and ADSync services. - Due to the high disk input and output (I/O) requirements of the sync process, use Solid State Drives (SSD) for the SQL database of the provisioning engine for optimal results, if not possible, consider RAID 0 or RAID 1 configurations. - Don’t do a full sync preemptively; it causes unnecessary churn and slower response times. |
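The Named Pipes guidance above ends with a service-restart requirement; a hedged PowerShell sketch of that step could look like this. The service names are assumptions: 'MSSQLSERVER' applies to a default SQL Server instance, and a named instance uses a different service name.

```powershell
# After disabling Named Pipes in SQL Server Configuration Manager,
# restart the database engine and the Azure AD Connect sync service
# so the protocol change takes effect.
Restart-Service -Name 'MSSQLSERVER' -Force   # default instance; adjust for a named instance
Restart-Service -Name 'ADSync'
```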
active-directory | Reference Connect Sync Attributes Synchronized | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-sync-attributes-synchronized.md | In this case, start with the list of attributes in this topic and identify those | msExchRecipientTypeDetails |X |X |X | | | msExchRemoteRecipientType |X | | | | | msExchRequireAuthToSendTo |X |X |X | |-| msExchResourceCapacity |X | | | | +| msExchResourceCapacity |X| | |This attribute is currently not consumed by Exchange Online. | | msExchResourceDisplay |X | | | | | msExchResourceMetaData |X | | | | | msExchResourceSearchProperties |X | | | | In this case, start with the list of attributes in this topic and identify those | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password sync and federation. | | reportToOriginator | | |X | | | reportToOwner | | |X | |+| securityEnabled | | |X | | | sn |X |X | | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | st |X |X | | | In this case, start with the list of attributes in this topic and identify those | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password hash sync, pass-through authentication and federation. | | reportToOriginator | | |X | | | reportToOwner | | |X | |+| securityEnabled | | |X | | | sn |X |X | | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | st |X |X | | | In this case, start with the list of attributes in this topic and identify those | preferredLanguage |X | | | | | proxyAddresses |X |X |X | | | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password hash sync, pass-through authentication and federation. |+| securityEnabled | | |X | | | sn |X |X | | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | st |X |X | | | In this case, start with the list of attributes in this topic and identify those | objectSID |X | |X |mechanical property. AD user identifier used to maintain sync between Azure AD and AD. | | proxyAddresses |X |X |X |mechanical property. Used by Azure AD. Contains all secondary email addresses for the user. | | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. |+| securityEnabled | | |X | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | usageLocation |X | | |mechanical property. The user’s country/region. Used for license assignment. | | userPrincipalName |X | | |This UPN is the login ID for the user. Most often the same as [mail] value. | In this case, start with the list of attributes in this topic and identify those | objectSID |X | |X |mechanical property. AD user identifier used to maintain sync between Azure AD and AD. | | proxyAddresses |X |X |X | | | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password hash sync, pass-through authentication and federation. |+| securityEnabled | | |X | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | usageLocation |X | | |mechanical property. The user’s country/region. 
Used for license assignment. | | userPrincipalName |X | | |UPN is the login ID for the user. Most often the same as [mail] value. | In this case, start with the list of attributes in this topic and identify those | postalCode |X |X | | | | preferredLanguage |X | | | | | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password hash sync, pass-through authentication and federation. |+| securityEnabled | | |X | | | sn |X |X | | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | st |X |X | | | This group is a set of attributes that can be used if the Azure AD directory is | objectSID |X | | |mechanical property. AD user identifier used to maintain sync between Azure AD and AD. | | proxyAddresses |X |X |X | | | pwdLastSet |X | | |mechanical property. Used to know when to invalidate already issued tokens. Used by both password hash sync, pass-through authentication and federation. |+| securityEnabled | | |X | | | sn |X |X | | | | sourceAnchor |X |X |X |mechanical property. Immutable identifier to maintain relationship between ADDS and Azure AD. | | usageLocation |X | | |mechanical property. The user’s country/region. Used for license assignment. | |
active-directory | Datawiza Configure Sha | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-configure-sha.md | -In this tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md). [Datawiza Access Proxy (DAP)](https://www.datawiza.com) extends Azure AD to enable single sign-on (SSO) and provide access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP. With this solution, enterprises can transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can use Datawiza as a no-code, or low-code, solution to integrate new applications to Azure AD. This approach enables enterprises to implement their Zero Trust strategy while saving engineering time and reducing costs. +In this tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for [hybrid access](../devices/concept-hybrid-join.md). [Datawiza Access Proxy (DAP)](https://www.datawiza.com) extends Azure AD to enable single sign-on (SSO) and provide access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP. With this solution, enterprises can transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can use Datawiza as a no-code, or low-code, solution to integrate new applications to Azure AD. This approach enables enterprises to implement their Zero Trust strategy while saving engineering time and reducing costs. Learn more: [Zero Trust security](../../security/fundamentals/zero-trust.md) |
active-directory | Migrate Okta Sign On Policies Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-conditional-access.md | To enable hybrid Azure AD join on your Azure AD Connect server, run the configur >[!NOTE] >Hybrid Azure AD join isn't supported with the Azure AD Connect cloud provisioning agents. -1. [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md). +1. [Configure hybrid Azure AD join](../devices/how-to-hybrid-join.md). 2. On the **SCP configuration** page, select the **Authentication Service** dropdown. ![Screenshot of the Authentication Service dropdown on the Microsoft Azure Active Directory Connect dialog.](media/migrate-okta-sign-on-policies-conditional-access/scp-configuration.png) |
active-directory | Powershell Export All App Registrations Secrets And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-app-registrations-secrets-and-certs.md | |
active-directory | Powershell Export All Enterprise Apps Secrets And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md | |
active-directory | Powershell Export Apps With Expiring Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expiring-secrets.md | |
active-directory | Powershell Export Apps With Secrets Beyond Required | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md | |
active-directory | Silverfort Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-integration.md | -Learn more: [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md). +Learn more: [Hybrid Azure AD joined devices](../devices/concept-hybrid-join.md). Silverfort connects assets with Azure AD. These bridged assets appear as regular applications in Azure AD and can be protected with [Conditional Access](../conditional-access/overview.md), single-sign-on (SSO), multi-factor authentication (MFA), auditing and more. Use Silverfort to connect assets including: |
active-directory | V2 Howto App Gallery Listing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md | When your application is added to the gallery, documentation is created that exp ## Submit your application -After you've tested that your application works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal you are presented with one of two screens. +After you've tested that your application works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal you're presented with one of two screens. -- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.+- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team then adds the account in the Microsoft Application Network portal. - If you see a "Request Access" page, then fill in the business justification and select **Request Access**. After your account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the "Your sign-in was blocked" error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md). To escalate issues of any kind, send an email to the [Azure AD SSO Integration T ## Update or Remove the application from the Gallery -You can submit your application update request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign into the portal you are presented with one of two screens. +You can submit your application update request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign into the portal you're presented with one of two screens. -- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.+- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team then adds the account in the Microsoft Application Network portal. 
- If you see a "Request Access" page, then fill in the business justification and select **Request Access**. |
active-directory | Managed Identities Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md | The following Azure services support managed identities for Azure resources: | Azure Container Apps | [Managed identities in Azure Container Apps](../../container-apps/managed-identity.md) | | Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) |-| Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md) | +| Azure AI services | [Configure customer-managed keys with Azure Key Vault for Azure AI services](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md) | | Azure Data Box | [Use customer-managed keys in Azure Key Vault for Azure Data Box](../../databox/data-box-customer-managed-encryption-key-portal.md) | | Azure Data Explorer | [Configure managed identities for your Azure Data Explorer cluster](/azure/data-explorer/configure-managed-identities-cluster?tabs=portal) | | Azure Data Factory | [Managed identity for Data Factory](../../data-factory/data-factory-service-identity.md) | |
active-directory | Services Azure Active Directory Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md | The following services support Azure AD authentication. New services are added t | Azure App Services | [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md) | | Azure Batch | [Authenticate Batch service solutions with Active Directory](../../batch/batch-aad-auth.md) | | Azure Container Registry | [Authenticate with an Azure container registry](../../container-registry/container-registry-authentication.md) |-| Azure Cognitive Services | [Authenticate requests to Azure Cognitive Services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) | +| Azure AI services | [Authenticate requests to Azure AI services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) | | Azure Communication Services | [Authenticate to Azure Communication Services](../../communication-services/concepts/authentication.md) | | Azure Cosmos DB | [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../../cosmos-db/how-to-setup-rbac.md) | | Azure Databricks | [Authenticate using Azure Active Directory tokens](/azure/databricks/dev-tools/api/latest/aad/) |
active-directory | Concept Activity Logs Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md | Title: Azure Active Directory activity logs in Azure Monitor -description: Introduction to Azure Active Directory activity logs in Azure Monitor + Title: Azure Active Directory activity log integration options +description: Introduction to the options for integrating Azure Active Directory activity logs with storage and analysis tools. -# Azure AD activity logs in Azure Monitor +# Azure AD activity log integrations -Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long term retention and data insights. This feature allows you to: +Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs. -* Archive Azure AD activity logs to an Azure storage account. -* Stream Azure AD activity logs to an Azure event hub for analytics, using popular Security Information and Event Management (SIEM) tools such as Splunk, QRadar, and Microsoft Sentinel. -* Integrate Azure AD activity logs with your own custom log solutions by streaming them to an event hub. -* Send Azure AD activity logs to Azure Monitor to enable rich visualizations, monitoring, and alerting on the connected data. --> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8] +With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered. ## Supported reports -You can route Azure AD audit logs and sign-in logs to your Azure Storage account, an event hub, Azure Monitor, or a custom solution. --* **Audit logs**: The [audit logs activity report](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. -* **Sign-in logs**: With the [sign-in activity report](concept-sign-ins.md), you can determine who performed the tasks that are reported in the audit logs. -* **Provisioning logs**: With the [provisioning logs](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. -* **Risky users logs**: With the [risky users logs](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users), you can monitor changes in user risk level and remediation activity. -* **Risk detections logs**: With the [risk detections logs](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor user's risk detections and analyze trends in risk activity detected in your organization. --## Getting started --To use this feature, you need the appropriate license and roles. --* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). -* Azure AD Free, Basic, Premium 1, or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). 
You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. -* Azure AD Premium 1, or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), to access the Azure AD sign-in logs in the Azure portal. -* **Global Administrator** or **Security Administrator** access for the Azure AD tenant. --Depending on where you want to route the audit log data, you also need one of the following endpoints: --* An **[Azure Log Analytics workspace](tutorial-log-analytics-wizard.md)** to send Azure AD logs to Azure Monitor. -* An **[Azure storage account](../../storage/common/storage-account-create.md)** that you have `ListKeys` permissions for. - - We recommend that you use a general storage account and not a Blob storage account. - - For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). -* An **[Azure Event Hubs namespace](../../event-hubs/event-hubs-create.md)** to integrate with third-party solutions. --Once you have your endpoint established, go to **Azure AD** and then **Diagnostic settings.** From here, you can choose what logs to send to the endpoint of your choice. For more information, see the **Create diagnostic settings** section of the [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md#create-diagnostic-settings) article. --## Cost considerations --If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources. These resources could include the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size. --Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). --### Storage size for activity logs --Every audit log event uses about 2 KB of data storage. Sign-in event logs are about 4 KB of data storage. For a tenant with 100,000 users, which would incur about 1.5 million events per day, you would need about 3 GB of data storage per day. Because writes occur in approximately five-minute batches, you can anticipate around 9,000 write operations per month. +The following logs can be integrated with one of many endpoints: -The following table contains a cost estimate of, depending on the size of the tenant, a general-purpose v2 storage account in West US for at least one year of retention. To create a more accurate estimate for the data volume that you anticipate for your application, use the [Azure storage pricing calculator](https://azure.microsoft.com/pricing/details/storage/blobs/). +* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. +* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors. 
+* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. +* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) helps you monitor changes in user risk level and remediation activity. +* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor user's risk detections and analyze trends in risk activity detected in your organization. +## Integration options -| Log category | Number of users | Events per day | Volume of data per month (est.) | Cost per month (est.) | Cost per year (est.) | -|--|--|-|--|-|| -| Audit | 100,000 | 1.5 million | 90 GB | $1.93 | $23.12 | -| Audit | 1,000 | 15,000 | 900 MB | $0.02 | $0.24 | -| Sign-ins | 1,000 | 34,800 | 4 GB | $0.13 | $1.56 | -| Sign-ins | 100,000 | 15 million | 1.7 TB | $35.41 | $424.92 | - +To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories: -If you want to know for how long the activity data is stored in a Premium tenant, see: [How long does Azure AD store the data?](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data) +- Troubleshooting +- Long-term storage +- Analysis and monitoring -### Event Hubs messages for activity logs +### Troubleshooting -Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in the Event Hubs has a maximum size of 256 KB. If the total size of all the messages within the timeframe exceeds that volume, multiple messages are sent. +If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed. -For example, about 18 events per second ordinarily occur for a large tenant of more than 100,000 users, a rate that equates to 5,400 events every five minutes. Audit logs are about 2 KB per event, which equates to 10.8 MB of data. Therefore, 43 messages are sent to the event hub in that five-minute interval. +If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options. -The following table contains estimated costs per month for a basic event hub in West US. The volume of event data can vary from tenant to tenant, based on factors like user sign-in behavior. To calculate an accurate estimate of the data volume that you anticipate for your application, use the [Event Hubs pricing calculator](https://azure.microsoft.com/pricing/details/event-hubs/). +### Long-term storage -| Log category | Number of users | Events per second | Events per five-minute interval | Volume per interval | Messages per interval | Messages per month | Cost per month (est.) 
| -|--|--|-|-||||-| -| Audit | 100,000 | 18 | 5,400 | 10.8 MB | 43 | 371,520 | $10.83 | -| Audit | 1,000 | 0.1 | 52 | 104 KB | 1 | 8,640 | $10.80 | -| Sign-ins | 100,000 | 18000 | 5,400,000 | 10.8 GB | 42188 | 364,504,320 | $23.9 | -| Sign-ins | 1,000 | 178 | 53,400 | 106.8 MB | 418 | 3,611,520 | $11.06 | +If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal of you don't plan on querying that data often. -### Azure Monitor logs cost considerations +If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options. -| Log category | Number of users | Events per day | Events per month (30 days) | Cost per month in USD (est.) | -|:-|--|--|--|-:| -| Audit and Sign-ins | 100,000 | 16,500,000 | 495,000,000 | $1093.00 | -| Audit | 100,000 | 1,500,000 | 45,000,000 | $246.66 | -| Sign-ins | 100,000 | 15,000,000 | 450,000,000 | $847.28 | +### Analysis and monitoring -To review costs related to managing the Azure Monitor logs, see [Azure Monitor Logs pricing details](../../azure-monitor/logs/cost-logs.md). +If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring. -## Frequently asked questions +If you have a third party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools. -This section answers frequently asked questions and discusses known issues with Azure AD logs in Azure Monitor. +If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In Addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub. -**Q: Which logs are included?** --**A**: The sign-in activity logs and audit logs are both available for routing through this feature, although B2C-related audit events are currently not included. To find out which types of logs and which feature-based logs are currently supported, see [Audit log schema](./overview-reports.md) and [Sign-in log schema](reference-azure-monitor-sign-ins-log-schema.md). ----**Q: What happens if an Administrator changes the retention period of a diagnostic setting?** +## Cost considerations -**A**: The new retention policy will be applied to logs collected after the change. Logs collected before the policy change will be unaffected. +There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day. -+Because the size and cost for sending logs to an endpoint is difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for day or two. With this snapshot, you can get an accurate prediction for your expected costs. 
You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day. -**Q: How much will it cost to store my data?** +Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles: -**A**: The storage costs depend on both the size of your logs and the retention period you choose. For a list of the estimated costs for tenants, which depend on the volume of logs generated, see the [Storage size for activity logs](#storage-size-for-activity-logs) section. +- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md) +- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md) +- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md) -+Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). -**Q: How much will it cost to stream my data to an event hub?** +## Estimate your costs -**A**: The streaming costs depend on the number of messages you receive per minute. This article discusses how the costs are calculated and lists cost estimates, which are based on the number of messages. +To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint. -+The following factors could affect costs for your organization: -**Q: How do I integrate Azure AD activity logs with my SIEM tools?** +- Audit log events use around 2 KB of data storage +- Sign-in log events use on average 11.5 KB of data storage +- A tenant of about 100,000 users could incur about 1.5 million events per day +- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame -**A**: You can do integrate with your SIEM tools in two ways: +### Daily log size -- Use Azure Monitor with Event Hubs to stream logs to your SIEM tool. First, [stream the logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) and then [set up your SIEM tool](tutorial-azure-monitor-stream-logs-to-event-hub.md#access-data-from-your-event-hub) with the configured event hub. +To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). -- Use the [Reporting Graph API](concept-reporting-api.md) to access the data, and push it into the SIEM system using your own scripts.+If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start: -+- 1000 records +- For large tenants, 15 minutes of sign-ins +- For small to medium tenants, 1 hour of sign-ins -**Q: What SIEM tools are currently supported?** +You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. 
Adjust your sample size and when you capture the sample accordingly. -**A**: Currently, Azure Monitor is supported by [Splunk](./howto-integrate-activity-logs-with-splunk.md), IBM QRadar, [Sumo Logic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure/), [ArcSight](./howto-integrate-activity-logs-with-arcsight.md), LogRhythm, and Logz.io. For more information about how the connectors work, see [Stream Azure monitoring data to an event hub for consumption by an external tool](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md). +With the data sample captured, multiply accordingly to find out how large the file would be for one day. -+### Estimate the daily cost -**Q: How do I integrate Azure AD activity logs with my Splunk instance?** +To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase. -**A**: First, [route the Azure AD activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md), then follow the steps to [Integrate activity logs with Splunk](./howto-integrate-activity-logs-with-splunk.md). +To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a devoted resource group makes it easy to view the cost analysis and then delete it when you're done. -+With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts. -**Q: How do I integrate Azure AD activity logs with Sumo Logic?** +![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png) -**A**: First, [route the Azure AD activity logs to an event hub](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory), then follow the steps to [Install the Azure AD application and view the dashboards in SumoLogic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards). +Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost. -+## Calculate estimated costs -**Q: Can I access the data from an event hub without using an external SIEM tool?** +From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products. -**A**: Yes. To access the logs from your custom application, you can use the [Event Hubs API](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). 
+- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/) +- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/) +- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/) +- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) -+Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices. +![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png) ## Next steps |
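To complement the cost-estimation guidance captured above, the following is a hedged sketch of measuring how much Azure AD log data is already landing in a Log Analytics workspace, so the result can be fed into the Azure pricing calculator. It assumes the Az PowerShell modules and an existing workspace; the workspace ID is a placeholder, and the Usage table reports Quantity in MB.

```powershell
# Approximate the daily Azure AD log volume (in GB) over the last week.
Connect-AzAccount | Out-Null

$workspaceId = '00000000-0000-0000-0000-000000000000'   # placeholder workspace ID
$query = @'
Usage
| where TimeGenerated > ago(7d)
| where DataType in ("SigninLogs", "AuditLogs")
| summarize GBperDay = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d), DataType
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results |
    Format-Table TimeGenerated, DataType, GBperDay
```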
active-directory | Howto Access Activity Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md | -This article shows you how to access the Azure AD activity logs and provides common use cases for accessing Azure AD logs data, including recommendations for the right access method. The article also describes related reports that use the data contained in the activity logs. +You can access Azure AD activity logs and reports using the following methods: -## Prerequisites -Viewing audit logs is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. To access the sign-ins activity logs, your tenant must have an Azure AD Premium license associated with it. --The following roles provide read access to audit and sign-in logs. Always use the least privileged role, according to [Microsoft Zero Trust guidance](/security/zero-trust/zero-trust-overview). --- Reports Reader-- Security Reader-- Security Administrator-- Global Reader (sign-in logs only)-- Global Administrator+- [Stream activity logs to an **event hub** to integrate with other tools](#stream-logs-to-an-event-hub-to-integrate-with-siem-tools) +- [Access activity logs through the **Microsoft Graph API**](#access-logs-with-microsoft-graph-api) +- [Integrate activity logs with **Azure Monitor logs**](#integrate-logs-with-azure-monitor-logs) +- [Monitor activity in real-time with **Microsoft Sentinel**](#monitor-events-with-microsoft-sentinel) +- [View activity logs and reports in the **Azure portal**](#view-logs-through-the-portal) +- [Export activity logs for **storage and queries**](#export-logs-for-storage-and-queries) -## Access the activity logs in the portal +Each of these methods provides you with capabilities that may align with certain scenarios. This article describes those scenarios, including recommendations and details about related reports that use the data in the activity logs. Explore the options in this article to learn about those scenarios so you can choose the right method. [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. -1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs**. -1. Adjust the filter according to your needs. - - For more information on the filter options for audit logs, see [Azure AD audit log categories and activities](reference-audit-activities.md). - - For more information on the sign-in logs, see [Basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md). +## Prerequisites -## Logs and reports that use activity log data +The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). -The data captured in the Azure AD activity logs are used in many reports and services. You can review the sign-in logs, audit logs, and provisioning logs for specific scenarios or use the reports to look at patterns and trends. For example the sign-in logs are helpful when researching a user's sign-in activity or to track an application's usage. 
If you want to see trends or see how your policies impact the data, you can start with the Azure AD Identity Protection reports or use **Diagnostic settings** to [send your data to **Azure Monitor**](howto-integrate-activity-logs-with-log-analytics.md) for further analysis. +| Log / Report | Roles | Licenses | +|--|--|--| +| Audit | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Sign-ins | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Provisioning | Same as audit and sign-ins, plus<br>Security Operator<br>Application Administrator<br>Cloud App Administrator<br>A custom role with `provisioningLogs` permission | Premium P1/P2 | +| Usage and insights | Security Reader<br>Reports Reader<br> Security Administrator | Premium P1/P2 | +| Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 | -### Audit logs +*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements). -The audit logs capture a wide variety of data. Some examples of the types of activity captured in the logs are included in the following list. New audit information is added periodically, so this list is not exhaustive. +Audit logs are available for features that you've licensed. To access the sign-ins logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it. -* Password reset and registration activity -* Self-service groups activity -* Microsoft 365 Group name changes -* Account provisioning activity and errors -* Privileged Identity Management activity -* Device registration and compliance activity +## Stream logs to an event hub to integrate with SIEM tools -### Anomalous activity reports +Streaming your activity logs to an event hub is required to integrate your activity logs with Security Information and Event Management (SIEM) tools, such as Splunk and SumoLogic. Before you can stream logs to an event hub, you need to [set up an Event Hubs namespace and an event hub](../../event-hubs/event-hubs-create.md) in your Azure subscription. -Anomalous activity reports provide information on security-related risk detections that Azure AD can detect and report on. +### Recommended uses -The following table lists the Azure AD anomalous activity security reports, and corresponding risk detection types in the Azure portal. For more information, see -[Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md). +The SIEM tools you can integrate with your event hub can provide analysis and monitoring capabilities. If you're already using these tools to ingest data from other sources, you can stream your identity data for more comprehensive analysis and monitoring. 
We recommend streaming your activity logs to an event hub for the following types of scenarios: -| Azure AD anomalous activity report | Identity protection risk detection type| -| : | : | -| Users with leaked credentials | Leaked credentials | -| Irregular sign-in activity | Impossible travel to atypical locations | -| Sign-ins from possibly infected devices | Sign-ins from infected devices| -| Sign-ins from unknown sources | Sign-ins from anonymous IP addresses | -| Sign-ins from IP addresses with suspicious activity | Sign-ins from IP addresses with suspicious activity | -| - | Sign-ins from unfamiliar locations | +- If you need a big data streaming platform and event ingestion service to receive and process millions of events per second. +- If you're looking to transform and store data by using a real-time analytics provider or batching/storage adapters. -The following Azure AD anomalous activity security reports are not included as risk detections in the Azure portal: +### Quick steps -* Sign-ins after multiple failures -* Sign-ins from multiple geographies +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. Create an Event Hubs namespace and event hub. +1. Go to **Azure AD** > **Diagnostic settings**. +1. Choose the logs you want to stream, select the **Stream to an event hub** option, and complete the fields. + - [Set up an Event Hubs namespace and an event hub](../../event-hubs/event-hubs-create.md) + - [Learn more about streaming activity logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) ++ Your independent security vendor should provide you with instructions on how to ingest data from Azure Event Hubs into their tool. -### Risk detection and Azure AD Identity Protection +## Access logs with Microsoft Graph API -You can access reports about risk detections in **[Azure AD Identity Protection](https://portal.azure.com/#view/Microsoft_AAD_IAM/IdentityProtectionMenuBlade/~/Overview)**. With this service you can protect users by reviewing existing user and sign-in risk policies. You can also analyze current activity with the following reports: +The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance. -- Risky users-- Risky workload identities-- Risky sign-ins-- Risk detections+### Recommended uses -For more information, see [What is Identity Protection?](../identity-protection/overview-identity-protection.md) +Using Microsoft Graph explorer, you can run queries to help you with the following types of scenarios: -## Investigate a single sign-in +- View tenant activities such as who made a change to a group and when. +- Mark an Azure AD sign-in event as safe or confirmed compromised. +- Retrieve a list of application sign-ins for the last 30 days. -Investigating a single sign-in includes scenarios, in which you need to: +### Quick steps -- Do a quick investigation of a single user over a limited scope. For example, a user had trouble signing in during a period of a few hours. +1. [Configure the prerequisites](howto-configure-prerequisites-for-reporting-api.md). +1. Sign in to [Graph Explorer](https://aka.ms/ge). +1. 
Set the HTTP method and API version. +1. Add a query then select the **Run query** button. + - [Familiarize yourself with the Microsoft Graph properties for directory audits](/graph/api/resources/directoryaudit) + - [Complete the MS Graph Quickstart guide](quickstart-access-log-with-graph-api.md) + +## Integrate logs with Azure Monitor logs -- Quickly look through a set of related events. For example, comparing device details from a series of sign-ins from the same user. +With the Azure Monitor logs integration, you can enable rich visualizations, monitoring, and alerting on the connected data. Log Analytics provides enhanced query and analysis capabilities for Azure AD activity logs. To integrate Azure AD activity logs with Azure Monitor logs, you need a Log Analytics workspace. From there, you can run queries through Log Analytics. -### Recommendation +### Recommended uses -For these one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. The related user interface provides you with filter options enabling you to find the entries you need to solve your scenario. +Integrating Azure AD logs with Azure Monitor logs provides a centralized location for querying logs. We recommend integrating logs with Azure Monitor logs for the following types of scenarios: -Check out the following resources: -- [Sign-in logs in Azure Active Directory (preview)](concept-all-sign-ins.md)-- [Sign-in logs in Azure Active Directory](concept-sign-ins.md)-- [Analyze sign-ins with the Azure AD sign-ins log](quickstart-analyze-sign-in.md)+- Compare Azure AD sign-in logs with logs published by other Azure services. +- Correlate sign-in logs against Azure Application insights. +- Query logs using specific search parameters. -## Access from code +### Quick steps -There are cases where you need to periodically access activity logs from an app or a script. +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. [Create a Log Analytics workspace](../../azure-monitor/learn/quick-create-workspace.md). +1. Go to **Azure AD** > **Diagnostic settings**. +1. Choose the logs you want to stream, select the **Send to Log Analytics workspace** option, and complete the fields. +1. Go to **Azure AD** > **Log Analytics** and begin querying the data. + - [Integrate Azure AD logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) + - [Learn how to query using Log Analytics](howto-analyze-activity-logs-log-analytics.md) -### Recommendation +## Monitor events with Microsoft Sentinel -The right access method for accessing activity logs from code depends on the scope of your project. Two options you have are to access your activity logs from the [Microsoft Graph API](quickstart-access-log-with-graph-api.md) or to send your logs to [Azure Event Hubs](../../event-hubs/event-hubs-about.md). +Sending sign-in and audit logs to Microsoft Sentinel provides your security operations center with near real-time security detection and threat hunting. The term *threat hunting* refers to a proactive approach to improve the security posture of your environment. As opposed to classic protection, threat hunting tries to proactively identify potential threats that might harm your system. Your activity log data might be part of your threat hunting solution. 
+### Recommended uses -The **Microsoft Graph API**: +We recommend using the real-time security detection capabilities of Microsoft Sentinel if your organization needs security analytics and threat intelligence. Use Microsoft Sentinel if you need to: -- Provides a RESTful way to query sign-in data from Azure AD in Azure AD Premium tenants.-- Doesn't require an administrator or developer to set up additional infrastructure to support your script or app. -- Is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API leads to issues with pagination and performance. +- Collect security data across your enterprise. +- Detect threats with vast threat intelligence. +- Investigate critical incidents guided by AI. +- Respond rapidly and automate protection. -**Azure Event Hubs**: +### Quick steps -- Is a big data streaming platform and event ingestion service.-- Can receive and process millions of events per second.-- Transforms and stores data by using any real-time analytics provider or batching/storage adapters.+1. Learn about the [prerequisites](../../sentinel/prerequisites.md), [roles and permissions](../../sentinel/roles.md). +1. [Estimate potential costs](../../sentinel/billing.md). +1. [Onboard to Microsoft Sentinel](../../sentinel/quickstart-onboard.md). +1. [Collect Azure AD data](../../sentinel/connect-azure-active-directory.md). +1. [Begin hunting for threats](../../sentinel/hunting.md). -You can use: -- The **Microsoft Graph API** for scoped queries (a limited set of users or time).-- **Azure Event Hubs** for pulling large sets of sign-in data.+## View logs through the Portal -## Near real-time security event detection and threat hunting +For one-off investigations with a limited scope, the [Azure portal](https://portal.azure.com) is often the easiest way to find the data you need. The user interface for each of these reports provides you with filter options enabling you to find the entries you need to solve your scenario. -To detect and stop threats before they can cause harm to your environment, you might have a security solution deployed in your environment that can process activity log data in real-time. +The data captured in the Azure AD activity logs are used in many reports and services. You can review the sign-in, audit, and provisioning logs for one-off scenarios or use reports to look at patterns and trends. The data from the activity logs help populate the Identity Protection reports, which provide information security related risk detections that Azure AD can detect and report on. Azure AD activity logs also populate Usage and insights reports, which provide usage details for your tenant's applications. -The term *threat hunting* refers to a proactive approach to improve the security posture of your environment. As opposed to classic protection, thread hunting tries to proactively identify potential threats that might harm your system. Your activity log data might be part of your threat hunting solution. +### Recommended uses -### Recommendation +The reports available in the Azure portal provide a wide range of capabilities to monitor activities and usage in your tenant. The following list of uses and scenarios isn't exhaustive, so explore the reports for your needs. -For real-time security detection, use [Microsoft Sentinel](../../sentinel/overview.md), or [Azure Event Hubs](../../event-hubs/event-hubs-about.md). +- Research a user's sign-in activity or track an application's usage. 
+- Review details around group name changes, device registration, and password resets with audit logs. +- Use the Identity Protection reports for monitoring at risk users, risky workload identities, and risky sign-ins. +- To ensure that your users can access the applications in use in your tenant, you can review the sign-in success rate in the Azure AD application activity (preview) report from Usage and insights. +- Compare the different authentication methods your users prefer with the Authentication methods report from Usage and insights. -You can use: +### Quick steps -- **Microsoft Sentinel** to provide sign-in and audit data to your security operations center for a near real-time security detection. You can stream data to Azure Sentinel with the built-in Azure AD to Azure Sentinel connector. For more information, see [connect Azure Active Directory data to Azure Sentinel](../../sentinel/connect-azure-active-directory.md). +Use the following basic steps to access the reports in the Azure portal. +#### Azure AD activity logs -- **Azure Event Hubs** if your security operations center uses another tool. You can stream Azure AD events using an Azure Event Hubs. For more information, see [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md). - - Your independent security vendor should provide you with instructions on how to ingest data from Azure Event Hubs into their tool. You can find instructions for some commonly used SIEM tools in the Azure AD reporting documentation: +1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu. +1. Adjust the filter according to your needs. + - [Learn how to filter activity logs](quickstart-filter-audit-log.md) + - [Explore the Azure AD audit log categories and activities](reference-audit-activities.md) + - [Learn about basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md) -- [ArcSight](howto-integrate-activity-logs-with-arcsight.md)-- [Splunk](howto-integrate-activity-logs-with-splunk.md) -- [SumoLogic](howto-integrate-activity-logs-with-sumologic.md) +#### Azure AD Identity Protection reports -## Export data for long term storage +1. Go to **Azure AD** > **Security** > **Identity Protection**. +1. Explore the available reports. + - [Learn more about Identity Protection](../identity-protection/overview-identity-protection.md) + - [Learn how to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md) -Azure AD stores your log data only for a limited amount of time. For more information, see [How long does Azure AD store reporting data](reference-reports-data-retention.md). +#### Usage and insights reports -If you need to store your log information for a longer period due to compliance or security reasons, you have a few options. +1. Go to **Azure AD** and select **Usage and insights** from the **Monitoring** menu. +1. Explore the available reports. + - [Learn more about the Usage and insights report](concept-usage-insights-report.md) -### Recommendation +## Export logs for storage and queries -The right solution for your long term storage depends on your budget and what you plan on doing with the data. +The right solution for your long-term storage depends on your budget and what you plan on doing with the data. You've got three options: -If your budget is tight, and you need cheap method to create a long-term backup of your activity logs, you can do a manual download. 
The user interface of the activity logs provides you with an option to download the data as **JSON** or **CSV**. For more information, see [how to download logs in Azure Active Directory](howto-download-logs.md). --One trade-off of the manual download is that it requires a lot of manual interaction. If you are looking for a more professional solution, use either Azure Storage or Azure Monitor. +- Archive logs to Azure Storage +- Download logs for manual storage +- Integrate logs with Azure Monitor logs -[Azure Storage](../../storage/common/storage-introduction.md) is the right solution for you if you aren't planning on querying your data often. For more information, see [archive directory logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). +[Azure Storage](../../storage/common/storage-introduction.md) is the right solution if you aren't planning on querying your data often. For more information, see [Archive directory logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). ++If you plan to query the logs often to run reports or perform analysis on the stored logs, you should [integrate your data with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md). ++If your budget is tight, and you need a cheap method to create a long-term backup of your activity logs, you can [manually download your logs](howto-download-logs.md). The user interface of the activity logs in the portal provides you with an option to download the data as **JSON** or **CSV**. One trade-off of the manual download is that it requires more manual interaction. If you're looking for a more professional solution, use either Azure Storage or Azure Monitor. ++### Recommended uses -If you plan to query the logs often to run reports or perform analysis on the stored logs, you should store your data in Azure Monitor. Azure Monitor provides you with built-in reporting and alerting capabilities. For more information, see [integrate Azure Active Directory logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md). Once you have the integration set up, you can use Azure Monitor to query your logs. For more information, see [analyze activity logs using Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md). +We recommend setting up a storage account to archive your activity logs for those governance and compliance scenarios where long-term storage is required. -## Log analysis +If you want long-term storage *and* you want to run queries against the data, review the section on [integrating your activity logs with Azure Monitor Logs](#integrate-logs-with-azure-monitor-logs). -One common requirement is to export activity data to perform a log analysis. +We recommend manually downloading and storing your activity logs if you have budgetary constraints. -### Recommendation +### Quick steps -If you are not planning on using an independent log analysis tool, use Azure Monitor or Event Hubs. Azure Monitor provides a very easy way to analyze logs from Azure AD, as well as other Azure services and independent tools. You can easily export logs to Azure Monitor using the built-in connector. For more information, see [integrate Azure Active Directory logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md). Once you have the integration set up, you can use Azure Monitor to query your logs. 
For more information, see [analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md). +Use the following basic steps to archive or download your activity logs. -You can also export your logs to an independent log analysis tool, such as [Splunk](howto-integrate-activity-logs-with-splunk.md). +#### Archive activity logs to a storage account +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. Create a storage account. +1. Go to **Azure AD** > **Diagnostic settings**. +1. Choose the logs you want to stream, select the **Archive to a storage account** option, and complete the fields. + - [Review the data retention policies](reference-reports-data-retention.md) ++#### Manually download activity logs ++1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu. +1. Select **Download**. + - [Learn more about how to download logs](howto-download-logs.md). ## Next steps -* [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md) -* [Audit API reference](/graph/api/resources/directoryaudit) -* [Sign-in activity report API reference](/graph/api/resources/signin) +- [Stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) +- [Archive logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md) +- [Integrate logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) + |
active-directory | Howto Analyze Activity Logs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md | Title: Analyze activity logs using Azure Monitor logs -description: Learn how to analyze Azure Active Directory activity logs using Azure Monitor logs + Title: Analyze activity logs using Log Analytics +description: Learn how to analyze Azure Active Directory activity logs using Log Analytics -# Analyze Azure AD activity logs with Azure Monitor logs +# Analyze Azure AD activity logs with Log Analytics -After you [integrate Azure AD activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md), you can use the power of Azure Monitor logs to gain insights into your environment. You can also install the [Log analytics views for Azure AD activity logs](howto-install-use-log-analytics-views.md) to get access to pre-built reports around audit and sign-in events in your environment. +After you [integrate Azure AD activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md), you can use the power of Log Analytics and Azure Monitor logs to gain insights into your environment. + * Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud. + + * Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights. + * Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment (a sample query for this scenario follows the roles and licenses list below). -In this article, you learn how to analyze the Azure AD activity logs in your Log Analytics workspace. -## Prerequisites +This article describes how to analyze the Azure AD activity logs in your Log Analytics workspace. -To follow along, you need: +## Roles and licenses -* A [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). -* First, complete the steps to [route the Azure AD activity logs to your Log Analytics workspace](howto-integrate-activity-logs-with-log-analytics.md). -* [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace -* The following roles in Azure Active Directory (if you're accessing Log Analytics through Azure portal) - - Security Admin - - Security Reader - - Reports Reader - - Global Administrator - -## Navigate to the Log Analytics workspace +To analyze Azure AD logs with Azure Monitor, you need the following roles and licenses: ++* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). ++* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. ++* **Reports Reader**, **Security Reader**, or **Security Administrator** access for the Azure AD tenant: These roles are required to view Log Analytics through the Azure AD portal. ++* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. 
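As referenced above, the following KQL sketch is a minimal example for the Identity Protection scenario. It assumes the `UserRiskEvents` log category is being routed to the workspace, where it typically surfaces as the `AADUserRiskEvents` table; the table and column names are an assumption and may differ depending on which categories your diagnostic settings stream.

```
// Hypothetical sketch: recent Identity Protection risk detections, grouped by type and level.
// Assumes the UserRiskEvents category is routed to this workspace as AADUserRiskEvents.
AADUserRiskEvents
| where TimeGenerated > ago(14d)
| summarize detections = count(), affectedUsers = dcount(UserPrincipalName) by RiskEventType, RiskLevel
| sort by detections desc
```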
++## Access Log Analytics ++To view Azure AD logs in Log Analytics, you must already be sending your activity logs from Azure AD to a Log Analytics workspace. This process is covered in the [How to integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md) article. [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] 1. Sign in to the [Azure portal](https://portal.azure.com). -2. Select **Azure Active Directory**, and then select **Logs** from the **Monitoring** section to open your Log Analytics workspace. The workspace will open with a default query. +1. Go to **Azure Active Directory** > **Log Analytics**. A default search query runs. ![Default query](./media/howto-analyze-activity-logs-log-analytics/defaultquery.png) +1. Expand the **LogManagement** category to view the list of log-related queries. -## View the schema for Azure AD activity logs +1. Select or hover over the name of a query to view a description and other useful details. -The logs are pushed to the **AuditLogs** and **SigninLogs** tables in the workspace. To view the schema for these tables: + ![Screenshot of the details of a query.](media/howto-analyze-activity-logs-log-analytics/log-analytics-query-details.png) -1. From the default query view in the previous section, select **Schema** and expand the workspace. +1. Expand a query from the list to view the schema. -2. Expand the **Log Management** section and then expand either **AuditLogs** or **SigninLogs** to view the log schema. + ![Screenshot of the schema of a query.](media/howto-analyze-activity-logs-log-analytics/log-analytics-query-schema.png) -## Query the Azure AD activity logs +## Query activity logs -Now that you have the logs in your workspace, you can run queries against them. For example, to get the top applications used in the last week, replace the default query with the following and select **Run** +You can run queries against the activity logs being routed to a Log Analytics workspace. For example, to get a list of applications with the most sign-ins from last week, enter the following query and select the **Run** button. ``` SigninLogs | where TimeGenerated >= ago(7d) | summarize signInCount = count() by AppDisplayName | sort by signInCount desc ```-## Alert on Azure AD activity log data --You can also set up alerts on your query. For example, to configure an alert when more than 10 applications have been used in the last week: --1. From the workspace, select **Set alert** to open the **Create rule** page. +## Set up alerts - ![Set alert](./media/howto-analyze-activity-logs-log-analytics/setalert.png) +You can also set up alerts on a query. After running a query, the **+ New alert rule** button becomes active. -2. Select the default **alert criteria** created in the alert and update the **Threshold** in the default metric to 10. +1. From Log Analytics, select the **+ New alert rule** button. + - The **Create a rule** process involves several sections to customize the criteria for the rule. + - For more information on creating alert rules, see [Create a new alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md) from the Azure Monitor documentation, starting with the **Condition** steps. + + ![Screenshot of the "+ New alert rule" button in Log Analytics.](media/howto-analyze-activity-logs-log-analytics/log-analytics-new-alert.png) - ![Alert criteria](./media/howto-analyze-activity-logs-log-analytics/alertcriteria.png) +1. 
On the **Actions** tab, select the **Action Group** that will receive the alert when the signal occurs. + - You can choose to notify your team via email or text message, or you could automate the action using webhooks, Azure functions or logic apps. + - Learn more about [creating and managing alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). -3. Enter a name and description for the alert, and choose the severity level. For our example, we could set it to **Informational**. +1. On the **Details** tab, give the alert rule a name and associate it with a subscription and resource group. -4. Select the **Action Group** that will be alerted when the signal occurs. You can choose to notify your team via email or text message, or you could automate the action using webhooks, Azure functions or logic apps. Learn more about [creating and managing alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). +1. After configuring all necessary details, select the **Review + Create** button. -5. Once you've configured the alert, select **Create alert** to enable it. +## Use workbooks to analyze logs -## Use pre-built workbooks for Azure AD activity logs +Azure AD workbooks provide several reports related to common scenarios involving audit, sign-in, and provisioning events. *You can also alert on any of the data provided in the reports, using the steps described in the previous section.* -The workbooks provide several reports related to common scenarios involving audit, sign-in, and provisioning events. You can also alert on any of the data provided in the reports, using the steps described in the previous section. +* **Provisioning analysis:** This workbook shows reports related to auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users deprovisioned, and corresponding failures. For more information, see [Understand how provisioning integrates with Azure Monitor logs](../app-provisioning/application-provisioning-log-analytics.md). -* **Provisioning analysis**: This [workbook](../app-provisioning/application-provisioning-log-analytics.md) shows reports related to auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users de-provisioned, and corresponding failures. * **Sign-ins Events**: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, and a summary view tracking the number of sign-ins over time.-* **Conditional access insights**: The Conditional Access insights and reporting [workbook](../conditional-access/howto-conditional-access-insights-reporting.md) enables you to understand the effect of Conditional Access policies in your organization over time. ++* **Conditional access insights**: The Conditional Access insights and reporting workbook enables you to understand the effect of Conditional Access policies in your organization over time. For more information, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). 
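For reference, the KQL sketch below approximates the sign-ins-over-time trend that the Sign-ins Events workbook described above visualizes, assuming the standard `SigninLogs` table is present in the workspace; the workbook itself adds the per-application, per-user, and per-device breakdowns.

```
// Hypothetical sketch of the trend behind the Sign-ins Events workbook:
// daily sign-in volume over the last 30 days, split into success and failure.
SigninLogs
| where TimeGenerated > ago(30d)
| extend outcome = iff(ResultType == "0", "Success", "Failure")
| summarize signIns = count() by bin(TimeGenerated, 1d), outcome
| sort by TimeGenerated asc
| render timechart
```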
## Next steps * [Get started with queries in Azure Monitor logs](../../azure-monitor/logs/get-started-queries.md) * [Create and manage alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md)-* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md) |
active-directory | Howto Integrate Activity Logs With Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | Title: Stream Azure Active Directory logs to Azure Monitor logs -description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs + Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs +description: Learn how to integrate Azure Active Directory logs with Azure Monitor -# Integrate Azure AD logs with Azure Monitor logs +# How to integrate Azure AD logs with Azure Monitor logs -Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. +Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data. -This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor. +This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs. -Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks: +## Roles and licenses - * Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud. - - * Troubleshoot performance bottlenecks on your applicationΓÇÖs sign-in page by correlating application performance data from Azure Application Insights. +To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses: - * Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment. - - * Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md) +* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). -> [!NOTE] -> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel. +* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. ++* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings. ++* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. ++## Integrate logs with Azure Monitor logs -This Microsoft Ignite 2018 session video shows the benefits of integrating Azure AD logs and Azure Monitor in practical scenarios: +To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). 
Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace. -> [!VIDEO https://www.youtube.com/embed/MP5IaCTwkQg?start=1894] +### Create a Log Analytics workspace -## How do I access it? +A Log Analytics workspace allows you to collect data based on a variety or requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). -To use this feature, you need: +Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. -* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). -* An Azure AD Premium P1 or P2 tenant. You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. -* **Global Administrator** or **Security Administrator** access for the Azure AD tenant. -* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +### Set up Diagnostics settings -## Send logs to Azure Monitor +Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace. [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. -1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator** or **Global Administrator**. +1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**. -1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from either the **Audit Logs** or **Sign-ins** page. +1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs. -1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** for an existing integration. +1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration. 1. Enter a **Diagnostic setting name**. If you're editing an existing integration, you can't change the name. Follow the steps below to send logs from Azure Active Directory to Azure Monitor * `ManagedIdentitySignInLogs` * `ProvisioningLogs` * `ADFSSignInLogs` Active Directory Federation Services (ADFS)+ * `RiskyServicePrincipals` * `RiskyUsers`+ * `ServicePrincipalRiskEvents` * `UserRiskEvents`- * `RiskyServicePrincipals` - * `ServicePrincipalRiskEvents` 1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview. * `EnrichedOffice365AuditLogs` * `MicrosoftGraphActivityLogs` * `NetworkAccessTrafficLogs` -1. Select the **Destination details** for where you'd like to send the logs. 
Choose any or all of the following destinations. Additional fields appear, depending on your selection. -- * **Send to Log Analytics workspace:** Select the appropriate details from the menus that appear. - * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear. - * **Stream to an event hub:** Select the appropriate details from the menus that appear. - * **Send to partner solution:** Select the appropriate details from the menus that appear. +1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear. + * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection. + * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear. + * **Stream to an event hub:** Select the appropriate details from the menus that appear. + * **Send to partner solution:** Select the appropriate details from the menus that appear. 1. Select **Save** to save the setting. Follow the steps below to send logs from Azure Active Directory to Azure Monitor If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs. +> [!NOTE] +> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel. + ## Next steps * [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) |
active-directory | Howto Integrate Activity Logs With Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md | Append **body.records.category=AuditLogs** to the search. The Azure AD activity ## Next steps * [Interpret audit logs schema in Azure Monitor](./overview-reports.md)-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) -* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions) +* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Integrate Activity Logs With Sumologic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md | To use this feature, you need: ## Next steps * [Interpret audit logs schema in Azure Monitor](./overview-reports.md)-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) -* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions) +* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md | -# How to use Azure Monitor workbooks for Azure Active Directory +# How to use Azure Active Directory Workbooks -When using Azure Workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch. +Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks, however, workbooks for Azure Active Directory (AD) cover only those identity management scenarios that are associated with Azure AD. ++When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch. - **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks. - **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant. When using Azure Workbooks, you can either start with an empty workbook, or use ## Prerequisites To use Azure Workbooks for Azure AD, you need:-- An Azure Active Directory (Azure AD) tenant with a premium (P1 or P2) license. Learn how to [get a premium license](../fundamentals/active-directory-get-started-premium.md)-- The appropriate roles for the Log Analytics workspace *and* Azure AD-- A Log Analytics workspace -1. Create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) - - Access to the Log Analytics workspace is determined by the workspace settings, access to the resources sending the data to the workspace, and the method used to access the workspace. - - To ensure you have the right access, review the Azure workspace permissions in the [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md?tabs=tabs=portal#azure-rbac) article. +- An Azure AD tenant with a [Premium P1 license](../fundamentals/active-directory-get-started-premium.md) +- A Log Analytics workspace *and* access to that workspace +- The appropriate roles for Azure Monitor *and* Azure AD -2. Ensure that you have one of the following roles in Azure AD (if you're accessing the workspace through the Azure portal): - - Security Administrator - - Security Reader - - Reports Reader - - Global Administrator --3. Ensure that you have the one of the following Azure roles for the subscription: - - Global Reader - - Reports Reader - - Security Reader - - Application Administrator - - Cloud Application Administrator - - Company Administrator +### Log Analytics workspace ++You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. There are a combination of factors that determine access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data. ++For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md). 
++### Azure Monitor roles ++Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access. ++- **View**: + - Monitoring Reader + - Log Analytics Reader ++- **View and modify settings**: + - Monitoring Contributor + - Log Analytics Contributor ++For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader). ++For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) ++### Azure AD roles ++Read only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace. ++- **Read**: + - Reports Reader + - Security Reader + - Global Reader ++- **Update**: - Security Administrator- - For more information on Azure subscription roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md). ++For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md). ## How to access Azure Workbooks for Azure AD |
active-directory | Howto Use Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md | Each recommendation provides the same set of details that explain what the recom - The **Impacted resources** table contains a list of resources identified by the recommendation. The resource's name, ID, date it was first detected, and status are provided. The resource could be an application or resource service principal, for example. > [!NOTE]-> In the Azure portal the impacted resources are limited to a maximum of 50 resources. To view more resources, you should use the expand query parameter at the end of your API query on Microsoft graph. For example: Get: https://graph.microsoft.com/beta/directory/recommendations?$expand=impactedResources +> In the Azure portal the impacted resources are limited to a maximum of 50 resources. To view all impacted resources for a recommendation, use this Microsoft Graph API request: +>`GET /directory/recommendations/{recommendationId}/impactedResources` +> +>For more information, see the [How to use Microsoft Graph with with Azure AD recommendations](#how-to-use-microsoft-graph-with-azure-active-directory-recommendations) section of this article. ## How to update a recommendation Continue to monitor the recommendations in your tenant for changes. ### How to use Microsoft Graph with Azure Active Directory recommendations -Azure Active Directory recommendations can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can view recommendations along with their impacted resources, postpone a recommendation for later, and more. +Azure Active Directory recommendations can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can view recommendations along with their impacted resources, postpone a recommendation for later, and more. For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendations-api-overview). -To get started, follow these instructions to work with recommendations using Microsoft Graph in Graph Explorer. The example uses the "Migrate apps from Active Directory Federated Services (ADFS) to Azure AD" recommendation. +To get started, follow these instructions to work with recommendations using Microsoft Graph in Graph Explorer. 1. Sign in to [Graph Explorer](https://aka.ms/ge). 1. Select **GET** as the HTTP method from the dropdown. 1. Set the API version to **beta**.-1. Add the following query to retrieve recommendations, then select the **Run query** button. - ```http - GET https://graph.microsoft.com/beta/directory/recommendations - ``` +#### View all recommendations -1. To view the details of a specific `recommendationType`, use the following API. This example retrieves the detail of the "Migrate apps from AD FS to Azure AD" recommendation. +Add the following query to retrieve all recommendations for your tenant, then select the **Run query** button. - ```http - GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration' - ``` +```http +GET https://graph.microsoft.com/beta/directory/recommendations +``` -1. To view the impacted resources for a specific recommendation, expand the `impactedResources` relationship. +All recommendations that apply to your tenant appear in the response. The impact, benefits, summary of the impacted resources, and remediation steps are provided in the response. 
Locate the recommendation ID for any recommendation to view the impacted resources. - ```http - GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration'&$expand=impactedResources - ``` +#### View a specific recommendation -For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendations-api-overview). +If you want to look for a specific recommendation, you can add a `recommendationType` to the request. This example retrieves the details of the `applicationCredentialExpiry` recommendation. ++```http +GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'applicationCredentialExpiry' +``` ++#### View impacted resources for a recommendation ++Some recommendations may potentially return a long list of impacted resources. To view the list of impacted resources, you need to locate the recommendation ID. The recommendation ID appears in the response when viewing all recommendations and a specific recommendation. ++To view the impacted resources for a specific recommendation, use the following query with the recommendation ID you saved. ++```http +GET /directory/recommendations/{recommendationId}/impactedResources +``` ## Next steps - [Review the Azure AD recommendations overview](overview-recommendations.md)-- [Learn about Service Health notifications](overview-service-health-notifications.md)+- [Learn about Service Health notifications](overview-service-health-notifications.md) |
active-directory | Quickstart Azure Monitor Route Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md | Title: Tutorial - Archive directory logs to a storage account -description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to a storage account + Title: Tutorial - Archive Azure Active Directory logs to a storage account +description: Learn how to route Azure Active Directory logs to a storage account To use this feature, you need: * An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). * An Azure AD tenant.-* A user who's a *global administrator* or *security administrator* for the Azure AD tenant. +* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant. +* To export sign-in data, you must have an Azure AD P1 or P2 license. ## Archive logs to an Azure storage account To use this feature, you need: 1. Sign in to the [Azure portal](https://portal.azure.com). -2. Select **Azure Active Directory** > **Monitoring** > **Audit logs**. +1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**. -3. Select **Export Data Settings**. +1. Select **Export Data Settings**. -4. In the **Diagnostics settings** pane, do either of the following: - 1. To change existing setting, select **Edit setting** next to the diagnostic setting you want to update. - 1. To add new settings, select **Add diagnostic setting**. -- You can have up to three settings. +1. You can either create a new setting (up to three settings are allowed) or edit an existing setting. + - To change existing setting, select **Edit setting** next to the diagnostic setting you want to update. + - To add new settings, select **Add diagnostic setting**. ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png) -5. Once in the **Diagnostic setting** pane if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting. --6. Under **Destination Details** Select the **Archive to a storage account** check box. +1. Once in the **Diagnostic setting** pane if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting. -7. Select the Azure subscription in the **Subscription** menu and storage account in the **Storage account** menu that you want to route the logs to. +1. Under **Destination Details** select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category. -8. Select all the relevant categories in under **Category details**: +1. Select the Azure subscription and storage account for you want to route the logs. - Do either or both of the following: - 1. select the **AuditLogs** check box to send audit logs to the storage account. - - 1. select the **SignInLogs** check box to send sign-in logs to the storage account. +1. Select all the relevant categories in under **Category details**: ![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png) -9. 
After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. +1. In the **Retention days** field, enter the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. + +1. Select **Save**. ++1. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. -> [!NOTE] -> The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). + > [!NOTE] + > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). -10. Select **Save** to save the setting. +1. Select **Save** to save the setting. -11. Close the window to return to the Diagnostic settings pane. +1. Close the window to return to the Diagnostic settings pane. ## Next steps * [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md) * [Interpret audit logs schema in Azure Monitor](./overview-reports.md) * [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)-* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions) |
active-directory | Airbase Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airbase-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Airbase for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Airbase. +++writer: twimmers ++ms.assetid: 1388651c-4527-49bc-97d3-6fbdd203d37f ++++ Last updated : 07/26/2023++++# Tutorial: Configure Airbase for automatic user provisioning ++This tutorial describes the steps you need to perform in both Airbase and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Airbase](https://www.airbase.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Airbase. +> * Remove users in Airbase when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Airbase. +> * [Single sign-on](airbase-tutorial.md) to Airbase (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Airbase with Admin permissions. ++## Step 1. Plan your provisioning deployment +* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Determine what data to [map between Azure AD and Airbase](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Airbase to support provisioning with Azure AD +Contact Airbase support to configure Airbase to support provisioning with Azure AD. ++## Step 3. Add Airbase from the Azure AD application gallery ++Add Airbase from the Azure AD application gallery to start managing provisioning to Airbase. If you have previously setup Airbase for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. 
Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Airbase ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. ++### To configure automatic user provisioning for Airbase in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Airbase**. ++ ![Screenshot of the Airbase link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Airbase Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Airbase. If the connection fails, ensure your Airbase account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Airbase**. ++1. Review the user attributes that are synchronized from Azure AD to Airbase in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Airbase for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Airbase API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Airbase| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |emails[type eq "work"].value|String||✓ + |name.givenName|String|| + |name.familyName|String|| + |externalId|String||✓ + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|| + |urn:ietf:params:scim:schemas:extension:airbase:2.0:User:accountingPolicy|String|| + |urn:ietf:params:scim:schemas:extension:airbase:2.0:User:subsidiary|String|| + |urn:ietf:params:scim:schemas:extension:airbase:2.0:User:role|String|| ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Airbase, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users that you would like to provision to Airbase by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Airtable Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airtable-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * A user account in Airtable with Admin permissions. ## Step 1. Plan your provisioning deployment-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). -1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -1. Determine what data to [map between Azure AD and Airtable](../app-provisioning/customize-application-attributes.md). +* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Determine what data to [map between Azure AD and Airtable](../app-provisioning/customize-application-attributes.md). -## Step 2. Configure Airtable to support provisioning with Azure AD -Contact Airtable support to configure Airtable to support provisioning with Azure AD. +## Step 2. Create an Airtable Personal Access Token to authorize provisioning with Azure AD. ++1. Login to [Airtable Developer Hub](https://airtable.com) as an Admin user, and then navigate to `https://airtable.com/create/tokens`. +1. Select "Personal Access Tokens" from the left hand navigation bar. ++ ![Screenshot of Personal Access Token Selection.](media/airtable-provisioning-tutorial/developer-hub-personal-access-token.png) ++1. Create a new token with a memorable name such as "AzureAdScimProvisioning". +1. Add the "enterprise.scim.usersAndGroups:manage" scope. ++ ![Screenshot of enterprise scim scope addition.](media/airtable-provisioning-tutorial/enterprise-scim-scope.png) ++1. Select "Create Token" and copy the resulting token for use in **Step 5** below. ## Step 3. Add Airtable from the Azure AD application gallery This section guides you through the steps to configure the Azure AD provisioning ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) -1. Under the **Admin Credentials** section, input your Airtable Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Airtable. If the connection fails, ensure your Airtable account has Admin permissions and try again. +1. Under the **Admin Credentials** section, + 1. Enter `https://airtable.com/scim/v2` as the Airtable **Tenant URL**. ++ 1. Enter the Personal Access Token created in **Step 2** above as **Secret Token**. ++ Click **Test Connection** to ensure Azure AD can connect to Airtable. If the connection fails, ensure your Airtable account has Admin permissions and that your personal access token has the appropriate scope applied and try again. ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) |
active-directory | Fleet Management System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fleet-management-system-tutorial.md | In this article, you learn how to integrate Fleet Management System with Azure A You'll configure and test Azure AD single sign-on for Fleet Management System in a test environment. Fleet Management System supports **IDP** initiated single sign-on. +> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. + ## Prerequisites To integrate Azure Active Directory with Fleet Management System, you need: Complete the following steps to enable Azure AD single sign-on in the Azure port ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") -1. On the **Basic SAML Configuration** section, the user doesn't have to perform any step as the app is already pre-integrated with Azure. +1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type one of the following URLs: ++ | Environment | URL | + |-|-| + | Production| `https://msfms.net/SAMLFms` | + | Staging | `https://test.msfms.net/SAMLFms`| ++ b. In the **Reply URL** textbox, type one of the following URLs: ++ | Environment | URL | + |-|-| + | Production| `https://msfms.net/saml2/acs` | + | Staging | `https://test.msfms.net/saml2/acs`| -1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. +1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") |
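If you script your Azure AD app registration with deployment tooling, it can help to keep the environment-specific SAML values from the table above in one place. The snippet below is only an illustrative configuration helper: the dictionary keys and the `endpoints_for` function are invented for this example, while the URLs come from the quoted table.

```python
# Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) pairs
# taken from the Basic SAML Configuration table above.
SAML_ENDPOINTS = {
    "production": {
        "identifier": "https://msfms.net/SAMLFms",
        "reply_url": "https://msfms.net/saml2/acs",
    },
    "staging": {
        "identifier": "https://test.msfms.net/SAMLFms",
        "reply_url": "https://test.msfms.net/saml2/acs",
    },
}


def endpoints_for(environment: str) -> dict:
    """Return the Identifier/Reply URL pair to enter in Basic SAML Configuration."""
    try:
        return SAML_ENDPOINTS[environment.lower()]
    except KeyError as exc:
        raise ValueError(f"Unknown environment: {environment!r}") from exc


if __name__ == "__main__":
    print(endpoints_for("staging"))
```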
active-directory | Kintone Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kintone-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Kintone for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Kintone. +++writer: twimmers ++ms.assetid: 6c6ccabb-0a15-4a15-ba97-771fd15017d0 ++++ Last updated : 07/26/2023++++# Tutorial: Configure Kintone for automatic user provisioning ++This tutorial describes the steps you need to perform in both Kintone and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Kintone](https://www.kintone.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Kintone. +> * Remove users in Kintone when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Kintone. +> * [Single sign-on](kintone-tutorial.md) to Kintone (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Kintone with Admin permissions. ++## Step 1. Plan your provisioning deployment +* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Determine what data to [map between Azure AD and Kintone](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Kintone to support provisioning with Azure AD +Contact Kintone support to configure Kintone to support provisioning with Azure AD. ++## Step 3. Add Kintone from the Azure AD application gallery ++Add Kintone from the Azure AD application gallery to start managing provisioning to Kintone. If you have previously setup Kintone for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. 
Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Kintone ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Kintone based on user assignments in Azure AD. ++### To configure automatic user provisioning for Kintone in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Kintone**. ++ ![Screenshot of the Kintone link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Kintone Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Kintone. If the connection fails, ensure your Kintone account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Kintone**. ++1. Review the user attributes that are synchronized from Azure AD to Kintone in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Kintone for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Kintone API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Kintone| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |displayName|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |externalId|String|| ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Kintone, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users that you would like to provision to Kintone by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. 
When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
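To make the attribute mapping table above more concrete, the sketch below shows how those mapped attributes line up in a SCIM 2.0 user object. It's illustrative only: the Azure AD provisioning service builds this payload for you, the `build_scim_user` helper and the source field names (`userPrincipalName`, `surname`, and so on) are assumptions for the example, and your actual attribute mappings may differ.

```python
import json


def build_scim_user(aad_user: dict) -> dict:
    """Shape an Azure AD user into the SCIM attributes listed in the mapping table."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": aad_user["userPrincipalName"],  # the Matching attribute
        "active": aad_user.get("accountEnabled", True),
        "displayName": aad_user.get("displayName"),
        "emails": [{"type": "work", "value": aad_user.get("mail"), "primary": True}],
        "name": {
            "givenName": aad_user.get("givenName"),
            "familyName": aad_user.get("surname"),
        },
        "externalId": aad_user.get("objectId"),
    }


if __name__ == "__main__":
    sample = {
        "userPrincipalName": "ada@contoso.com",
        "accountEnabled": True,
        "displayName": "Ada Lovelace",
        "mail": "ada@contoso.com",
        "givenName": "Ada",
        "surname": "Lovelace",
        "objectId": "00000000-0000-0000-0000-000000000000",
    }
    print(json.dumps(build_scim_user(sample), indent=2))
```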
active-directory | Oreilly Learning Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oreilly-learning-platform-provisioning-tutorial.md | + + Title: Configure O'Reilly learning platform for automatic user provisioning with Azure Active Directory +description: Learn how to automatically provision and de-provision user accounts from Azure AD to O'Reilly learning platform. +++writer: twimmers ++ms.assetid: ba350af3-896c-4a0d-93f3-f91d8eccd5a5 ++++ Last updated : 07/26/2023++++# Tutorial: Configure O'Reilly learning platform for automatic user provisioning ++This tutorial describes the steps you need to perform in both O'Reilly learning platform and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [O'Reilly learning platform](https://www.oreilly.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in O'Reilly learning platform. +> * Remove users in O'Reilly learning platform when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and O'Reilly learning platform. +> * [Single sign-on](oreilly-learning-platform-tutorial.md) to O'Reilly learning platform (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in O'Reilly learning platform with Admin permissions. ++## Step 1. Plan your provisioning deployment +* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Determine what data to [map between Azure AD and O'Reilly learning platform](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure O'Reilly learning platform to support provisioning with Azure AD +Contact O'Reilly learning platform support to configure O'Reilly learning platform to support provisioning with Azure AD. ++## Step 3. Add O'Reilly learning platform from the Azure AD application gallery ++Add O'Reilly learning platform from the Azure AD application gallery to start managing provisioning to O'Reilly learning platform. If you have previously setup O'Reilly learning platform for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. 
If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to O'Reilly learning platform ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in O'Reilly learning platform based on user assignments in Azure AD. ++### To configure automatic user provisioning for O'Reilly learning platform in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **O'Reilly learning platform**. ++ ![Screenshot of the O'Reilly learning platform link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your O'Reilly learning platform Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to O'Reilly learning platform. If the connection fails, ensure your O'Reilly learning platform account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to O'Reilly learning platform**. ++1. Review the user attributes that are synchronized from Azure AD to O'Reilly learning platform in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in O'Reilly learning platform for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the O'Reilly learning platform API supports filtering users based on that attribute. Select the **Save** button to commit any changes. 
++ |Attribute|Type|Supported for filtering|Required by O'Reilly learning platform| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |emails[type eq "work"].value|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + |externalId|String||✓ ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for O'Reilly learning platform, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users that you would like to provision to O'Reilly learning platform by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
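The guidance above about changing the matching target attribute hinges on whether the target SCIM API can filter users on that attribute. The sketch below is a hypothetical probe, not an O'Reilly-documented API call: it assumes a standard SCIM 2.0 `filter` query parameter on `/Users`, and the tenant URL and token placeholders stand in for whatever you were given when provisioning was enabled.

```python
import requests


def supports_filtering_on(scim_base_url: str, token: str, attribute: str, value: str) -> bool:
    """Probe a SCIM endpoint with an equality filter on a candidate matching attribute."""
    response = requests.get(
        f"{scim_base_url}/Users",
        headers={"Authorization": f"Bearer {token}"},
        params={"filter": f'{attribute} eq "{value}"'},
        timeout=30,
    )
    # A 400 or 501 here usually means the attribute (or filtering itself) isn't supported.
    return response.status_code == 200


if __name__ == "__main__":
    ok = supports_filtering_on("<tenant-url>", "<secret-token>", "userName", "ada@contoso.com")
    print("filterable" if ok else "not filterable (or the request was rejected)")
```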
active-directory | Wiggledesk Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wiggledesk-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * A user account in WiggleDesk with Admin permissions. ## Step 1. Plan your provisioning deployment-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). -1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -1. Determine what data to [map between Azure AD and WiggleDesk](../app-provisioning/customize-application-attributes.md). +* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Determine what data to [map between Azure AD and WiggleDesk](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure WiggleDesk to support provisioning with Azure AD Contact WiggleDesk support to configure WiggleDesk to support provisioning with Azure AD. |
active-directory | Configure Cmmc Level 2 Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md | -Azure Active Directory can help you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/), it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes. +Azure Active Directory can help you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in [CMMC V2.0 level 2](https://cmmc-coe.org), it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes. In CMMC Level 2, there are 13 domains that have one or more practices related to identity: The following table provides a list of practice statement and objectives, and Az | AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview) | | AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) | | AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] 
mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |-| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce compliant or HAADJ device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) | -| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. 
Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad) +| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure Conditional Access policies to enforce compliant or hybrid Azure AD joined device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) | +| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. 
Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad) ### Next steps |
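Several of the rows above map to Conditional Access grant controls such as *Require device to be marked as compliant* and *Require hybrid Azure AD joined device*. If you manage those policies as code, a Microsoft Graph call along the following lines can create a report-only starting point. This is a sketch under assumptions, not a complete CMMC control implementation: it assumes the `identity/conditionalAccess/policies` endpoint and a token carrying `Policy.ReadWrite.ConditionalAccess`, and you should narrow the conditions to your own users and apps before enabling it.

```python
import requests

GRAPH_CA_POLICIES = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

# Report-only policy requiring a compliant or hybrid Azure AD joined device.
# Review and scope the conditions before switching state to "enabled".
POLICY = {
    "displayName": "Require compliant or hybrid-joined device (report-only)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}


def create_policy(access_token: str) -> dict:
    """POST the policy definition and return the created object."""
    response = requests.post(
        GRAPH_CA_POLICIES,
        headers={"Authorization": f"Bearer {access_token}"},
        json=POLICY,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(create_policy("<access-token>")["id"])
```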
active-directory | Memo 22 09 Enterprise Wide Identity Management System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md | Learn more: Devices integrated with Azure AD are hybrid-joined devices or Azure AD joined devices. Separate device onboarding by client and user devices, and by physical and virtual machines that operate as infrastructure. For more information about deployment strategy for user devices, see the following guidance. * [Plan your Azure AD device deployment](../devices/plan-device-deployment.md)-* [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) +* [Hybrid Azure AD joined devices](../devices/concept-hybrid-join.md) * [Azure AD joined devices](../devices/concept-azure-ad-join.md) * [Log in to a Windows virtual machine in Azure by using Azure AD including passwordless](../devices/howto-vm-sign-in-azure-ad-windows.md) * [Log in to a Linux virtual machine in Azure by using Azure AD and OpenSSH](../devices/howto-vm-sign-in-azure-ad-linux.md) |
active-directory | Memo 22 09 Multi Factor Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md | Learn more: [Authentication methods in Azure AD - Microsoft Authenticator app](. Learn more: * [Plan your hybrid Azure AD join implementation](../devices/hybrid-azuread-join-plan.md), or -* [How to: Plan your Azure AD join implementation](../devices/azureadjoin-plan.md) +* [How to: Plan your Azure AD join implementation](../devices/device-join-plan.md) * See also, [Common Conditional Access policy: Require a compliant device, hybrid Azure AD joined device, or multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-compliant-device.md) >[!NOTE] |
active-directory | Pci Requirement 8 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-8.md | -|**8.2.8** If a user session has been idle for more than 15 minutes, the user is required to reauthenticate to reactivate the terminal or session.|Use endpoint management policies with Intune, and Microsoft Endpoint Manager. Then, use Conditional Access to allow access from compliant devices. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> If your CDE environment relies on group policy objects (GPO), configure GPO to set an idle timeout. Configure Azure AD to allow access from hybrid Azure AD joined devices. [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)| +|**8.2.8** If a user session has been idle for more than 15 minutes, the user is required to reauthenticate to reactivate the terminal or session.|Use endpoint management policies with Intune, and Microsoft Endpoint Manager. Then, use Conditional Access to allow access from compliant devices. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> If your CDE environment relies on group policy objects (GPO), configure GPO to set an idle timeout. Configure Azure AD to allow access from hybrid Azure AD joined devices. [Hybrid Azure AD joined devices](../devices/concept-hybrid-join.md)| ## 8.3 Strong authentication for users and administrators is established and managed. |
advisor | Advisor Cost Optimization Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-optimization-workbook.md | + + Title: Understand and optimize your Azure costs with the new Azure Cost Optimization workbook. +description: Understand and optimize your Azure costs with the new Azure Cost Optimization workbook. + Last updated : 07/17/2023++++# Understand and optimize your Azure costs using the Cost Optimization workbook +The Azure Cost Optimization workbook is designed to provide an overview and help optimize costs for your Azure environment. It offers a set of cost-relevant insights and recommendations aligned with the WAF Cost Optimization pillar. ++## Overview +The Azure Cost Optimization workbook serves as a centralized hub for some of the most commonly used tools that can help you drive utilization and efficiency goals. It offers a range of recommendations, including Azure Advisor cost recommendations, identification of idle resources, and management of improperly deallocated Virtual Machines. Additionally, it provides insights into using Azure Hybrid Benefit options for Windows, Linux, and SQL databases. The workbook template is available in the Azure Advisor gallery. ++Here's how to get started: ++1. Navigate to the [Workbooks gallery](https://aka.ms/advisorworkbooks) in Azure Advisor. +1. Open the **Cost Optimization (Preview)** workbook template. ++The workbook is organized into different tabs, each focusing on a specific area to help you reduce the cost of your Azure environment. +* Compute +* Azure Hybrid Benefit +* Storage +* Networking ++Each tab supports the following capabilities: +* **Filters** - use subscription, resource group and tag filters to focus on a specific workload. +* **Export** - export the recommendations to share the insights and collaborate with your team more effectively. +* **Quick Fix** - apply the recommended optimization directly from the workbook page, streamlining the optimization process. +++> [!NOTE] +> The workbook serves as guidance and does not guarantee cost reduction. ++## Compute ++### Advisor recommendations ++This query focuses on reviewing the Advisor recommendations related to compute. Some of the recommendations available in this query could be *Optimize virtual machine spend by resizing or shutting down underutilized instances* or *Buy reserved virtual machine instances to save money over pay-as-you-go costs*. ++### Virtual machines in Stopped State ++This query identifies Virtual Machines that are not properly deallocated. If a virtual machine's status is *Stopped* rather than *Stopped (Deallocated)*, you are still billed for the resource as the hardware remains allocated for you. ++### Web Apps +This query helps identify Azure App Services with and without Auto Scale, and App Services where the actual app might be stopped. ++### Azure Kubernetes Clusters (AKS) ++This query focuses on cost optimization opportunities specific to Azure Kubernetes Clusters (AKS). It provides recommendations such as: +* Enabling cluster autoscaler to automatically adjust the number of agent nodes in response to resource constraints. +* Considering the use of Azure Spot VMs for workloads that can handle interruptions, early terminations, or evictions. +* Utilizing the Horizontal Pod Autoscaler to adjust the number of pods in a deployment based on CPU utilization or other selected metrics. +* Using the Start/Stop feature in Azure Kubernetes Services (AKS) to optimize cost during off-peak hours. 
+* Using appropriate VM SKUs per node pool and considering reserved instances where long-term capacity is expected. ++## Azure Hybrid Benefit ++### Windows VMs and VMSS not using Hybrid Benefit ++Azure Hybrid Benefit represents an excellent opportunity to save on Virtual Machine OS costs. You can see potential savings using the Azure Hybrid Benefit Calculator. Check this link to learn more about Azure Hybrid Benefit. ++> [!NOTE] +> If you have selected Dev/Test subscription(s) within the scope of this workbook, they should already have discounts on Windows licenses, so the recommendations here don't apply to those subscription(s). ++### Linux VM not using Hybrid Benefit ++Similar to Windows VMs, Azure Hybrid Benefit provides an excellent opportunity to save on Virtual Machine OS costs. The Azure Hybrid Benefit for Linux is a licensing benefit that significantly reduces the costs of running Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) in the cloud. ++### SQL HUB Licenses ++Azure Hybrid Benefit can also be applied to SQL services, such as SQL Server on VMs, SQL Database, or SQL Managed Instance. ++## Storage ++### Advisor recommendations ++Review the Advisor recommendations for Storage. This section provides insights into various recommendations such as "Blob storage reserved capacity" or "Use lifecycle management." These recommendations can help optimize your storage costs and improve efficiency. ++### Unattached Managed Disks ++This query focuses on the list of managed unattached disks. It automatically ignores disks used by Azure Site Recovery. Use this information to identify and remove any unattached disks that are no longer needed. ++> [!NOTE] +> This query has a Quick Fix column that helps you to remove the disk if not needed. ++### Disk snapshots older than 30 days +This query identifies snapshots that are older than 30 days. Identifying and managing outdated snapshots can help you optimize storage costs and ensure efficient use of your Azure environment. ++## Networking ++### Advisor recommendations +Review the Advisor recommendations for Networking. This section provides insights into various recommendations, such as "Reduce costs by deleting or reconfiguring idle virtual network gateways" or "Reduce costs by eliminating unprovisioned ExpressRoute circuits." ++### Application Gateway with empty backend pool ++Review the Application Gateways with empty backend pools. App gateways are considered idle if there isn't any backend pool with targets. ++### Load Balancer with empty backend pool ++Review the Load Balancers with empty backend pools. Load Balancers are considered idle if there isn't any backend pool with targets. ++### Unattached Public IPs ++Review the list of idle Public IP Addresses. This query also shows Public IP addresses attached to idle Network Interface Cards (NICs). ++### Idle Virtual Network Gateways ++Review the Idle Virtual Network Gateways. This query shows VPN Gateways without any active connection. ++For more information, see: +* [Well-Architected cost optimization design principles](/azure/well-architected/cost/principles) +* [Cloud Adoption Framework manage cloud costs](/azure/cloud-adoption-framework/get-started/manage-costs) +* [Azure FinOps principles](/azure/cost-management-billing/finops/overview-finops) +* [Azure Advisor cost recommendations](advisor-reference-cost-recommendations.md) + |
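The workbook sections above are backed by Azure Resource Graph queries. If you want a similar check outside the workbook, the sketch below runs a query in the spirit of the "Unattached Managed Disks" section (it is not the workbook's exact query). It assumes the `azure-identity` and `azure-mgmt-resourcegraph` packages and the default object-array result format; verify both against your SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# List managed disks that aren't attached to any VM; these usually still accrue cost.
UNATTACHED_DISKS_QUERY = """
Resources
| where type =~ 'microsoft.compute/disks'
| where properties.diskState =~ 'Unattached'
| project name, resourceGroup, location, skuName = sku.name
"""


def find_unattached_disks(subscription_id: str) -> list:
    """Run the query with whatever identity DefaultAzureCredential resolves to."""
    client = ResourceGraphClient(DefaultAzureCredential())
    response = client.resources(
        QueryRequest(subscriptions=[subscription_id], query=UNATTACHED_DISKS_QUERY)
    )
    return response.data


if __name__ == "__main__":
    for disk in find_unattached_disks("<subscription-id>"):
        print(disk["name"], disk["resourceGroup"], disk["skuName"])
```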
ai-services | Developer Reference Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md | SDKs, REST APIs, CLI, help you develop Language Understanding (LUIS) apps in you ## Azure resource management -Use the Azure AI services Management layer to create, edit, list, and delete the Language Understanding or Azure AI services resource. +Use the Azure AI services management layer to create, edit, list, and delete the Language Understanding or Azure AI services resource. Find reference documentation based on the tool: |
ai-services | Encrypt Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/encrypt-data-at-rest.md | There are some limitations when using the E0 tier with existing/previously creat * The Bing Spell check feature isn't supported. * Logging end-user traffic is disabled if your application is E0. * The Speech priming capability from the Azure AI Bot Service isn't supported for applications in the E0 tier. This feature is available via the Azure AI Bot Service, which doesn't support CMK.-* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging). +* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos). ### Enable customer-managed keys |
ai-services | Luis Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-limits.md | Title: Limits - LUIS -description: This article contains the known limits of Azure AI services Language Understanding (LUIS). LUIS has several limits areas. Model limit controls intents, entities, and features in LUIS. Quota limits based on key type. Keyboard combination controls the LUIS website. +description: This article contains the known limits of Azure AI Language Understanding (LUIS). LUIS has several limits areas. Model limit controls intents, entities, and features in LUIS. Quota limits based on key type. Keyboard combination controls the LUIS website. |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md | These custom roles only apply to authoring (Language Understanding Authoring) an > * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in LUIS portal. -### Azure AI services LUIS reader +### Cognitive Services LUIS reader A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the applicationΓÇÖs assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them. A user that should only be validating and reviewing LUIS applications, typically :::column-end::: :::row-end::: -### Azure AI services LUIS writer +### Cognitive Services LUIS writer A user that is responsible for building and modifying LUIS application, as a collaborator in a larger team. The collaborator can modify the LUIS application in any way, train those changes, and validate/test those changes in the portal. However, this user wouldn't have access to deploying this application to the runtime, as they may accidentally reflect their changes in a production environment. They also wouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in a production environment. They may also create new applications under this resource, but with the restrictions mentioned. A user that is responsible for building and modifying LUIS application, as a col :::row-end::: :::row::: :::column span="":::- * All functionalities under Azure AI services LUIS Reader. + * All functionalities under Cognitive Services LUIS Reader. The ability to add: * Utterances A user that is responsible for building and modifying LUIS application, as a col :::column-end::: :::row-end::: -### Azure AI services LUIS owner +### Cognitive Services LUIS owner > [!NOTE] > * If you are assigned as an *Owner* and *LUIS Owner* you will be be shown as *LUIS Owner* in LUIS portal. These users are the gatekeepers for LUIS applications in a production environmen :::row-end::: :::row::: :::column span="":::- * All functionalities under Azure AI services LUIS Writer + * All functionalities under Cognitive Services LUIS Writer * Deploy a model * Delete an application :::column-end::: |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/whats-new.md | Title: What's New - Language Understanding (LUIS) -description: This article is regularly updated with news about the Azure AI services Language Understanding API. +description: This article is regularly updated with news about the Azure AI Language Understanding API. |
ai-services | Cognitive Services Container Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md | Title: Use Azure AI services Containers on-premises + Title: Use Azure AI services containers on-premises description: Learn how to use Docker containers to use Azure AI services on-premises. |
ai-services | Cognitive Services Data Loss Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-data-loss-prevention.md | Title: Data Loss Prevention -description: Azure AI services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Azure AI services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss. + Title: Data loss prevention +description: Azure AI services data loss prevention capabilities allow customers to configure the list of outbound URLs their Azure AI services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss. |
ai-services | Cognitive Services Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md | Each Azure AI services resource supports up to 100 virtual network rules, which ### Required permissions -To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Azure AI services Contributor* role. Required permissions can also be added to custom role definitions. +To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions. Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. |
ai-services | Build Enrollment App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md | The sample app is written using JavaScript and the React Native framework. It ca 1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository. > [!WARNING]- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services Authentication guide](../../authentication.md) for other ways to authenticate the service. + > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services authentication guide](../../authentication.md) for other ways to authenticate the service. 1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. The sample app is written using JavaScript and the React Native framework. It ca 1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. > [!WARNING]- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services Authentication guide](../../authentication.md) for other ways to authenticate the service. + > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services authentication guide](../../authentication.md) for other ways to authenticate the service. 1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. |
ai-services | Computer Vision How To Install Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md | In this article, you learned concepts and workflow for downloading, installing, * Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.-* Use more [Azure AI services Containers](../cognitive-services-container-support.md) +* Use more [Azure AI services containers](../cognitive-services-container-support.md) |
ai-services | Deploy Computer Vision On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md | replicaset.apps/read-6cbbb6678 3 3 3 3s For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks]. > [!div class="nextstepaction"]-> [Azure AI services Containers][cog-svcs-containers] +> [Azure AI services containers][cog-svcs-containers] <!-- LINKS - external --> [free-azure-account]: https://azure.microsoft.com/free |
ai-services | Identity Access Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-access-token.md | If the ISV learns that a client is using the LimitedAccessToken for non-approved ## Prerequisites * [cURL](https://curl.haxx.se/) installed (or another tool that can make HTTP requests).-* The ISV needs to have either an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or a [Azure AI services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource. +* The ISV needs to have either an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource. * The client needs to have an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource. ## Step 1: ISV obtains client's Face resource ID |
ai-services | Read Container Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md | Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set * Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.-* Use more [Azure AI services Containers](../cognitive-services-container-support.md) +* Use more [Azure AI services containers](../cognitive-services-container-support.md) |
ai-services | Azure Container Instance Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md | Title: Azure Container Instance recipe -description: Learn how to deploy Azure AI services Containers on Azure Container Instance +description: Learn how to deploy Azure AI services containers on Azure Container Instance The recipe works with any Azure AI services container. The Azure AI services res * Azure AI service resource **endpoint URL** - review your specific service's "How to install" for the container, to find where the endpoint URL is from within the Azure portal, and what a correct example of the URL looks like. The exact format can change from service to service. * Azure AI service resource **key** - the keys are on the **Keys** page for the Azure resource. You only need one of the two keys. The key is a string of 32 alpha-numeric characters. -* A single Azure AI services Container on your local host (your computer). Make sure you can: +* A single Azure AI services container on your local host (your computer). Make sure you can: * Pull down the image with a `docker pull` command. * Run the local container successfully with all required configuration settings with a `docker run` command. * Call the container's endpoint, getting a response of HTTP 2xx and a JSON response back. |
ai-services | Azure Kubernetes Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-kubernetes-recipe.md | This procedure requires several tools that must be installed and run locally. Do ## Running the sample -This procedure loads and runs the Azure AI services Container sample for language detection. The sample has two containers, one for the client application and one for the Azure AI services container. We'll push both of these images to the Azure Container Registry. Once they are on your own registry, create an Azure Kubernetes Service to access these images and run the containers. When the containers are running, use the **kubectl** CLI to watch the containers performance. Access the client application with an HTTP request and see the results. +This procedure loads and runs the Azure AI services container sample for language detection. The sample has two containers, one for the client application and one for the Azure AI services container. We'll push both of these images to the Azure Container Registry. Once they are on your own registry, create an Azure Kubernetes Service to access these images and run the containers. When the containers are running, use the **kubectl** CLI to watch the containers performance. Access the client application with an HTTP request and see the results. ![A diagram showing the conceptual idea of running a container on Kubernetes](media/container-instance-sample.png) az group delete --name cogserv-container-rg ## Next steps -[Azure AI services Containers](../cognitive-services-container-support.md) +[Azure AI services containers](../cognitive-services-container-support.md) |
ai-services | Container Reuse Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md | -Use these container recipes to create Azure AI services Containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started. +Use these container recipes to create Azure AI services containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started. Once you have this new layer of container (with settings), and you have tested it locally, you can store the container in a container registry. When the container starts, it will only need those settings that are not currently stored in the container. The private registry container provides configuration space for you to pass those settings in. |
ai-services | Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md | See the following documentation for steps on downloading and configuring the con * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md#run-the-container-disconnected-from-the-internet) * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md#run-the-container-disconnected-from-the-internet) * [Language Detection](../language-service/language-detection/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)- ++## Environment variable names in Kubernetes deployments +Some Azure AI Containers, for example Translator, require users to pass environmental variable names that include colons (`:`) when running the container. This will work fine when using Docker, but Kubernetes does not accept colons in environmental variable names. +To resolve this, you can replace colons with double underscore characters (`__`) when deploying to Kubernetes. See the following example of an acceptable format for environment variable names: ++```Kubernetes + env: + - name: Mounts__License + value: "/license" + - name: Mounts__Output + value: "/output" +``` ++This example replaces the default format for the `Mounts:License` and `Mounts:Output` environment variable names in the docker run command. ## Container image and license updates |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/language-support.md | Title: Language support - Content Moderator API -description: This is a list of natural languages that the Azure AI services Content Moderator API supports. +description: This is a list of natural languages that the Content Moderator API supports. |
ai-services | Samples Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-dotnet.md | Title: Code samples - Content Moderator, .NET -description: Learn how to use Azure AI services Content Moderator in your .NET applications through the SDK. +description: Learn how to use Content Moderator in your .NET applications through the SDK. |
ai-services | Samples Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-rest.md | Title: Code samples - Content Moderator, C# -description: Use Azure AI services Content Moderator feature based samples in your applications through REST API calls. +description: Use Content Moderator feature based samples in your applications through REST API calls. |
ai-services | Quickstart Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md | |
ai-services | Quickstart Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md | |
ai-services | Limits And Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/limits-and-quotas.md | -There are two tiers of keys for the Custom Vision service. You can sign up for a F0 (free) or S0 (standard) subscription through the Azure portal. See the corresponding [Azure AI services Pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for details on pricing and transactions. +There are two tiers of keys for the Custom Vision service. You can sign up for a F0 (free) or S0 (standard) subscription through the Azure portal. See the corresponding [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for details on pricing and transactions. The number of training images per project and tags per project are expected to increase over time for S0 projects. |
ai-services | Concept Business Card | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-business-card.md | See how data, including name, job title, address, email, and company name, is ex * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md | Document Intelligence v2.1 supports the following tools: Extract data from your specific or unique documents using custom models. You need the following resources: * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).-* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal."::: For a detailed walkthrough to create your first custom extraction model, see [ho Extract data from your specific or unique documents using custom models. You need the following resources: * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).-* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal."::: |
ai-services | Concept General Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md | You need the following resources: * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Id Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-id-document.md | Extract data, including name, birth date, and expiration date, from ID documents * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md | See how data, including customer information, vendor details, and line items, is * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md | See how data, including text, tables, table headers, selection marks, and struct * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md | Try extracting text from forms and documents using the Document Intelligence Stu * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept Receipt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-receipt.md | See how Document Intelligence extracts data, including time and date of transact * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Concept W2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-w2.md | Try extracting data from W-2 forms using the Document Intelligence Studio. You n * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal."::: |
ai-services | Disconnected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md | Title: Use Document Intelligence containers in disconnected environments -description: Learn how to run Azure AI services Docker containers disconnected from the internet. +description: Learn how to run Cognitive Services Docker containers disconnected from the internet. |
ai-services | Install Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md | You also need the following to use Document Intelligence containers: |**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. | :::moniker range="doc-intel-2.1.0"-You also need a **Azure AI Vision API resource to process business cards, ID documents, or Receipts**. +You also need an **Azure AI Vision API resource to process business cards, ID documents, or Receipts**. * You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../ai-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image). You also need a **Azure AI Vision API resource to process business cards, ID doc ## Request approval to run container -Complete and submit the [**Azure AI services Application for Gated Services**](https://aka.ms/csgate) to request access to the container. +Complete and submit the [**Azure AI services application for Gated Services**](https://aka.ms/csgate) to request access to the container. [!INCLUDE [Request access to public preview](../../../../includes/cognitive-services-containers-request-access.md)] |
ai-services | Compose Custom Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/compose-custom-models.md | Try extracting data from custom forms using our Sample Labeling tool. You need t * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/) -* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. +* An [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | monikerRange: '<=doc-intel-3.0.0' > [!NOTE] > Form Recognizer is now **Azure AI Document Intelligence**! >-> As of July 2023, Azure AI services encompass all of what were previously known as Azure Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names "Cognitive Services" and "Azure Applied AI" continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. +> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. ::: moniker range="doc-intel-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)] |
ai-services | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview.md | Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.ide 1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). -1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal. +1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal. 1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.i 1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). -1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal. +1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal. 1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azur 1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). -1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal. +1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal. 1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-ide 1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). -1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal. +1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal. 1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. |
ai-services | Tutorial Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-logic-apps.md | Now that you have the Logic App connector resource set up and configured, let's :::image type="content" source="media/logic-apps-tutorial/one-drive-trigger-setup.png" alt-text="Screenshot of the OneDrive trigger setup."::: -1. A new node is added to the Logic App designer view. Search for "Form Recognizer (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze Document for Prebuilt or Custom models (v3.0 API)** from the list. +1. A new node is added to the Logic App designer view. Search for "Azure AI Document Intelligence (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze Document for Prebuilt or Custom models (v3.0 API)** from the list. :::image type="content" source="media/logic-apps-tutorial/analyze-prebuilt-document-action.png" alt-text="Screenshot of the Analyze Document for Prebuilt or Custom models (v3.0 API) selection button."::: Now that you have the Logic App connector resource set up and configured, let's 4. Next, we're going to add a new step to the workflow. Select the **➕ New step** button underneath the newly created OneDrive node. -1. A new node is added to the Logic App designer view. Search for "Form Recognizer (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze invoice** from the list. +1. A new node is added to the Logic App designer view. Search for "Azure AI Document Intelligence (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze invoice** from the list. :::image type="content" source="media/logic-apps-tutorial/analyze-invoice-v-2.png" alt-text="Screenshot of Analyze Invoice action."::: -1. Now, you see a window where to create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio: * Enter a **Connection name**. It should be something easy to remember.- * Enter the Form Recognizer resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Form Recognizer resource and copy them again. When you're done, select **Create**. +1. Now, you see a window where to create your connection. Specifically, you're going to connect your Azure AI Document Intelligence resource to the Logic Apps Designer Studio: * Enter a **Connection name**. It should be something easy to remember.- * Enter the Azure AI Document Intelligence resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Azure AI Document Intelligence resource and copy them again. When you're done, select **Create**. :::image type="content" source="media/logic-apps-tutorial/create-logic-app-connector.png" alt-text="Screenshot of the logic app connector dialog window."::: |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md | Document Intelligence service is updated on an ongoing basis. Bookmark this page > [!NOTE] > Form Recognizer is now Azure AI Document Intelligence! >-> As of July 2023, Azure AI services encompass all of what were previously known as Azure Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names "Cognitive Services" and "Azure Applied AI" continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. +> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. ## May 2023 This release introduces the Document Intelligence 2.0. In the next sections, you * Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. ::: moniker-end- |
ai-services | Security How To Update Role Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/security-how-to-update-role-assignment.md | A security bug has been discovered with Immersive Reader Azure Active Directory ## Background -A security bug was discovered that relates to Azure AD authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Azure AD authentication, it is necessary to grant permissions for the Azure AD application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Azure AI services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role. +A security bug was discovered that relates to Azure AD authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Azure AD authentication, it is necessary to grant permissions for the Azure AD application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role. -During a security audit, it was discovered that this `Azure AI services User` role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Azure AD access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill. +During a security audit, it was discovered that this Cognitive Services User role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Azure AD access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill. 
In practice however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Azure AD access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Azure AD access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Azure AD to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Azure AD access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that attacker could compromise that process and change the audience. The real concern comes when or if any customer were to acquire tokens from Azure AD directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this. -To mitigate the concerns about any possibility of using the Azure AD access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Azure AI services platform like `Azure AI services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs. +To mitigate the concerns about any possibility of using the Azure AD access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Azure AI services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs. -We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Azure AI services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions. +We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions. This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack. You can rotate the subscription keys on the [Azure portal](https://portal.azure. Write-Host "New role assignment created successfully" } - $oldRoleName = "Azure AI services User" + $oldRoleName = "Cognitive Services User" $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv if (-not $oldRoleExists) { Write-Host "Old role assignment for '$oldRoleName' role does not exist on resource" |
ai-services | Tutorial Ios Picture Immersive Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md | -The [Azure AI Vision Azure AI services Read API](../../ai-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream. +The [Azure AI Vision Read API](../../ai-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream. In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios). |
ai-services | Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/developer-guide.md | It additionally enables you to use the following features, without creating any * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api) * [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api) -As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) for additional information. +As you use this API in your application, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2239169) for additional information. ### Question answering APIs |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/role-based-access-control.md | These custom roles only apply to Language resources. > * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in Language studio portal. -### Azure AI Language reader +### Cognitive Services Language Reader A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results. A user that should only be validating and reviewing the Language apps, typically * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) All the Batch Testing Web APIs *[Language Runtime CLU APIs](/rest/api/language/2023-04-01/conversation-analysis-runtime)- *[Language Runtime Text Analysis APIs](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) + *[Language Runtime Text Analysis APIs](https://go.microsoft.com/fwlink/?linkid=2239169) :::column-end::: :::row-end::: -### Azure AI Language writer +### Cognitive Services Language Writer A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploying this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned. A user that is responsible for building and modifying an application, as a colla :::row-end::: :::row::: :::column span="":::- * All functionalities under Azure AI Language Reader. + * All functionalities under Cognitive Services Language Reader. * Ability to: * Train * Write These users are the gatekeepers for the Language applications in production envi :::row-end::: :::row::: :::column span="":::- * All functionalities under Azure AI Language Writer + * All functionalities under Cognitive Services Language Writer * Deploy * Delete :::column-end::: |
ai-services | Tag Utterances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/tag-utterances.md | Your Language resource must have identity management, to enable it using [Langua -After enabling managed identity, assign the role `Azure AI services User` to your Azure OpenAI resource using the managed identity of your Language resource. +After enabling managed identity, assign the role `Cognitive Services User` to your Azure OpenAI resource using the managed identity of your Language resource. 1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure OpenAI resource. 2. Select the Access Control (IAM) tab on the left. 3. Select Add > Add role assignment. 4. Select "Job function roles" and click Next.- 5. Select `Azure AI services User` from the list of roles and click Next. + 5. Select `Cognitive Services User` from the list of roles and click Next. 6. Select Assign access to "Managed identity" and select "Select members". 7. Under "Managed identity" select "Language". 8. Search for your resource and select it. Then select the Select button below and next to complete the process. |
ai-services | Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/limits.md | File names may not include the following characters: > [!NOTE] > Question answering currently has no limits on the number of sources that can be added. Throughput is currently capped at 10 text records per second for both management APIs and prediction APIs.+> When using the F0 tier, upload is limited to 3 files. ### Maximum number of deep-links from URL |
ai-services | Document Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/document-summarization.md | The AI models used by the API are provided by the service, you just have to send > [!TIP] > If you want to start using these features, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code. -The document summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document. +The extractive summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document. -Document summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences. +Extractive summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences. There is another feature in Azure AI Language, [key phrases extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following: * Key phrase extraction returns phrases while extractive summarization returns sentences. Using the above example, the API might return the following summarized sentences You can use document extractive summarization to get summaries of articles, papers, or documents. To see an example, see the [quickstart article](../quickstart.md). -You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20. +You can use the `sentenceCount` parameter to guide how many sentences will be returned, with `3` being the default. The range is from 1 to 20. You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default. |
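The summarization entry above describes the `sentenceCount` and `sortBy` parameters only in prose. The following minimal Python sketch shows one way they could be passed to the Language analyze-text jobs endpoint; the endpoint, key, and document text are placeholders, and the request shape should be checked against the current REST reference before use.

```python
import requests

# Assumptions: replace the endpoint, key, and text with your own values.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-language-key>"

body = {
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "<long document text>"}]
    },
    "tasks": [
        {
            "kind": "ExtractiveSummarization",
            "parameters": {"sentenceCount": 3, "sortBy": "Rank"},
        }
    ],
}

# Submit an asynchronous analyze-text job; the operation-location header points to the results.
response = requests.post(
    f"{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-01",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json=body,
)
print(response.status_code, response.headers.get("operation-location"))
```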
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md | -Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md). +Note that though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization will accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario. ++Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md). # [Document summarization](#tab/document-summarization) |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md | -Azure AI services enable you to build applications that see, hear, articulate, and understand users. Our burgeoning language support capabilities enable users to communicate with your application in natural ways and help facilitate global outreach. Use the links in the tables to view language support and availability by service. +Azure AI services enable you to build applications that see, hear, articulate, and understand users. Our language support capabilities enable users to communicate with your applications in natural ways and empower global outreach. Use the links in the tables to view language support and availability by service. ## Language supported services |
ai-services | Manage Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/manage-resources.md | This article provides instructions on how to recover an Azure AI services resour * If the deleted resource used customer-managed keys with Azure Key Vault and the key vault has also been deleted, then you must restore the key vault before you restore the Azure AI services resource. For more information, see [Azure Key Vault recovery management](../key-vault/general/key-vault-recovery.md). * If the deleted resource used a customer-managed storage and storage account has also been deleted, you must restore the storage account before you restore the Azure AI services resource. For instructions, see [Recover a deleted storage account](../storage/common/storage-account-recover.md). -Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Azure AI services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). +Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Cognitive Services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). ## Recover a deleted resource |
ai-services | Enable Anomaly Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md | Select the '+' button and choose the hook that you created, fill in other fields This section will share the practice of using an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and have sufficient permission to get parameters like account name and password. -**Step 1.** Assign your account as the 'Cognitive Service Metrics Advisor Administrator' role +**Step 1.** Assign your account as the 'Cognitive Services Metrics Advisor Administrator' role - A user with the subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab. - Select 'Add role assignments'. |
ai-services | Multi Service Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md | keywords: Azure AI services, cognitive + Last updated 7/18/2023 |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | description: Learn about the different model capabilities that are available wit Previously updated : 07/25/2023 Last updated : 07/27/2023 GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |-| `gpt-35-turbo` (0613) | Canada East, East US, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 | -| `gpt-35-turbo-16k` (0613) | Canada East, East US, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 | +| `gpt-35-turbo` (0613) | Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 | +| `gpt-35-turbo-16k` (0613) | Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 | <sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior. |
ai-services | Switching Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md | from azure.identity import DefaultAzureCredential credential = DefaultAzureCredential() token = credential.get_token("https://cognitiveservices.azure.com/.default") -openai.api_type = "azuread" +openai.api_type = "azure_ad" openai.api_key = token.token openai.api_base = "https://example-endpoint.openai.azure.com" openai.api_version = "2023-05-15" # subject to change |
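The corrected `openai.api_type = "azure_ad"` setting in the entry above covers only client configuration. Below is a hedged Python sketch that repeats that configuration and then issues a chat completion with the pre-1.0 `openai` package; the endpoint and the `gpt-35-turbo` deployment name are placeholders. Azure AD access tokens expire, so long-running applications should refresh the token and reassign `openai.api_key` periodically.

```python
import openai
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com"  # placeholder endpoint
openai.api_version = "2023-05-15"

# "gpt-35-turbo" is a placeholder for your own deployment name.
response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["choices"][0]["message"]["content"])
```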
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | keywords: ### New Regions -- Azure OpenAI is now also available in the Canada East, Japan East, and North Central US regions. Check the [models page](concepts/models.md), for the latest information on model availability in each region. +- Azure OpenAI is now also available in the Canada East, East US 2, Japan East, and North Central US regions. Check the [models page](concepts/models.md), for the latest information on model availability in each region. ## June 2023 |
ai-services | Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md | You can use the ServiceTag `CognitiveServicesMangement` to restrict inbound acce 2. Run the following command in the PowerShell window at the bottom of the page: ```ps-Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -WebAppName "<app service name>" -Name "Azure AI services Tag" -Priority 100 -Action Allow -ServiceTag "CognitiveServicesManagement" +Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -WebAppName "<app service name>" -Name "Cognitive Services Tag" -Priority 100 -Action Allow -ServiceTag "CognitiveServicesManagement" ``` 3. Verify the added access rule is present in the **Access Restrictions** section of the **Networking** tab: |
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | The transcription result can be stored in an Azure container. If you don't speci You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic. -The [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) is not supported for storing transcription results from a Speech resource. If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging). You can secure access to BYOS-associated Storage account exactly as described in the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) guide, except that the BYOS Speech resource would need **Storage Blob Data Contributor** role assignment. The results of batch transcription performed by the BYOS Speech resource will be automatically stored in the **TranscriptionData** folder of the **customspeech-artifacts** blob container. +The [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) is not supported for storing transcription results from a Speech resource. If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos). You can secure access to BYOS-associated Storage account exactly as described in the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) guide, except that the BYOS Speech resource would need **Storage Blob Data Contributor** role assignment. The results of batch transcription performed by the BYOS Speech resource will be automatically stored in the **TranscriptionData** folder of the **customspeech-artifacts** blob container. ## Next steps |
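The batch transcription entry above recommends omitting `destinationContainerUrl` for BYOS-enabled Speech resources. The following Python sketch illustrates such a creation request against the v3.1 REST API; the region, key, and audio URL are placeholders, and the exact payload should be verified against the Speech to text REST reference.

```python
import requests

# Assumptions: replace the region, key, and audio URL with your own values.
ENDPOINT = "https://eastus.api.cognitive.microsoft.com"
KEY = "<your-speech-key>"

body = {
    "displayName": "byos-batch-example",
    "locale": "en-US",
    "contentUrls": ["https://<your-audio-storage>/sample1.wav"],
    # destinationContainerUrl is intentionally omitted: with BYOS the results are
    # written to the TranscriptionData folder of the customspeech-artifacts container.
}

response = requests.post(
    f"{ENDPOINT}/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json=body,
)
print(response.json()["self"])  # poll this URL to track the transcription job
```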
ai-services | Bring Your Own Storage Speech Resource Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md | + + Title: Use Bring your own storage (BYOS) Speech resource for Speech to text ++description: Learn how to use Bring your own storage (BYOS) Speech resource with Speech to text. ++++++ Last updated : 03/28/2023++++# Use the Bring your own storage (BYOS) Speech resource for Speech to text ++Bring your own storage (BYOS) can be used in the following Speech to text scenarios: ++- Batch transcription +- Real-time transcription with audio and transcription result logging enabled +- Custom Speech ++One Speech resource to Storage account pairing can be used for all scenarios simultaneously. ++This article explains in depth how to use a BYOS-enabled Speech resource in all Speech to text scenarios. The article implies, that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md). ++## Data storage ++When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in Custom Speech scenario, the Service keeps certain information about the custom endpoints, like which models they use. ++BYOS-associated Storage account stores the following data: ++> [!NOTE] +> *Optional* in this section means that it's possible, but not required to store the particular artifacts in the BYOS-associated Storage account. If needed, they can be stored elsewhere. ++**Batch transcription** +- Source audio (optional) +- Batch transcription results ++**Real-time transcription with audio and transcription result logging enabled** +- Audio and transcription result logs ++**Custom Speech** +- Source files of datasets for model training and testing (optional) +- All data and metadata related to Custom models hosted by the BYOS-enabled Speech resource ++## Batch transcription ++Batch transcription is used to transcribe a large amount of audio data in storage. If you're unfamiliar with Batch transcription, see [this article](batch-transcription.md) first. ++Perform these steps to execute Batch transcription with BYOS-enabled Speech resource: ++1. Start Batch transcription as described in [this guide](batch-transcription-create.md). ++ > [!IMPORTANT] + > Don't use `destinationContainerUrl` parameter in your transcription request. If you use BYOS, the transcription results are stored in the BYOS-associated Storage account automatically. + > + > If you use `destinationContainerUrl` parameter, it will work, but provide significantly less security for your data, because of ad hoc SAS usage. See details [here](batch-transcription-create.md#destination-container-url). ++1. When transcription is complete, get transcription results according to [this guide](batch-transcription-get.md) or directly in the `TranscriptionData` folder of `customspeech-artifacts` Blob container in the BYOS-associated Storage account. ++### Get Batch transcription results via REST API ++[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. 
However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Transcription Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. It allows using the same REST API based code for both "regular" and BYOS-enabled Speech resources. ++For maximum security use the `sasValidityInSeconds` parameter with the value set to `0` in the requests, that return data file URLs, like [Get Transcription Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles) request. Here's an example request URL: ++```https +https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/3b24ca19-2eb1-4a2a-b964-35d89eca486b/files?sasValidityInSeconds=0 +``` ++Such a request returns direct Storage Account URLs to data files (without SAS or other additions). For example: ++```json +"links": { + "contentUrl": "https://<BYOS_storage_account_name>.blob.core.windows.net/customspeech-artifacts/TranscriptionData/3b24ca19-2eb1-4a2a-b964-35d89eca486b_0_0.json" + } +``` ++URL of this format ensures that only Azure Active Directory identities (users, service principals, managed identities) with sufficient access rights (like *Storage Blob Data Reader* role) can access the data from the URL. ++> [!WARNING] +> If `sasValidityInSeconds` parameter is omitted in [Get Transcription Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles) request or similar ones, then a [User delegation SAS](../../storage/common/storage-sas-overview.md) with the validity of 30 days will be generated for each data file URL returned. This SAS is signed by the system assigned managed identity of your BYOS-enabled Speech resource. Because of it, the SAS allows access to the data, even if storage account key access is disabled. See details [here](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens). ++## Real-time transcription with audio and transcription result logging enabled ++You can enable logging for both audio input and recognized speech when using speech to text or speech translation. See the complete description [in this article](logging-audio-transcription.md). ++If you use BYOS, then you find the logs in `customspeech-audiologs` Blob container in the BYOS-associated Storage account. ++> [!WARNING] +> Logging data is kept for 30 days. After this period the logs are automatically deleted. This is valid for BYOS-enabled Speech resources as well. If you want to keep the logs longer, copy the correspondent files and folders from `customspeech-audiologs` Blob container directly or use REST API. ++### Get real-time transcription logs via REST API ++[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Base Model Logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. It allows using the same REST API based code for both "regular" and BYOS-enabled Speech resources. 
++For maximum security use the `sasValidityInSeconds` parameter with the value set to `0` in the requests, that return data file URLs, like [Get Base Model Logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) request. Here's an example request URL: ++```https +https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/base/en-US/files/logs?sasValidityInSeconds=0 +``` ++Such a request returns direct Storage Account URLs to data files (without SAS or other additions). For example: ++```json +"links": { + "contentUrl": "https://<BYOS_storage_account_name>.blob.core.windows.net/customspeech-audiologs/be172190e1334399852185c0addee9d6/en-US/2023-07-06/152339_fcf52189-0d3f-4415-becd-5f639fd7fd6b.v2.json" + } +``` ++URL of this format ensures that only Azure Active Directory identities (users, service principals, managed identities) with sufficient access rights (like *Storage Blob Data Reader* role) can access the data from the URL. ++> [!WARNING] +> If `sasValidityInSeconds` parameter is omitted in [Get Base Model Logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) request or similar ones, then a [User delegation SAS](../../storage/common/storage-sas-overview.md) with the validity of 30 days will be generated for each data file URL returned. This SAS is signed by the system assigned managed identity of your BYOS-enabled Speech resource. Because of it, the SAS allows access to the data, even if storage account key access is disabled. See details [here](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens). ++## Custom Speech ++With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom Speech overview](custom-speech-overview.md). ++There's nothing specific about how you use Custom Speech with BYOS-enabled Speech resource. The only difference is where all custom model related data, which Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of BYOS-associated Storage account: ++- `customspeech-models` - Location of Custom Speech models +- `customspeech-artifacts` - Location of all other Custom Speech related data + - Custom Speech data is located in all subfolders of the container, except for `TranscriptionData`. This subfolder contains Batch transcription results. ++> [!CAUTION] +> Speech service relies on pre-defined Blob container paths and file names for Custom Speech module to correctly function. Don't move, rename or in any way alter the contents of `customspeech-models` container and Custom Speech related folders of `customspeech-artifacts` container. +> +> Failure to do so very likely will result in hard to debug errors and may lead to the necessity of custom model retraining. +> +> Use standard tools, like REST API and Speech Studio to interact with the Custom Speech related data. See detail in [Custom Speech section](custom-speech-overview.md). ++### Use of REST API with Custom Speech ++[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. 
However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. It allows using the same REST API based code for both "regular" and BYOS-enabled Speech resources. ++For maximum security use the `sasValidityInSeconds` parameter with the value set to `0` in the requests, that return data file URLs, like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) request. Here's an example request URL: ++```https +https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/8427b92a-cb50-4cda-bf04-964ea1b1781b/files?sasValidityInSeconds=0 +``` ++Such a request returns direct Storage Account URLs to data files (without SAS or other additions). For example: ++```json + "links": { + "contentUrl": "https://<BYOS_storage_account_name>.blob.core.windows.net/customspeech-artifacts/AcousticData/8427b92a-cb50-4cda-bf04-964ea1b1781b/4a61ddac-5b1c-4c21-b87d-22001b0f18ab.zip" + } +``` ++URL of this format ensures that only Azure Active Directory identities (users, service principals, managed identities) with sufficient access rights (like *Storage Blob Data Reader* role) can access the data from the URL. ++> [!WARNING] +> If `sasValidityInSeconds` parameter is omitted in [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) request or similar ones, then a [User delegation SAS](../../storage/common/storage-sas-overview.md) with the validity of 30 days will be generated for each data file URL returned. This SAS is signed by the system assigned managed identity of your BYOS-enabled Speech resource. Because of it, the SAS allows access to the data, even if storage account key access is disabled. See details [here](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens). ++## Next steps ++- [Set up the Bring your own storage (BYOS) Speech resource](bring-your-own-storage-speech-resource.md) +- [Batch transcription overview](batch-transcription.md) +- [How to log audio and transcriptions for speech recognition](logging-audio-transcription.md) +- [Custom Speech overview](custom-speech-overview.md) |
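To illustrate the REST API pattern described in this article, here's a minimal sketch with placeholder region, IDs, key, and account names: list the result files of a transcription with `sasValidityInSeconds=0`, then download a returned blob with an Azure AD identity that holds at least the *Storage Blob Data Reader* role on the BYOS-associated Storage account.

```bash
# Hypothetical region, transcription ID, key, and account names - adjust to your environment.
curl -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/<transcription-id>/files?sasValidityInSeconds=0"

# The returned contentUrl values are plain blob URLs; download them with Azure AD auth.
az storage blob download --auth-mode login \
  --account-name <byos-storage-account> \
  --container-name customspeech-artifacts \
  --name "TranscriptionData/<transcription-id>_0_0.json" \
  --file result.json
```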
ai-services | Bring Your Own Storage Speech Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md | + + Title: Set up the Bring your own storage (BYOS) Speech resource ++description: Learn how to set up a Bring your own storage (BYOS) Speech resource. ++++++ Last updated : 03/28/2023++++# Set up the Bring your own storage (BYOS) Speech resource ++Bring your own storage (BYOS) is an Azure AI technology for customers who have high requirements for data security and privacy. The core of the technology is the ability to associate an Azure Storage account that the user owns and fully controls with the Speech resource. The Speech resource then uses this storage account for storing different artifacts related to the user data processing, instead of storing the same artifacts within the Speech service premises as it is done in the regular case. This approach allows using the full set of security features of the Azure Storage account, including encrypting the data with customer-managed keys, using private endpoints to access the data, and so on. ++In BYOS scenarios, all traffic between the Speech resource and the Storage account is maintained using the [Azure global network](https://azure.microsoft.com/explore/global-infrastructure/global-network); in other words, all communication is performed over a private network, completely bypassing the public internet. In the BYOS scenario, the Speech resource uses the [Azure Trusted services](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) mechanism to access the Storage account, relying on [System-assigned managed identities](../../active-directory/managed-identities-azure-resources/overview.md) as a method of authentication, and [Role-based access control (RBAC)](../../role-based-access-control/overview.md) as a method of authorization. ++There's one exception: if you use Text to speech, and your Speech resource and the associated Storage account are located in different Azure regions, then the public internet is used for the operations involving [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas). See details in [this section](#configure-storage-account-security-settings-for-text-to-speech). ++BYOS can be used with several Azure AI services. For Speech, it can be used in the following scenarios: ++**Speech to text** ++- [Batch transcription](batch-transcription.md) +- Real-time transcription with [audio and transcription result logging](logging-audio-transcription.md) enabled +- [Custom Speech](custom-speech-overview.md) (Custom models for Speech recognition) ++**Text to speech** ++- [Audio Content Creation](how-to-audio-content-creation.md) +- [Custom Neural Voice](custom-neural-voice.md) (Custom models for Speech synthesizing) +++One Speech resource and Storage account combination can be used for all four scenarios simultaneously. ++This article describes how to create and maintain a BYOS-enabled Speech resource and applies to all mentioned scenarios. See the scenario-specific information in the [corresponding articles](#next-steps). ++## BYOS-enabled Speech resource: Basic rules ++Consider the following rules when planning a BYOS-enabled Speech resource configuration: ++- A Speech resource can be BYOS-enabled only during creation. An existing Speech resource can't be converted to BYOS-enabled. A BYOS-enabled Speech resource can't be converted to a "conventional" (non-BYOS) one.
+- Storage account association with the Speech resource is declared during the Speech resource creation. It can't be changed later. That is, you can't change which Storage account is associated with an existing BYOS-enabled Speech resource. To use another Storage account, you have to create another BYOS-enabled Speech resource. +- When creating a BYOS-enabled Speech resource, you can use an existing Storage account or create one automatically during Speech resource provisioning (the latter is valid only when using the Azure portal). +- One Storage account can be associated with many Speech resources. We recommend using one Storage account per Speech resource. +- The Storage account and the related BYOS-enabled Speech resource can be located in either the same or different Azure regions. We recommend using the same region to minimize latency. For the same reason, we don't recommend selecting regions that are too remote from each other for a multi-region configuration. (For example, we don't recommend placing the Storage account in East US and the associated Speech resource in West Europe.) ++## Create and configure BYOS-enabled Speech resource ++This section describes how to create a BYOS-enabled Speech resource. ++### Request access to BYOS for your Azure subscriptions ++You need to request access to BYOS functionality for each of the Azure subscriptions you plan to use. To request access, fill out and submit the [Cognitive Services & Applied AI Customer Managed Keys and Bring Your Own Storage access request form](https://aka.ms/cogsvc-cmk). Wait for the request to be approved. ++### Plan and prepare your Storage account ++If you use the Azure portal to create a BYOS-enabled Speech resource, an associated Storage account can be created automatically. For all other provisioning methods (Azure CLI, PowerShell, REST API request), you need to use an existing Storage account. ++If you want to use an existing Storage account and don't intend to use the Azure portal method for BYOS-enabled Speech resource provisioning, note the following regarding this Storage account: ++- You need the full Azure resource ID of the Storage account. To obtain it, navigate to the Storage account in the Azure portal, then select the *Endpoints* menu from the *Settings* group. Copy and store the value of the *Storage account resource ID* field. +- To fully configure BYOS, you need at least the *Resource Owner* right for the selected Storage account. ++> [!NOTE] +> The Storage account *Resource Owner* right or higher is not required to use a BYOS-enabled Speech resource. However, it is required during the one-time initial configuration of the Storage account for usage in the BYOS scenario. See details in [this section](#configure-byos-associated-storage-account). ++### Create BYOS-enabled Speech resource ++Make sure your Azure subscription is enabled for using BYOS before attempting to create the Speech resource. See [this section](#request-access-to-byos-for-your-azure-subscriptions). ++There are two ways of creating a BYOS-enabled Speech resource: ++- With the Azure portal. +- With the Cognitive Services API (PowerShell, Azure CLI, REST request). ++The Azure portal option has tighter requirements: ++- The account used for BYOS-enabled Speech resource provisioning should have the *Subscription Owner* right. +- The BYOS-associated Storage account can only be located in the same region as the Speech resource. ++If any of these extra requirements don't fit your scenario, use the Cognitive Services API option (PowerShell, Azure CLI, REST request).
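If you prefer the command line for the preparation step above, the full Storage account resource ID can also be retrieved with the Azure CLI; a minimal sketch with placeholder names:

```azurecli
az storage account show \
  --name <storage_account_name> \
  --resource-group <resource_group_name> \
  --query id --output tsv
```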
++To use any of the methods above, you need an Azure account that is assigned a role allowing it to create resources in your subscription, like *Subscription Contributor*. ++# [Azure portal](#tab/portal) ++> [!NOTE] +> If you use the Azure portal to create a BYOS-enabled Speech resource, we recommend selecting the option of creating a new Storage account. ++To create a BYOS-enabled Speech resource with the Azure portal, you need to access some portal preview features. Perform the following steps: ++1. Navigate to the *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&feature.canmodifystamps=true&Microsoft_Azure_ProjectOxford=stage1&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices). +1. Note the *Storage account* section at the bottom of the page. +1. Select *Yes* for the *Bring your own storage* option. +1. Configure the required Storage account settings and proceed with the Speech resource creation. ++# [PowerShell](#tab/powershell) ++To create a BYOS-enabled Speech resource with PowerShell, we use the [New-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/new-azcognitiveservicesaccount) command. ++You can [install PowerShell locally](/powershell/azure/install-azure-powershell) or use [Azure Cloud Shell](../../cloud-shell/overview.md). ++If you use a local installation of PowerShell, connect to your Azure account using the `Connect-AzAccount` command before trying the following script. ++```azurepowershell +# Target subscription parameters +# REPLACE WITH YOUR CONFIGURATION VALUES +$azureSubscriptionId = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" +$azureResourceGroup = "myResourceGroup" +$azureSpeechResourceName = "myBYOSSpeechResource" +$azureStorageAccount = <Full Storage account Azure Resource ID in the format of "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>"> +$azureLocation = "eastus" ++# Select the right subscription +Set-AzContext -SubscriptionId $azureSubscriptionId ++# Create BYOS-enabled Speech resource +New-AzCognitiveServicesAccount -ResourceGroupName $azureResourceGroup -name $azureSpeechResourceName -Type SpeechServices -SkuName S0 -Location $azureLocation -AssignIdentity -Storage $azureStorageAccount +``` ++# [Azure CLI](#tab/azure-cli) ++To create a BYOS-enabled Speech resource with Azure CLI, we use the [az cognitiveservices account create](/cli/azure/cognitiveservices/account) command. ++You can [install Azure CLI locally](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](../../cloud-shell/overview.md). ++> [!NOTE] +> The following script doesn't use variables because variable usage differs depending on the platform where Azure CLI runs. See information on Azure CLI variable usage in [this article](/cli/azure/azure-cli-variables). ++If you use a local installation of Azure CLI, connect to your Azure account using the `az login` command before trying the following script.
++```azurecli +az account set --subscription "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" ++az cognitiveservices account create -n "myBYOSSpeechResource" -g "myResourceGroup" --assign-identity --kind SpeechServices --sku S0 -l eastus --yes --storage '[{"resourceId": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>"}]' +``` +> [!IMPORTANT] +> This script will work in Azure Cloud Shell Bash. If you want to use it in any other environment, pay special attention to the format of the `--storage` parameter value. See the following information. ++Different command shells have different rules for interpreting quotation marks in command line parameter values. For example, to run the same script from Windows Command Prompt, the `--storage` part of the command should be formatted like this: +```dos +--storage "[{""resourceId"": ""/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>""}]" +``` ++The general rule is that you need to pass this JSON string as the value of the `--storage` parameter: +```json +[{"resourceId": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>"}] +``` +# [REST](#tab/rest) ++To create a BYOS-enabled Speech resource with a REST request to the Cognitive Services API, we use the [Accounts - Create](/rest/api/cognitiveservices/accountmanagement/accounts/create) request. ++You need to have a means of authentication. The example in this section uses a [Microsoft Azure Active Directory token](/azure/active-directory/develop/access-tokens). ++This code snippet generates an Azure AD token using interactive browser sign-in. It requires the [Azure Identity client library](/dotnet/api/overview/azure/identity-readme): +```csharp +TokenRequestContext context = new Azure.Core.TokenRequestContext(new string[] { "https://management.azure.com/.default" }); +InteractiveBrowserCredential browserCredential = new InteractiveBrowserCredential(); +var aadToken = browserCredential.GetToken(context); +var token = aadToken.Token; +``` +Now execute the REST request: +```bash +curl -v -X PUT "https://management.azure.com/subscriptions/{AzureSubscriptionId}/resourceGroups/{myResourceGroup}/providers/Microsoft.CognitiveServices/accounts/{myBYOSSpeechResource}?api-version=2021-10-01" \ +-H "Content-Type: application/json" \ +-H "Authorization: Bearer {Value_of_token_variable}" \ +--data-ascii "{body}" +``` +Here's the body of the request: +```json +{ + "location": "East US", + "kind": "SpeechServices", + "sku": { ++ "name": "S0" + }, + "properties": { + "userOwnedStorage": [ + { + "resourceId": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>" + } + ] + }, + "identity": { + "type": "SystemAssigned" + } +} +``` +*** ++If you used the Azure portal to create a BYOS-enabled Speech resource, it's fully ready to use. If you used any other method, you need to perform the role assignment for the Speech resource managed identity within the scope of the associated Storage account. In all cases, you also need to review different Storage account settings related to data security. See [this section](#configure-byos-associated-storage-account).
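If you prefer to obtain the management-plane token for the REST option from the command line instead of the C# snippet, a minimal Azure CLI sketch:

```azurecli
# Prints an Azure AD access token for the Azure Resource Manager endpoint.
az account get-access-token \
  --resource https://management.azure.com \
  --query accessToken --output tsv
```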
++### (Optional) Verify Speech resource BYOS configuration ++You can always check whether any given Speech resource is BYOS-enabled, and what the associated Storage account is. You can do it either via the Azure portal or via the Cognitive Services API. ++# [Azure portal](#tab/portal) ++To check the BYOS configuration of a Speech resource with the Azure portal, you need to access some portal preview features. Perform the following steps: ++1. Navigate to the *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&feature.canmodifystamps=true&Microsoft_Azure_ProjectOxford=stage1&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices). +1. Close the *Create Speech* screen by pressing *X* in the upper-right corner. +1. If asked, agree to discard unsaved changes. +1. Navigate to the Speech resource you want to check. +1. Select the *Storage* menu in the *Resource Management* group. +1. Check that: + 1. The *Attached storage* field contains the Azure resource ID of the BYOS-associated Storage account. + 1. *Identity type* has *System Assigned* selected. ++If the *Storage* menu item is missing in the *Resource Management* group, the selected Speech resource isn't BYOS-enabled. ++# [PowerShell](#tab/powershell) ++Use the [Get-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccount) command: ++```azurepowershell +Get-AzCognitiveServicesAccount -ResourceGroupName "myResourceGroup" -name "myBYOSSpeechResource" +``` +In the command output, look for the `userOwnedStorage` parameter group. If the Speech resource is BYOS-enabled, the group has the Azure resource ID of the associated Storage account. If the `userOwnedStorage` group is empty or missing, the selected Speech resource isn't BYOS-enabled. ++# [Azure CLI](#tab/azure-cli) ++Use the [az cognitiveservices account show](/cli/azure/cognitiveservices/account) command: +```bash +az cognitiveservices account show -g "myResourceGroup" -n "myBYOSSpeechResource" +``` ++In the command output, look for the `userOwnedStorage` parameter group. If the Speech resource is BYOS-enabled, the group has the Azure resource ID of the associated Storage account. If the `userOwnedStorage` group is empty or missing, the selected Speech resource isn't BYOS-enabled. ++# [REST](#tab/rest) ++Use the [Accounts - Get](/rest/api/cognitiveservices/accountmanagement/accounts/get) request. In the request output, look for the `userOwnedStorage` parameter group. If the Speech resource is BYOS-enabled, the group has the Azure resource ID of the associated Storage account. If the `userOwnedStorage` group is empty or missing, the selected Speech resource isn't BYOS-enabled. ++*** ++## Configure BYOS-associated Storage account ++To achieve high security and privacy of your data, you need to properly configure the settings of the BYOS-associated Storage account. If you didn't use the Azure portal to create your BYOS-enabled Speech resource, you also need to perform a mandatory role assignment step. ++### Assign resource access role ++This step is **mandatory** if you didn't use the Azure portal to create your BYOS-enabled Speech resource. ++BYOS uses the Blob storage of a Storage account. Because of this, the BYOS-enabled Speech resource managed identity needs a *Storage Blob Data Contributor* role assignment within the scope of the BYOS-associated Storage account.
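As a scripted alternative to the portal steps that follow, a minimal Azure CLI sketch (placeholder names, bash syntax assumed) that grants the Speech resource's system-assigned managed identity the required role on the Storage account:

```azurecli
# Principal ID of the Speech resource's system-assigned managed identity
principalId=$(az cognitiveservices account show \
  --name <speech_resource_name> --resource-group <resource_group_name> \
  --query identity.principalId --output tsv)

# Full resource ID of the BYOS-associated Storage account
storageId=$(az storage account show \
  --name <storage_account_name> --resource-group <resource_group_name> \
  --query id --output tsv)

# Assign Storage Blob Data Contributor at the Storage account scope
az role assignment create \
  --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope $storageId
```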
++If you used the Azure portal to create your BYOS-enabled Speech resource, you may skip the rest of this subsection. Your role assignment is already done. Otherwise, follow these steps. ++> [!IMPORTANT] +> You need to be assigned the *Owner* role of the Storage account or higher scope (like Subscription) to perform the operation in the next steps. This is because only the *Owner* role can assign roles to others. See details [here](../../role-based-access-control/built-in-roles.md). ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. Select the *Access Control (IAM)* menu in the left pane. +1. Select *Add role assignment* in the *Grant access to this resource* tile. +1. Select *Storage Blob Data Contributor* under *Role* and then select *Next*. +1. Select *Managed identity* under *Members* > *Assign access to*. +1. Assign the managed identity of your Speech resource and then select *Review + assign*. +1. After confirming the settings, select *Review + assign*. ++### Configure Storage account security settings for Speech to text ++This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account only for Speech to text scenarios. If you use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech, use [this section](#configure-storage-account-security-settings-for-text-to-speech). ++For Speech to text, BYOS uses the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) to communicate with the Storage account. The mechanism allows setting very restricted Storage account data access rules. ++If you perform all actions in this section, your Storage account will be in the following configuration: +- Access to all external network traffic is prohibited. +- Access to the Storage account using the Storage account key is prohibited. +- Access to the Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited, except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens). +- Access by the BYOS-enabled Speech resource is allowed using the resource's [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md). ++So, in effect, your Storage account becomes completely "locked" and can only be accessed by your Speech resource, which will be able to: +- Write artifacts of your Speech data processing (see details in the [corresponding articles](#next-steps)), +- Read the files that were already present by the time the new configuration was applied. For example, source audio files for Batch transcription or dataset files for Custom model training and testing. ++You should consider this configuration as a baseline for the security of your data and customize it according to your needs. ++For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see also [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using the Storage account key, allow access to other Azure trusted services, etc.
++> [!NOTE] +> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the Storage account. Private endpoints for Speech secure the channels for Speech API requests, and can be used as an extra component in your solution. ++**Restrict access to the Storage account** ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Settings* group in the left pane, select *Configuration*. +1. Select *Disabled* for *Allow Blob public access*. +1. Select *Disabled* for *Allow storage account key access*. +1. Select *Save*. ++For more information, see [Prevent anonymous public read access to containers and blobs](../../storage/blobs/anonymous-read-access-prevent.md) and [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md). ++**Configure Azure Storage firewall** ++Having restricted access to the Storage account, you need to grant networking access to your Speech resource managed identity. Follow these steps to add access for the Speech resource. ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Security + networking* group in the left pane, select *Networking*. +1. In the *Firewalls and virtual networks* tab, select *Enabled from selected virtual networks and IP addresses*. +1. Deselect all check boxes. +1. Make sure *Microsoft network routing* is selected. +1. Under the *Resource instances* section, select *Microsoft.CognitiveServices/accounts* as the resource type and select your Speech resource as the instance name. +1. Select *Save*. ++ > [!NOTE] + > It may take up to 5 minutes for the network changes to propagate. ++### Configure Storage account security settings for Text to Speech ++This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech. If you use the BYOS-associated Storage account for Speech to text only, use [this section](#configure-storage-account-security-settings-for-speech-to-text). ++> [!NOTE] +> Text to speech requires more relaxed Storage account firewall settings, compared to Speech to text. If you use both Speech to text and Text to speech, and need maximally restricted Storage account security settings to protect your data, you may consider using different Storage accounts and the corresponding Speech resources for Speech to text and Text to speech tasks. ++If you perform all actions in this section, your Storage account will be in the following configuration: +- External network traffic is allowed. +- Access to the Storage account using the Storage account key is prohibited. +- Access to the Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited, except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens). +- Access by the BYOS-enabled Speech resource is allowed using the resource's [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas). ++These are the most restricted security settings possible for the Text to speech scenario.
You may further customize them according to your needs. ++**Restrict access to the Storage account** ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Settings* group in the left pane, select *Configuration*. +1. Select *Disabled* for *Allow Blob public access*. +1. Select *Disabled* for *Allow storage account key access*. +1. Select *Save*. ++For more information, see [Prevent anonymous public read access to containers and blobs](../../storage/blobs/anonymous-read-access-prevent.md) and [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md). ++**Configure Azure Storage firewall** ++Custom Neural Voice uses [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas) to read the data for Custom Neural Voice model training. It requires allowing external network traffic access to the Storage account. ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Security + networking* group in the left pane, select *Networking*. +1. In the *Firewalls and virtual networks* tab, select *Enabled from all networks*. +1. Select *Save*. ++## Configure BYOS-associated Storage account for use with Speech Studio ++Many [Speech Studio](https://speech.microsoft.com/) operations, like dataset upload or custom model training and testing, don't require any special configuration in the case of a BYOS-enabled Speech resource. ++However, if you need to read data stored within the BYOS-associated Storage account through the Speech Studio web interface, you need to configure additional settings of your BYOS-associated Storage account. For example, it's required to view the contents of a dataset. ++### Configure Cross-Origin Resource Sharing (CORS) ++Speech Studio needs permission to make requests to the Blob storage of the BYOS-associated Storage account. To grant such permission, you use [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Follow these steps. ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Settings* group in the left pane, select *Resource sharing (CORS)*. +1. Ensure that the *Blob storage* tab is selected. +1. Configure the following record: + - *Allowed origins*: `https://speech.microsoft.com` + - *Allowed methods*: `GET`, `OPTIONS` + - *Allowed headers*: `*` + - *Exposed headers*: `*` + - *Max age*: `1000` +1. Select *Save*. ++> [!WARNING] +> The *Allowed origins* field should contain the URL **without** a trailing slash. That is, it should be `https://speech.microsoft.com`, and not `https://speech.microsoft.com/`. Adding a trailing slash will result in Speech Studio not showing the details of datasets and model tests. ++### Configure Azure Storage firewall ++You need to allow access for the machine where you run the browser using Speech Studio. If your Storage account firewall settings allow public access from all networks, you may skip this subsection. Otherwise, follow these steps. ++1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. +1. Select the Storage account. +1. In the *Security + networking* group in the left pane, select *Networking*. +1.
In the *Firewall* section, enter either the IP address of the machine where you run the web browser or the IP subnet to which the IP address of the machine belongs. +1. Select *Save*. ++## Next steps ++- [Use the Bring your own storage (BYOS) Speech resource for Speech to text](bring-your-own-storage-speech-resource-speech-to-text.md) |
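For reference, the storage-hardening and CORS steps described above can also be scripted. This is a rough Azure CLI sketch for the Speech to text configuration, not a definitive implementation; all names and the tenant ID are placeholders, and you should verify each setting against your own security requirements:

```azurecli
# Disable public blob access and Shared Key authorization
az storage account update --name <storage_account_name> --resource-group <resource_group_name> \
  --allow-blob-public-access false --allow-shared-key-access false

# Deny network traffic by default, then allow the Speech resource as a trusted resource instance
az storage account update --name <storage_account_name> --resource-group <resource_group_name> \
  --default-action Deny
az storage account network-rule add --account-name <storage_account_name> \
  --resource-group <resource_group_name> \
  --resource-id $(az cognitiveservices account show --name <speech_resource_name> \
      --resource-group <resource_group_name> --query id --output tsv) \
  --tenant-id <tenant_id>

# CORS rule so Speech Studio can read dataset and test details from Blob storage
az storage cors add --services b --methods GET OPTIONS \
  --origins "https://speech.microsoft.com" --allowed-headers "*" --exposed-headers "*" \
  --max-age 1000 --account-name <storage_account_name> --auth-mode login
```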
ai-services | How To Configure Azure Ad Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md | To configure your Speech resource for Azure AD authentication, create a custom d [!INCLUDE [Custom Domain include](includes/how-to/custom-domain.md)] ### Assign roles-For Azure AD authentication with Speech resources, you need to assign either the *Azure AI Speech Contributor* or *Azure AI Speech User* role. +For Azure AD authentication with Speech resources, you need to assign either the *Cognitive Services Speech Contributor* or *Cognitive Services Speech User* role. You can assign roles to the user or application using the [Azure portal](../../role-based-access-control/role-assignments-portal.md) or [PowerShell](../../role-based-access-control/role-assignments-powershell.md). |
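For example, a minimal Azure CLI sketch (placeholder IDs and names) of assigning one of these roles to a user or application at the Speech resource scope:

```azurecli
az role assignment create \
  --assignee <user-or-app-object-id> \
  --role "Cognitive Services Speech User" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<speech-resource-name>
```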
ai-services | How To Custom Commands Send Activity To Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-send-activity-to-client.md | You complete the following tasks: ## Prerequisites > [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide uses Visual Studio 2019-> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). +> * An Azure AI Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). > * A previously [created Custom Commands app](quickstart-custom-commands-application.md) > * A Speech SDK enabled client app: [How-to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md) |
ai-services | How To Custom Commands Setup Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-speech-sdk.md | A Custom Commands application is required to complete this article. If you haven You'll also need: > [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide is based on Visual Studio 2019.-> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). +> * An Azure AI Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). > * [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development) ## Step 1: Publish Custom Commands application |
ai-services | How To Custom Commands Setup Web Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-web-endpoints.md | In this article, you'll learn how to set up web endpoints in a Custom Commands a > [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/)-> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). +> * An Azure AI Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal). > * A Custom Commands app (see [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md)) > * A Speech SDK enabled client app (see [Integrate with a client application using Speech SDK](how-to-custom-commands-setup-speech-sdk.md)) |
ai-services | How To Recognize Intents From Speech Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md | For example, if you say "Turn off the lights", pause, and then say "Turn on the ![Audio file LUIS recognition results](media/sdk/luis-results-2.png) -The Speech SDK team actively maintains a large set of examples in an open-source repository. For the sample source code repository, see the [Azure AI services Speech SDK on GitHub](https://aka.ms/csspeech/samples). There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin. Look for the code from this article in the **samples/csharp/sharedcontent/console** folder. +The Speech SDK team actively maintains a large set of examples in an open-source repository. For the sample source code repository, see the [Azure AI Speech SDK on GitHub](https://aka.ms/csspeech/samples). There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin. Look for the code from this article in the **samples/csharp/sharedcontent/console** folder. ## Next steps |
ai-services | How To Use Audio Input Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-audio-input-streams.md | See more examples of speech-to-text recognition with audio input stream on [GitH ## Identify the format of the audio stream -Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure AI services Speech service. +Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure AI Speech service. Supported audio samples are: |
ai-services | How To Use Custom Entity Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md | For more information, see the [pattern matching overview](./pattern-matching-ove Be sure you have the following items before you begin this guide: -- An [Azure AI services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)+- An [Azure AI services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) - [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition). ::: zone pivot="programming-language-csharp" |
ai-services | How To Use Simple Language Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-simple-language-pattern-matching.md | For more information, see the [pattern matching overview](./pattern-matching-ove Be sure you have the following items before you begin this guide: -- An [Azure AI services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)+- An [Azure AI services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) - [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition). ## Speech and simple patterns |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md | With [text to speech](text-to-speech.md), you can convert input text into humanl ## Delivery and presence -You can deploy Azure AI services Speech features in the cloud or on-premises. +You can deploy Azure AI Speech features in the cloud or on-premises. With [containers](speech-container-howto.md), you can bring the service closer to your data for compliance, security, or other operational reasons. Speech service deployment in sovereign clouds is available for some government e ## Use Speech in your application -The [Speech Studio](speech-studio-overview.md) is a set of UI-based tools for building and integrating features from Azure AI services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs. +The [Speech Studio](speech-studio-overview.md) is a set of UI-based tools for building and integrating features from Azure AI Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs. The [Speech CLI](spx-overview.md) is a command-line tool for using Speech service without having to write any code. Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md | A role definition is a collection of permissions. When you create a Speech resou | | | | |**Owner** |Yes |View, create, edit, and delete | |**Contributor** |Yes |View, create, edit, and delete |-|**Azure AI services Contributor** |Yes |View, create, edit, and delete | -|**Azure AI services User** |Yes |View, create, edit, and delete | -|**Azure AI Speech Contributor** |No | View, create, edit, and delete | -|**Azure AI Speech User** |No |View only | -|**Azure AI services Data Reader (Preview)** |No |View only | +|**Cognitive Services Contributor** |Yes |View, create, edit, and delete | +|**Cognitive Services User** |Yes |View, create, edit, and delete | +|**Cognitive Services Speech Contributor** |No | View, create, edit, and delete | +|**Cognitive Services Speech User** |No |View only | +|**Cognitive Services Data Reader (Preview)** |No |View only | > [!IMPORTANT] > Whether a role can list resource keys is important for [Speech Studio authentication](#speech-studio-authentication). To list resource keys, a role must have permission to run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation. Please note that if key authentication is disabled in the Azure Portal, then none of the roles can list keys. |
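To check whether a given built-in role can run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation mentioned above, one option is to inspect its role definition; a minimal sketch using a role name from the table:

```azurecli
# Lists the control-plane actions granted by the role; listKeys appears here only if the role can list keys.
az role definition list --name "Cognitive Services Speech User" \
  --query "[].permissions[].actions" --output json
```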
ai-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md | If you have been approved to run the container disconnected from the internet, t In order to prepare and configure a disconnected custom speech to text container you will need two separate speech resources: -- A regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container.-- An Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.+- A regular Azure AI Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container. +- An Azure AI Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode. Follow these steps to download and run the container in disconnected environments. -1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. -1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. -1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. +1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure AI Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. +1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure AI Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. +1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure AI Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. ### Download a model for the disconnected container -For this step, use a regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. +For this step, use a regular Azure AI Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. 
[!INCLUDE [Custom speech container run](includes/containers-cstt-common-run.md)] You can only use a license file with the appropriate container and model that yo | `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | -For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. +For this step, use an Azure AI Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. ```bash docker run --rm -it -p 5000:5000 \ Wherever the container is run, the license file must be mounted to the container | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | | `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` | -For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. +For this step, use an Azure AI Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. ```bash docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \ |
ai-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md | The following table lists the Speech containers available in the Microsoft Conta | Container | Features | Supported versions and locales | |--|--|--|-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.1.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.1.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | | [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |-| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | +| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. 
| Latest: 2.15.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | <sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. <sup>2</sup> Not available as a disconnected container. |
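For instance, pulling the listed images from MCR might look like the following sketch; exact tag names vary by version and locale, so check the tag lists linked above before relying on a specific tag:

```bash
# Illustrative pulls - replace "latest" with a pinned version/locale tag from the MCR tag list.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest
```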
ai-services | Speech Encryption Of Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-encryption-of-data-at-rest.md | For more information about Managed Identity, see [What are managed identities](. In the meantime, when you use Custom Command, you can manage your subscription with your own encryption keys. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. For more information about Custom Command and CMK, see [Custom Commands encryption of data at rest](custom-commands-encryption-of-data-at-rest.md). -## Bring your own storage (BYOS) for customization and logging +## Bring your own storage (BYOS) -To request access to bring your own storage, fill out and submit theΓÇ»[Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk). Once approved, you'll need to create your own storage account to store the data required for customization and logging. When adding a storage account, the Speech service resource will enable a system assigned managed identity. +Bring your own storage (BYOS) is an Azure AI technology for customers, who have high requirements for data security and privacy. The core of the technology is the ability to associate an Azure Storage account, that the user owns and fully controls with the Speech resource. The Speech resource then uses this storage account for storing different artifacts related to the user data processing, instead of storing the same artifacts within the Speech service premises as it is done in the regular case. This approach allows using all set of security features of Azure Storage account, including encrypting the data with the Customer-managed keys, using Private endpoints to access the data, etc. -> [!IMPORTANT] -> The user account you use to create a Speech resource with BYOS functionality enabled should be assigned the [Owner role at the Azure subscription scope](../../cost-management-billing/manage/add-change-subscription-administrator.md#to-assign-a-user-as-an-administrator). Otherwise you will get an authorization error during the resource provisioning. --After the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory (AAD). After being registered, the managed identity will be given access to the storage account. For more about managed identities, see [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md). --> [!IMPORTANT] -> If you disable system assigned managed identities, access to the storage account will be removed. This will cause the parts of the Speech service that require access to the storage account to stop working. --The Speech service doesn't currently support Customer Lockbox. However, customer data can be stored using BYOS, allowing you to achieve similar data controls to [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md). Keep in mind that Speech service data stays and is processed in the region where the Speech resource was created. This applies to any data at rest and data in transit. When using customization features, like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where your BYOS (if used) and Speech service resource reside. +The Speech service doesn't currently support Customer Lockbox. 
However, customer data can be stored using BYOS, allowing you to achieve similar data controls to [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md). > [!IMPORTANT] > Microsoft **does not** use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored. +For detailed information about using a BYOS-enabled Speech resource, see [this article](bring-your-own-storage-speech-resource.md). + ## Next steps -* [Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk) +* [Set up the Bring your own storage (BYOS) Speech resource](bring-your-own-storage-speech-resource.md) * [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md). |
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | The limits in this table apply per Speech resource when you create a Custom Spee ### Text to speech quotas and limits per resource -This section describes text to speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. +This section describes text to speech quotas and limits per Speech resource. -#### Common text to speech quotas and limits +#### Real-time text to speech ++You can use real-time text to speech with the [Speech SDK](speech-sdk.md) or the [Text to speech REST API](rest-text-to-speech.md). Unless otherwise specified, the limits aren't adjustable. | Quota | Free (F0) | Standard (S0) | |--|--|--| This section describes text to speech quotas and limits per Speech resource. Unl | Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 | | Max SSML message size per turn for websocket | 64 KB | 64 KB | +#### Batch synthesis ++These limits aren't adjustable. ++| Quota | Free (F0) | Standard (S0) | +|--|--|--| +| REST API limit | Not available for F0 | 50 requests per 5 seconds | +| Max JSON payload size to create a synthesis job | N/A | 500 kilobytes | +| Concurrent active synthesis jobs | N/A | 200 | +| Max number of text inputs per synthesis job | N/A | 1000 | +| Max time to live for a synthesis job after it enters the final state | N/A | Up to 31 days (specified using properties) | + #### Custom Neural Voice | Quota | Free (F0)| Standard (S0) | |
ai-services | Speech Studio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-studio-overview.md | -[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure AI services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs. +[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure AI Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs. > [!TIP] > You can try speech to text and text to speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code. |
ai-services | Tutorial Voice Enable Your Bot Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk.md | -You can use Azure AI services Speech to voice-enable a chat bot. +You can use Azure AI Speech to voice-enable a chat bot. In this tutorial, you'll use the Microsoft Bot Framework to create a bot that responds to what you say. You'll deploy your bot to Azure and register it with the Bot Framework Direct Line Speech channel. Then, you'll configure a sample client app for Windows that lets you speak to your bot and hear it speak back to you. |
ai-services | Translator Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-disconnected-containers.md | When run in a disconnected environment, an output mount must be available to the ```docker docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH} ```+#### Environment variable names in Kubernetes deployments ++Some Azure AI Containers, for example Translator, require users to pass environment variable names that include colons (`:`) when running the container. This works fine when using Docker, but Kubernetes does not accept colons in environment variable names. +To resolve this, you can replace colons with two underscore characters (`__`) when deploying to Kubernetes. See the following example of an acceptable format for environment variable names: ++```yaml + env: + - name: Mounts__License + value: "/license" + - name: Mounts__Output + value: "/output" +``` ++This example replaces the default format for the `Mounts:License` and `Mounts:Output` environment variable names in the `docker run` command. #### Get records using the container endpoints |
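If it helps to confirm that the renamed variables land inside the running pod, a quick check such as the following (the pod name is a placeholder, not one defined in this article) should show the double-underscore form that Kubernetes accepts:

```bash
# List the Mounts__* environment variables inside the Translator container.
kubectl exec <translator-pod-name> -- env | grep '^Mounts__'
```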
ai-services | Document Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/document-sdk-overview.md | |
ai-services | Quickstart Text Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md | |
ai-services | Quickstart Text Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-sdk.md | |
ai-services | V3 0 Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-reference.md | curl -X POST https://<your-custom-domain>.cognitiveservices.azure.com/translator ## Virtual Network support -The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, *See* [Configuring Azure AI services Virtual Networks](../../cognitive-services-virtual-networks.md?tabs=portal). +The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, *See* [Configuring Azure AI services virtual networks](../../cognitive-services-virtual-networks.md?tabs=portal). Once you turn on this capability, you must use the custom endpoint to call the Translator. You can't use the global translator endpoint ("api.cognitive.microsofttranslator.com") and you can't authenticate with an access token. |
ai-services | Text Translation Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/text-translation-overview.md | Add Text Translation to your projects and applications using the following resou > [!IMPORTANT] >- > * To use the Translator container you must complete and submit the [**Azure AI services Application for Gated Services**](https://aka.ms/csgate-translator) online request form and have it approved to acquire access to the container. + > * To use the Translator container you must complete and submit the [**Azure AI services application for Gated Services**](https://aka.ms/csgate-translator) online request form and have it approved to acquire access to the container. > > * The [**Translator container image**](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/about) supports limited features compared to cloud offerings. > |
aks | App Routing Nginx Prometheus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-prometheus.md | Title: Monitor the ingress-nginx controller metrics in the application routing a description: Configure Prometheus to scrape the ingress-nginx controller metrics. -+ Last updated 07/12/2023 |
aks | Howto Deploy Java Liberty App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md | The following steps deploy and test the application. ``` Copy the value of **ADDRESS** from the output; this is the frontend public IP address of the deployed Azure Application Gateway.-- 1. Go to `https://<ADDRESS>` to test the application. + + 1. Go to `https://<ADDRESS>` to test the application. For your convenience, this shell command creates an environment variable whose value you can paste straight into the browser. + + ```bash + export APP_URL=https://$(kubectl get ingress | grep javaee-cafe-cluster-agic-ingress | cut -d " " -f14)/ + echo $APP_URL + ``` ## Clean up resources |
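Once `APP_URL` is set, a quick smoke test from the same shell can confirm the Application Gateway frontend responds before you open a browser. This is only a suggested check; the `-k` flag assumes a self-signed certificate on the gateway, which may not match your setup:

```bash
# Expect an HTTP 200 (or a redirect) from the Application Gateway frontend.
curl -k -I "$APP_URL"
```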
aks | Managed Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-azure-ad.md | Title: AKS-managed Azure Active Directory integration description: Learn how to configure Azure AD for your Azure Kubernetes Service (AKS) clusters. Previously updated : 07/25/2023 Last updated : 07/28/2023 Learn more about the Azure AD integration flow in the [Azure AD documentation](c ## Before you begin * Make sure you have Azure CLI version 2.29.0 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).-* You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`][kubelogin]. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. You'll experience authentication issues if you don't use the correct version. +* You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`][kubelogin]. With the Azure CLI and the Azure PowerShell module, these two commands are included and automatically managed. Meaning, they are upgraded by default and running `az aks install-cli` isn't required or recommended. If you are using an automated pipeline, you need to manage upgrading to the correct or latest version. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. Otherwise, you'll experience authentication issues if you don't use the correct version. * If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3. * This configuration requires you have an Azure AD group for your cluster. This group is registered as an admin group on the cluster to grant admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command. +> [!NOTE] +> Azure AD integrated clusters using a Kubernetes version newer than version 1.24 automatically use the `kubelogin` format. Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [`kubelogin`][kubelogin] binary in the execution PATH. There is no behavior change for non-Azure AD clusters, or Azure AD clusters running a version older than 1.24. +> Existing downloaded `kubeconfig` continues to work. An optional query parameter **format** is included when getting clusterUser credential to overwrite the default behavior change. You can explicitly specify format to **azure** if you need to maintain the old `kubeconfig` format . + ## Enable AKS-managed Azure AD integration on your AKS cluster ### Create a new cluster A successful migration of an AKS-managed Azure AD cluster has the following sect There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`][kubelogin] to connect to the cluster with a non-interactive service principal credential. -Azure AD integrated clusters using a Kubernetes version newer than version 1.24 automatically use the `kubelogin` format. Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [`kubelogin`][kubelogin] binary in the execution PATH. 
+> [!NOTE] +> Azure AD integrated clusters using a Kubernetes version newer than version 1.24 automatically use the `kubelogin` format. Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [`kubelogin`][kubelogin] binary in the execution PATH. There is no behavior change for non-Azure AD clusters, or Azure AD clusters running a version older than 1.24. +> Existing downloaded `kubeconfig` continues to work. An optional query parameter **format** is included when getting clusterUser credential to overwrite the default behavior change. You can explicitly specify format to **azure** if you need to maintain the old `kubeconfig` format . * When getting the clusterUser credential, you can use the `format` query parameter to overwrite the default behavior. You can set the value to `azure` to use the original kubeconfig format: ```azurecli-interactive az aks get-credentials --format azure ```- + * If your Azure AD integrated cluster uses Kubernetes version 1.24 or lower, you need to manually convert the kubeconfig format. ```azurecli-interactive |
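For the non-interactive and format-related scenarios mentioned above, a typical sequence (sketched here with placeholder resource names, not values from this article) is to fetch credentials and then convert the kubeconfig with `kubelogin`:

```azurecli-interactive
# Get cluster credentials (exec/kubelogin format by default on 1.24 and later).
az aks get-credentials --resource-group <resource-group> --name <cluster-name>

# Convert the kubeconfig so kubelogin reuses the Azure CLI login context.
kubelogin convert-kubeconfig -l azurecli
```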
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes history](https://github.com/kubern | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | 1.27 | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024 +### AKS Kubernetes release schedule Gantt chart ++If you prefer to see this information visually, here's a Gantt chart with all the current releases displayed: ++ ## AKS Components Breaking Changes by Version Note the important changes to make before you upgrade to any of the available minor versions listed below. |
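To see which Kubernetes versions are currently available in your region before planning an upgrade, the standard CLI query looks like the following (the region is a placeholder):

```azurecli-interactive
# List AKS-supported Kubernetes versions and upgrade paths for a region.
az aks get-versions --location <region> --output table
```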
aks | Workload Identity Migrate From Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md | If your cluster is already using the latest version of the Azure Identity SDK, p If your cluster isn't using the latest version of the Azure Identity SDK, you have two options: -- You can use a migration sidecar that we provide within your applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:+- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to: - [Deploy the workload with migration sidecar](#deploy-the-workload-with-migration-sidecar) to proxy the application IMDS transactions. - Verify the authentication transactions are completing successfully. If your cluster isn't using the latest version of the Azure Identity SDK, you ha > [!NOTE] > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDK's to a supported version, and not meant or intended to be a long-term solution.+ > The migration sidecar is only for Linux containers as pod-managed identities was available on Linux node pools only. - Rewrite your application to support the latest version of the [Azure Identity][azure-identity-supported-versions] client library. Afterwards, perform the following steps: kind: ServiceAccount metadata: annotations: azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}- labels: - azure.workload.identity/use: "true" name: ${SERVICE_ACCOUNT_NAME} namespace: ${SERVICE_ACCOUNT_NAMESPACE} EOF az identity federated-credential create --name federatedIdentityName --identity- > [!NOTE] > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDK's to a supported version, and not meant or intended to be a long-term solution.+> The migration sidecar is only for Linux containers as pod-managed identities was available on Linux node pools only. If your application is using managed identity and still relies on IMDS to get an access token, you can use the workload identity migration sidecar to start migrating to workload identity. This sidecar is a migration solution and in the long-term applications, you should modify their code to use the latest Azure Identity SDKs that support client assertion. metadata: name: httpbin-pod labels: app: httpbin+ azure.workload.identity/use: "true" + annotations: + azure.workload.identity/inject-proxy-sidecar: "true" spec: serviceAccountName: workload-identity-sa initContainers: - name: init-networking- image: mcr.microsoft.com/oss/azure/workload-identity/proxy-init:v0.13.0 + image: mcr.microsoft.com/oss/azure/workload-identity/proxy-init:v1.1.0 securityContext: capabilities: add: spec: ports: - containerPort: 80 - name: proxy- image: mcr.microsoft.com/oss/azure/workload-identity/proxy:v0.13.0 + image: mcr.microsoft.com/oss/azure/workload-identity/proxy:v1.1.0 ports: - containerPort: 8000 ``` |
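When testing the migration sidecar described above, it can help to confirm that the init and proxy containers were injected and that the proxy is handling IMDS traffic. A minimal check, reusing the pod and container names from the sample manifest, might look like this:

```bash
# Confirm the init and proxy containers were added to the pod.
kubectl describe pod httpbin-pod | grep -E 'init-networking|proxy'

# Inspect the proxy sidecar logs for the token requests it has intercepted.
kubectl logs httpbin-pod -c proxy
```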
api-management | Cosmosdb Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md | documents.azure.com:443/;AccountKey=CONTOSOKEY; <container-name>myContainer</container-name> </connection-info> <query-request>- <sql-statement>SELECT * FROM c </sqlstatement> + <sql-statement>SELECT * FROM c </sql-statement> </query-request> </cosmosdb-data-source> ``` |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | tags: buy-ssl-certificates Last updated 07/28/2023 -+ # Add and manage TLS/SSL certificates in Azure App Service |
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | description: Learn how to deploy apps to a nonproduction slot and autoswap into ms.assetid: e224fc4f-800d-469a-8d6a-72bcde612450 Last updated 07/30/2023--+ # Set up staging environments in Azure App Service <a name="Overview"></a> |
app-service | Identity Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md | -You can add authentication to your web app or API running in Azure App Service to limit the users who can access it. There are several different authentication solutions available. This article describes which authentication solution to use for specific scenarios. +If you have a web app or an API running in Azure App Service, you can restrict access to it based on the identity of the users or applications that request it. App Service offers several authentication solutions to help you achieve this goal. In this article, you will learn about the different authentication solutions, their benefits and drawbacks, and which authentication solution to use for specific scenarios. ## Authentication solutions |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Previously updated : 08/23/2022 Last updated : 07/26/2023 ms.devlang: python -1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -1. <a href="https://www.python.org/downloads/" target="_blank">Python 3.9 or higher</a> installed locally. ->**Note**: This article contains current instructions on deploying a Python web app using Azure App Service. Python on Windows is no longer supported. +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). +- <a href="https://www.python.org/downloads/" target="_blank">Python 3.9 or higher</a> installed locally. ++> [!NOTE] +> This article contains current instructions on deploying a Python web app using Azure App Service. Python on Windows is no longer supported. ## 1 - Sample application Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps | Instructions | Screenshot | |:-|--:| | [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: | -| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: | -| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: | -| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." 
lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: | +| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: | +| [!INCLUDE [Create app service step 3](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: | +| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: | +| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: | |
app-service | Tutorial Connect Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md | Secrets include: |Keys and access tokens|Cognitive service API Key<br>GitHub personal access token<br>Twitter consumer keys and authentication tokens| |Connection strings|Database connection strings such as SQL server or MongoDB| Benefits of managed identity integrated with Key Vault include: The App Service provides [App settings](configure-common.md?tabs=portal#configur * Learn how to use App Service managed identity with: * [SQL server](tutorial-connect-msi-sql-database.md?tabs=windowsclient%2Cdotnet) * [Azure storage](scenario-secure-app-access-storage.md?tabs=azure-portal%2Cprogramming-language-csharp)- * [Microsoft Graph](scenario-secure-app-access-microsoft-graph-as-app.md?tabs=azure-powershell%2Cprogramming-language-csharp) + * [Microsoft Graph](scenario-secure-app-access-microsoft-graph-as-app.md?tabs=azure-powershell%2Cprogramming-language-csharp) |
app-service | Tutorial Networking Isolate Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md | -In this article you will configure an App Service app with secure, network-isolated communication to backend services. The example scenario used is in [Tutorial: Secure Cognitive Service connection from App Service using Key Vault](tutorial-connect-msi-key-vault.md). When you're finished, you have an App Service app that accesses both Key Vault and Cognitive Services through an [Azure virtual network](../virtual-network/virtual-networks-overview.md), and no other traffic is allowed to access those back-end resources. All traffic will be isolated within your virtual network using [virtual network integration](web-sites-integrate-with-vnet.md) and [private endpoints](../private-link/private-endpoint-overview.md). +In this article you will configure an App Service app with secure, network-isolated communication to backend services. The example scenario used is in [Tutorial: Secure Cognitive Service connection from App Service using Key Vault](tutorial-connect-msi-key-vault.md). When you're finished, you have an App Service app that accesses both Key Vault and Azure AI services through an [Azure virtual network](../virtual-network/virtual-networks-overview.md), and no other traffic is allowed to access those back-end resources. All traffic will be isolated within your virtual network using [virtual network integration](web-sites-integrate-with-vnet.md) and [private endpoints](../private-link/private-endpoint-overview.md). As a multi-tenanted service, outbound network traffic from your App Service app to other Azure services shares the same environment with other apps or even other subscriptions. While the traffic itself can be encrypted, certain scenarios may require an extra level of security by isolating back-end communication from other network traffic. These scenarios are typically accessible to large enterprises with a high level of expertise, but App Service puts it within reach with virtual network integration. The tutorial continues to use the following environment variables from the previ ## Create private DNS zones -Because your Key Vault and Cognitive Services resources will sit behind [private endpoints](../private-link/private-endpoint-overview.md), you need to define [private DNS zones](../dns/private-dns-privatednszone.md) for them. These zones are used to host the DNS records for private endpoints and allow the clients to find the back-end services by name. +Because your Key Vault and Azure AI services resources will sit behind [private endpoints](../private-link/private-endpoint-overview.md), you need to define [private DNS zones](../dns/private-dns-privatednszone.md) for them. These zones are used to host the DNS records for private endpoints and allow the clients to find the back-end services by name. -1. Create two private DNS zones, one for your Cognitive Services resource and one for your key vault. +1. Create two private DNS zones, one for your Azure AI services resource and one for your key vault. 
```azurecli-interactive az network private-dns zone create --resource-group $groupName --name privatelink.cognitiveservices.azure.com Because your Key Vault and Cognitive Services resources will sit behind [private az network private-endpoint create --resource-group $groupName --name securecstext-pe --location $region --connection-name securecstext-pc --private-connection-resource-id $csResourceId --group-id account --vnet-name $vnetName --subnet private-endpoint-subnet ``` -1. Create a DNS zone group for the Cognitive Services private endpoint. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS Zone when there is an update to the private endpoint. +1. Create a DNS zone group for the Azure AI services private endpoint. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS Zone when there is an update to the private endpoint. ```azurecli-interactive az network private-endpoint dns-zone-group create --resource-group $groupName --endpoint-name securecstext-pe --name securecstext-zg --private-dns-zone privatelink.cognitiveservices.azure.com --zone-name privatelink.cognitiveservices.azure.com ``` -1. Block public traffic to the Cognitive Services resource. +1. Block public traffic to the Azure AI services resource. ```azurecli-interactive az rest --uri $csResourceId?api-version=2021-04-30 --method PATCH --body '{"properties":{"publicNetworkAccess":"Disabled"}}' --headers 'Content-Type=application/json' Because your Key Vault and Cognitive Services resources will sit behind [private ``` > [!NOTE]- > Make sure the provisioning state of your change is `"Succeeded"`. Then you can observe the behavior change in the sample app. You can still load the app, but if you try click the **Detect** button, you get an `HTTP 500` error. The app has lost its connectivity to the Cognitive Services resource through the shared networking. + > Make sure the provisioning state of your change is `"Succeeded"`. Then you can observe the behavior change in the sample app. You can still load the app, but if you try click the **Detect** button, you get an `HTTP 500` error. The app has lost its connectivity to the Azure AI services resource through the shared networking. 1. Repeat the steps above for the key vault. The two private endpoints are only accessible to clients inside the virtual netw Virtual network integration allows outbound traffic to flow directly into the virtual network. By default, only local IP traffic defined in [RFC-1918](https://tools.ietf.org/html/rfc1918#section-3) is routed to the virtual network, which is what you need for the private endpoints. To route all your traffic to the virtual network, see [Manage virtual network integration routing](configure-vnet-integration-routing.md). Routing all traffic can also be used if you want to route internet traffic through your virtual network, such as through an [Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or an [Azure Firewall](../firewall/overview.md). -1. In the browser, navigate to `<app-name>.azurewebsites.net` again and wait for the integration to take effect. If you get an HTTP 500 error, wait a few minutes and try again. If you can load the page and get detection results, then you're connecting to the Cognitive Services endpoint with key vault references. +1. 
In the browser, navigate to `<app-name>.azurewebsites.net` again and wait for the integration to take effect. If you get an HTTP 500 error, wait a few minutes and try again. If you can load the page and get detection results, then you're connecting to the Azure AI services endpoint with key vault references. >[!NOTE] > If you keep getting HTTP 500 errors after a long time, it may help to force a refetch of the [key vault references](app-service-key-vault-references.md) again, like so: |
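To double-check that the app is actually routed through the virtual network after the integration step above, you can list the web app's virtual network integrations. This is a suggested verification only, with the app name as a placeholder:

```azurecli-interactive
# Show the virtual network integration configured for the app.
az webapp vnet-integration list --resource-group $groupName --name <app-name>
```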
application-gateway | How To Ssl Offloading Ingress Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md | metadata: namespace: test-infra annotations: alb.networking.azure.io/alb-name: alb-test- alb.networking.azure.io/alb-namespace: test-infra + alb.networking.azure.io/alb-namespace: alb-test-infra spec: ingressClassName: azure-alb-external tls: |
application-gateway | Quickstart Create Application Gateway For Containers Managed By Alb Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md | ALB_SUBNET_ID=$(az network vnet subnet show --name $ALB_SUBNET_NAME --resource-g ALB Controller needs the ability to provision new Application Gateway for Containers resources and to join the subnet intended for the Application Gateway for Containers association resource. -In this example, we delegate the _AppGW for Containers Configuration Manager_ role to the resource group the managed cluster and delegate the _Network Contributor_ role to the subnet used by the Application Gateway for Containers association subnet, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission. +In this example, we delegate the _AppGW for Containers Configuration Manager_ role to the resource group containing the managed cluster and delegate the _Network Contributor_ role to the subnet used by the Application Gateway for Containers association subnet, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission. If desired, you can [create and assign a custom role](../../role-based-access-control/custom-roles-portal.md) with the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission to eliminate other permissions contained in the _Network Contributor_ role. Learn more about [managing subnet permissions](../../virtual-network/virtual-network-manage-subnet.md#permissions). |
application-gateway | Ingress Controller Expose Service Over Http Https | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md | -> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Prerequisites |
application-gateway | Ingress Controller Expose Websocket Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-websocket-server.md | -> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. The following Kubernetes deployment YAML shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server: ```yaml |
application-gateway | Ingress Controller Install Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md | AGIC monitors the Kubernetes [Ingress](https://kubernetes.io/docs/concepts/servi resources, and creates and applies Application Gateway config based on the status of the Kubernetes cluster. > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Outline Gateway should that become necessary ## Install Helm [Helm](../aks/kubernetes-helm.md) is a package manager for Kubernetes, used to install the `application-gateway-kubernetes-ingress` package.-Use [Cloud Shell](https://shell.azure.com/) to install Helm: ++> [!NOTE] +> If you use [Cloud Shell](https://shell.azure.com/), you don't need to install Helm. Azure Cloud Shell comes with Helm version 3. Skip the first step and just add the AGIC Helm repository. 1. Install [Helm](../aks/kubernetes-helm.md) and run the following to add `application-gateway-kubernetes-ingress` helm package: Use [Cloud Shell](https://shell.azure.com/) to install Helm: helm init ``` -1. Add the AGIC Helm repository: -+2. Add the AGIC Helm repository: ```bash helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update |
application-gateway | Ingress Controller Install New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md | The instructions below assume Application Gateway Ingress Controller (AGIC) will installed in an environment with no pre-existing components. > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Required Command Line Tools This step will add the following components to your subscription: With the instructions in the previous section, we created and configured a new AKS cluster and an Application Gateway. We're now ready to deploy a sample app and an ingress controller to our new Kubernetes infrastructure. -### Setup Kubernetes Credentials +### Set up Kubernetes Credentials For the following steps, we need setup [kubectl](https://kubectl.docs.kubernetes.io/) command, which we'll use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We'll use `az` CLI to obtain credentials for Kubernetes. To install Azure AD Pod Identity to your cluster: ``` ### Install Helm-[Helm](../aks/kubernetes-helm.md) is a package manager for -Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` package: +[Helm](../aks/kubernetes-helm.md) is a package manager for Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` package. ++> [!NOTE] +> If you use [Cloud Shell](https://shell.azure.com/), you don't need to install Helm. Azure Cloud Shell comes with Helm version 3. Skip the first step and just add the AGIC Helm repository. 1. Install [Helm](../aks/kubernetes-helm.md) and run the following to add `application-gateway-kubernetes-ingress` helm package: Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` helm init ``` -1. Add the AGIC Helm repository: +2. Add the AGIC Helm repository: ```bash helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update |
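After adding the Helm repository as described above, installing the AGIC chart typically looks like the following. Treat this as a sketch: `helm-config.yaml` stands in for the values file the article describes and isn't defined here.

```bash
# Install AGIC from the repository added above, using your values file.
helm install ingress-azure \
  -f helm-config.yaml \
  application-gateway-kubernetes-ingress/ingress-azure
```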
application-gateway | Ingress Controller Letsencrypt Certificate Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md | -> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. Use the following steps to install [cert-manager](https://docs.cert-manager.io) on your existing AKS cluster. |
application-gateway | Ingress Controller Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md | -> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Prerequisites Before you start the migration process, there are a few things to check. Before you start the migration process, there are a few things to check. - Are you using more than one AGIC Helm deployment per AKS cluster? - Are you using multiple AGIC Helm deployments to target one Application Gateway? -If you answered yes to any of the questions above, AGIC add-on won't support your use case yet so it is be best to continue using AGIC Helm in the meantime. Otherwise, continue with the migration process below during off-business hours. +If you answered yes to any of the previous questions, AGIC add-on won't support your use case yet, so it's best to continue using AGIC Helm. Otherwise, use the following migration process during off-business hours. ## Find the Application Gateway resource ID that AGIC Helm is currently targeting Navigate to the Application Gateway that your AGIC Helm deployment is targeting. Copy and save the resource ID of that Application Gateway. You need the resource ID in a later step. The resource ID can be found in Portal, under the Properties tab of your Application Gateway or through Azure CLI. The following example saves the Application Gateway resource ID to *appgwId* for a gateway named *myApplicationGateway* in the resource group *myResourceGroup*. appgwId=$(az network application-gateway show -n myApplicationGateway -g myResou ``` ## Delete AGIC Helm from your AKS cluster-Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Please note that any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, and therefore this migration process should be done outside of business hours to minimize impact. Application Gateway continues to have the last configuration applied by AGIC so existing routing rules won't be affected. +Using Azure CLI, delete your AGIC Helm deployment from your cluster. You need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on aren't reflected on your Application Gateway. Therefore, migration should be completed outside of business hours to minimize impact. Application Gateway continues to have the last configuration applied by AGIC so that existing routing rules aren't affected. ## Enable AGIC add-on using your existing Application Gateway -You can now enable the AGIC add-on in your AKS cluster to target your existing Application Gateway through Azure CLI or Portal. Run the following Azure CLI command to enable the AGIC add-on in your AKS cluster. The example enables the add-on in a cluster called *myCluster*, in a resource group called *myResourceGroup*, using the Application Gateway resource ID *appgwId* we saved above in the earlier step. 
+You can now enable the AGIC add-on in your AKS cluster to target your existing Application Gateway through Azure CLI or Portal. Run the following Azure CLI command to enable the AGIC add-on in your AKS cluster. The example enables the add-on in a cluster called *myCluster*, in a resource group called *myResourceGroup*, using the Application Gateway resource ID *appgwId* we saved in the earlier step. ```azurecli-interactive |
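The Azure CLI command referred to above generally has this shape, reusing the *myCluster*, *myResourceGroup*, and *appgwId* values from the article; treat it as a sketch rather than the exact documented line:

```azurecli-interactive
# Enable the AGIC add-on against the existing Application Gateway.
az aks enable-addons --name myCluster --resource-group myResourceGroup \
  --addons ingress-appgw --appgw-id $appgwId
```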
application-gateway | Ingress Controller Multiple Namespace Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-multiple-namespace-support.md | namespace, unless this is explicitly changed to one or more different namespaces in the Helm configuration (see the following section). > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Enable multiple namespace support To enable multiple namespace support: |
application-gateway | Ingress Controller Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-overview.md | The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, w The Ingress Controller runs in its own pod on the customerΓÇÖs AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway specific configuration and applied to the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md). > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Benefits of Application Gateway Ingress Controller-AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and doesn't require NodePort or KubeProxy services. This also brings better performance to your deployments. +AGIC helps eliminate the need to have another load balancer/public IP address in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP address directly and doesn't require NodePort or KubeProxy services. This capability also brings better performance to your deployments. -Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also enables autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster. +Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also enable autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster. Using Application Gateway in addition to AGIC also helps protect your AKS cluster by providing TLS policy and Web Application Firewall (WAF) functionality. AGIC is configured via the Kubernetes [Ingress resource](https://kubernetes.io/d - Integrated web application firewall ## Difference between Helm deployment and AKS Add-On-There are two ways to deploy AGIC for your AKS cluster. The first way is through Helm; the second is through AKS as an add-on. The primary benefit of deploying AGIC as an AKS add-on is that it's much simpler than deploying through Helm. For a new setup, you can deploy a new Application Gateway and a new AKS cluster with AGIC enabled as an add-on in one line in Azure CLI. The add-on is also a fully managed service, which provides added benefits such as automatic updates and increased support. Both ways of deploying AGIC (Helm and AKS add-on) are fully supported by Microsoft. Additionally, the add-on allows for better integration with AKS as a first class add-on. +There are two ways to deploy AGIC for your AKS cluster. The first way is through Helm; the second is through AKS as an add-on. The primary benefit of deploying AGIC as an AKS add-on is that it's simpler than deploying through Helm. For a new setup, you can deploy a new Application Gateway and a new AKS cluster with AGIC enabled as an add-on in one line in Azure CLI. 
The add-on is also a fully managed service, which provides added benefits such as automatic updates and increased support. Both ways of deploying AGIC (Helm and AKS add-on) are fully supported by Microsoft. Additionally, the add-on allows for better integration with AKS as a first class add-on. -The AGIC add-on is still deployed as a pod in the customer's AKS cluster, however, there are a few differences between the Helm deployment version and the add-on version of AGIC. Below is a list of differences between the two versions: +The AGIC add-on is still deployed as a pod in the customer's AKS cluster, however, there are a few differences between the Helm deployment version and the add-on version of AGIC. The following is a list of differences between the two versions: - Helm deployment values can't be modified on the AKS add-on: - `verbosityLevel` is set to 5 by default- - `usePrivateIp` is set to be false by default; this can be overwritten by the [use-private-ip annotation](ingress-controller-annotations.md#use-private-ip) + - `usePrivateIp` is set to be false by default; this setting can be overwritten by the [use-private-ip annotation](ingress-controller-annotations.md#use-private-ip) - `shared` isn't supported on add-on - `reconcilePeriodSeconds` isn't supported on add-on - `armAuth.type` isn't supported on add-on- - AGIC deployed via Helm supports ProhibitedTargets, which means AGIC can configure the Application Gateway specifically for AKS clusters without affecting other existing backends. AGIC add-on doesn't currently support this. + - AGIC deployed via Helm supports ProhibitedTargets, which means AGIC can configure the Application Gateway specifically for AKS clusters without affecting other existing backends. AGIC add-on doesn't currently support this capability. - Since AGIC add-on is a managed service, customers are automatically updated to the latest version of AGIC add-on, unlike AGIC deployed through Helm where the customer must manually update AGIC. > [!NOTE] |
application-gateway | Ingress Controller Private Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md | -> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Prerequisites Application Gateway with a [Private IP configuration](./configure-application-gateway-with-private-frontend-ip.md) |
application-gateway | Ingress Controller Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md | and AGIC installation. Launch your shell from [shell.azure.com](https://shell.az [![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com) > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. ## Test with a simple Kubernetes app |
application-gateway | Ingress Controller Update Ingress Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md | The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be up using a Helm repository hosted on Azure Storage. > [!TIP]-> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview. +> Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview. Before beginning the upgrade procedure, ensure that you've added the required repository: |
attestation | Quickstart Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-terraform.md | + + Title: 'Quickstart: Create an Azure Attestation provider by using Terraform' +description: In this article, you learn how to create an Azure Attestation provider using Terraform +++++ Last updated : 07/26/2023+content_well_notification: + - AI-contribution +++# Quickstart: Create an Azure Attestation provider by using Terraform ++[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying a Bicep file to create a Microsoft Azure Attestation policy. ++In this article, you learn how to: ++> [!div class="checklist"] +> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet). +> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group). +> * Create an Azure Attestation provider using [azurerm_attestation_provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/attestation). ++## Prerequisites ++- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) ++- **Policy Signing Certificate:** You need to upload an X.509 certificate, which is used by the attestation provider to validate signed policies. This certificate is either signed by a certificate authority or self-signed. Supported file extensions include `pem`, `txt`, and `cer`. This article assumes that you already have a valid X.509 certificate. ++## Implement the Terraform code ++> [!NOTE] +> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-attestation-provider). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-attestation-provider/TestRecord.md). +> +> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform) ++1. Create a directory in which to test the sample Terraform code and make it the current directory. ++1. Create a file named `providers.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-attestation-provider/providers.tf"::: ++1. Create a file named `main.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-attestation-provider/main.tf"::: ++1. Create a file named `variables.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-attestation-provider/variables.tf"::: + + **Key points:** + + - Adjust the `policy_file` field as needed to point to your PEM file. + +1. Create a file named `outputs.tf` and insert the following code: ++ :::code language="Terraform" source="~/terraform_samples/quickstart/101-attestation-provider/outputs.tf"::: ++## Initialize Terraform +++## Create a Terraform execution plan +++## Apply a Terraform execution plan +++## 6. Verify the results ++#### [Azure CLI](#tab/azure-cli) ++1. Get the Azure resource group name. ++ ```console + resource_group_name=$(terraform output -raw resource_group_name) + ``` ++1. 
Run [az attestation list](/cli/azure/attestation#az-attestation-list) to list the providers for the specified resource group name. ++ ```azurecli + az attestation list --resource-group $resource_group_name + ``` ++## Clean up resources +++## Troubleshoot Terraform on Azure ++[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot) ++## Next steps ++> [!div class="nextstepaction"] +> [Overview of Azure Attestation](overview.md). |
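For orientation, here's a minimal, hypothetical sketch of the kind of configuration the quickstart above assembles. It isn't the contents of the linked sample files; the argument names follow the azurerm provider documentation, and the certificate path, region, and provider name are placeholders.

```terraform
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "policy_file" {
  type        = string
  description = "Path to the X.509 policy signing certificate (PEM)."
  default     = "cert.pem"
}

# Random suffix for the resource group name, as described in the checklist above.
resource "random_pet" "rg_name" {
  prefix = "rg"
}

resource "azurerm_resource_group" "rg" {
  name     = random_pet.rg_name.id
  location = "eastus"
}

# The attestation provider itself; the name must be globally unique and alphanumeric.
resource "azurerm_attestation_provider" "example" {
  name                            = "attestationprovider007"
  resource_group_name             = azurerm_resource_group.rg.name
  location                        = azurerm_resource_group.rg.location
  policy_signing_certificate_data = file(var.policy_file)
}
```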
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|Wind River Cloud Platform 22.12 | 1.24.4|1.14.0_2022-12-13 |16.0.816.19223| 14.5 (Ubuntu 20.04) | +| [Wind River Cloud Platform 22.12](https://www.windriver.com/studio/operator/cloud-platform) | 1.24.4 | 1.21.0_2023-07-11 | 16.0.5100.7242 | Not validated | |Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243| 12.3 (Ubuntu 12.3-1) | ## Data services validation process |
azure-arc | Tutorial Akv Secrets Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md | Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 04/21/2023 Last updated : 07/27/2023 Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed parameters: usePodIdentity: "false" keyvaultName: <key-vault-name>+ cloudName: # Defaults to AzurePublicCloud objects: | array: - | Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed tenantId: <tenant-Id> # The tenant ID of the Azure Key Vault instance ``` + For use with national clouds, change `cloudName` to `AzureUSGovernmentCloud` for U.S. Government Cloud, or to `AzureChinaCloud` for Azure China Cloud. + 1. Apply the SecretProviderClass to your cluster: ```bash The following configuration settings are frequently used with the Azure Key Vaul | Configuration Setting | Default | Description | | | -- | -- | | enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store |-| rotationPollInterval | 2 m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. | +| rotationPollInterval | 2 m | If `enableSecretRotation` is `true`, this setting specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. | | syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. | These settings can be specified when the extension is installed by using the `az k8s-extension create` command: kubectl delete secret secrets-store-creds ## Reconciliation and troubleshooting -The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name. +The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component is reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name. 
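As a rough illustration of that last point, re-running the create command with the existing instance name (and, optionally, the configuration settings from the table above) might look like the following. The `secrets-store-csi-driver.` prefix on the setting keys is an assumption drawn from the extension's Helm chart naming; verify it against the extension documentation for your version.

```azurecli
# Hypothetical example: re-create the extension instance to restore deleted CRDs,
# passing the frequently used configuration settings described above.
az k8s-extension create \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureKeyVaultSecretsProvider \
  --name <existing-extension-instance-name> \
  --configuration-settings 'secrets-store-csi-driver.enableSecretRotation=true' 'secrets-store-csi-driver.rotationPollInterval=3m'
```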
For more information about resolving common issues, see the open source troubleshooting guides for [Azure Key Vault provider for Secrets Store CSI driver](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) and [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html). |
azure-arc | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md | + + Title: Recover from accidental deletion of resource bridge VM +description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled System Center Virtual Machine Manager (preview) disaster scenarios. ++ Last updated : 07/28/2023+ms. ++++++# Recover from accidental deletion of resource bridge virtual machine ++In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. ++## Recover the Arc resource bridge in case of virtual machine deletion ++To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. ++1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and SCVMM Azure resources. ++2. Find and delete the old Arc resource bridge template from your SCVMM. ++3. Download the [onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure. ++ ```powershell + $location = <Azure region of the resources> + $applianceSubscriptionId = <subscription-id> + $applianceResourceGroupName = <resource-group-name> + $applianceName = <resource-bridge-name> ++ $customLocationSubscriptionId = <subscription-id> + $customLocationResourceGroupName = <resource-group-name> + $customLocationName = <custom-location-name> ++ $vmmserverSubscriptionId = <subscription-id> + $vmmserverResourceGroupName = <resource-group-name> + $vmmserverName= <SCVMM-name-in-azure> + ``` ++4. [Run the onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) again with the `--force` parameter. ++ ``` powershell-interactive + ./resource-bridge-onboarding-script.ps1 --force + ``` ++5. [Provide the inputs](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#script-runtime) as prompted. ++6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again. ++## Next steps ++[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md) ++If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support: ++- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). +- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. +- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md | Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview) description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview). Previously updated : 05/19/2023 Last updated : 07/24/2023 ms. The following scenarios are supported in Azure Arc-enabled SCVMM (preview): Azure Arc-enabled SCVMM (preview) is currently supported in the following regions: - East US+- West US2 +- East US2 - West Europe+- North Europe ### Resource bridge networking requirements |
azure-cache-for-redis | Cache Tutorial Functions Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md | Title: 'Tutorial: Function - Azure Cache for Redis and Azure Functions' -description: Learn how to use Azure functions with Azure Cache for Redis. + Title: 'Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis' +description: In this tutorial, you learn how to use Azure Functions with Azure Cache for Redis. Last updated 07/19/2023+#CustomerIntent: As a < type of user >, I want < what? > so that < why? >. -# Get started with Azure Functions triggers in Azure Cache for Redis +# Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis -The following tutorial shows how to implement basic triggers with Azure Cache for Redis and Azure Functions. This tutorial uses VS Code to write and deploy the Azure Function in C#. +This tutorial shows how to implement basic triggers with Azure Cache for Redis and Azure Functions. It guides you through using Visual Studio Code (VS Code) to write and deploy an Azure function in C#. -## Requirements +In this tutorial, you learn how to: -- Azure subscription-- [Visual Studio Code](https://code.visualstudio.com/)+> [!div class="checklist"] +> * Set up the necessary tools. +> * Configure and connect to a cache. +> * Create an Azure function and deploy code to it. +> * Confirm the logging of triggers. -## Instructions +## Prerequisites -### Set up an Azure Cache for Redis instance +- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- [Visual Studio Code](https://code.visualstudio.com/). -Create a new **Azure Cache for Redis** instance using the Azure portal or your preferred CLI tool. We use a _Standard C1_ instance, which is a good starting point. Use the [quickstart guide](quickstart-create-redis.md) to get started. +## Set up an Azure Cache for Redis instance ++Create a new Azure Cache for Redis instance by using the Azure portal or your preferred CLI tool. This tutorial uses a _Standard C1_ instance, which is a good starting point. Use the [quickstart guide](quickstart-create-redis.md) to get started. :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-new-standard.png" alt-text="Screenshot of creating a cache in the Azure portal."::: -The default settings should suffice. We use a public endpoint for this demo, but we recommend you use a private endpoint for anything in production. +The default settings should suffice. This tutorial uses a public endpoint for demonstration, but we recommend that you use a private endpoint for anything in production. -Creating the cache can take a few minutes. You can move to the next section while creating the cache completes. +Creating the cache can take a few minutes. You can move to the next section while the process finishes. -### Set up Visual Studio Code +## Set up Visual Studio Code -1. If you haven’t installed the functions extension for VS Code, search for _Azure Functions_ in the extensions menu, and select **Install**. If you don’t have the C# extension installed, install it, too. +1. If you haven't installed the Azure Functions extension for VS Code, search for **Azure Functions** on the **EXTENSIONS** menu, and then select **Install**. If you don't have the C# extension installed, install it, too. 
:::image type="content" source="media/cache-tutorial-functions-getting-started/cache-code-editor.png" alt-text="Screenshot of the required extensions installed in VS Code."::: -1. Next, go to the **Azure** tab, and sign-in to your existing Azure account, or create a new one: +1. Go to the **Azure** tab. Sign in to your Azure account. -1. Create a new local folder on your computer to hold the project that you're building. In our example, we use _RedisAzureFunctionDemo_. +1. Create a new local folder on your computer to hold the project that you're building. This tutorial uses _RedisAzureFunctionDemo_ as an example. -1. In the Azure tab, create a new functions app by clicking on the lightning bolt icon in the top right of the **Workspace** tab. +1. On the **Azure** tab, create a new function app by selecting the lightning bolt icon in the upper right of the **Workspace** tab. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-add-resource.png" alt-text="Screenshot showing how to add a new function from VS Code."::: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-add-resource.png" alt-text="Screenshot that shows the icon for adding a new function from VS Code."::: -1. Select the new folder that you’ve created to start the creation of a new Azure Functions project. You get several on-screen prompts. Select: +1. Select the folder that you created to start the creation of a new Azure Functions project. You get several on-screen prompts. Select: - - **C#** as the language - - **.NET 6.0 LTS** as the .NET runtime - - **Skip for now** as the project template + - **C#** as the language. + - **.NET 6.0 LTS** as the .NET runtime. + - **Skip for now** as the project template. - > [!NOTE] - > If you don’t have the .NET Core SDK installed, you’ll be prompted to do so. + If you don't have the .NET Core SDK installed, you're prompted to do so. -1. The new project is created: +1. Confirm that the new project appears on the **EXPLORER** pane. :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-vscode-workspace.png" alt-text="Screenshot of a workspace in VS Code."::: -### Install the necessary NuGet package +## Install the necessary NuGet package You need to install `Microsoft.Azure.WebJobs.Extensions.Redis`, the NuGet package for the Redis extension that allows Redis keyspace notifications to be used as triggers in Azure Functions. Install this package by going to the **Terminal** tab in VS Code and entering th dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease ``` -### Configure cache +## Configure the cache 1. Go to your newly created Azure Cache for Redis instance. -1. Go to your cache in the Azure portal and select the **Advanced settings** from the Resource menu. Scroll down to the field labeled _notify-keyspace-events_ and enter `KEA`. You have enabled **keyspace notifications** on the cache to trigger on keys and commands. +1. Go to your cache in the Azure portal, and then: ++ 1. On the resource menu, select **Advanced settings**. + 1. Scroll down to the **notify-keyspace-events** box and enter **KEA**. -1. Then select **Save** at the top of the window. “KEA” is a configuration string that enables keyspace notifications for all keys and events. More information on keyspace configuration strings can be found [here](https://redis.io/docs/manual/keyspace-notifications/). + **KEA** is a configuration string that enables keyspace notifications for all keys and events. 
For more information on keyspace configuration strings, see the [Redis documentation](https://redis.io/docs/manual/keyspace-notifications/). + 1. Select **Save** at the top of the window. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-keyspace-notifications.png" alt-text="Screenshot of Advanced settings selected in the Resource menu and notify-keyspace-events highlighted with a red box."::: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-keyspace-notifications.png" alt-text="Screenshot of advanced settings for Azure Cache for Redis in the portal."::: -1. Select **Access keys** from the Resource menu and write down/copy the Primary connection string field. This string is used to connect to the cache. +1. Select **Access keys** from the resource menu, and then write down or copy the contents of the **Primary connection string** box. This string is used to connect to the cache. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-access-keys.png" alt-text="Screenshot showing the primary access key highlighted with a red box."::: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-access-keys.png" alt-text="Screenshot that shows the primary connection string for an access key."::: -### Set up the example code +## Set up the example code -1. Go back to VS Code, add a file to the project called `RedisFunctions.cs`. +1. Go back to VS Code and add a file called _RedisFunctions.cs_ to the project. -1. Copy and paste the code sample into the new file. +1. Copy and paste the following code sample into the new file: ```csharp using Microsoft.Extensions.Logging; dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease } ``` -1. This tutorial shows multiple different ways to trigger on Redis activity: +1. This tutorial shows multiple ways to trigger on Redis activity: - - _PubSubTrigger_, which is triggered when activity is published to the pub/sub channel named `pubsubTest` - - _KeyspaceTrigger_, which is built on the Pub/Sub trigger. Use it to look for changes to the key `keyspaceTest` - - _KeyeventTrigger_, which is also built on the Pub/Sub trigger. Use it to look for any use of the`DEL` command. - - _ListTrigger_, which looks for changes to the list `listTest` - - _StreamTrigger_, which looks for changes to the stream `streamTest` + - `PubSubTrigger`, which is triggered when an activity is published to the Pub/Sub channel named `pubsubTest`. + - `KeyspaceTrigger`, which is built on the Pub/Sub trigger. Use it to look for changes to the `keyspaceTest` key. + - `KeyeventTrigger`, which is also built on the Pub/Sub trigger. Use it to look for any use of the `DEL` command. + - `ListTrigger`, which looks for changes to the `listTest` list. + - `StreamTrigger`, which looks for changes to the `streamTest` stream. -### Connect to your cache +## Connect to your cache -1. In order to trigger on Redis activity, you need to pass in the connection string of your cache instance. This information is stored in the `local.settings.json` file that was automatically created in your folder. Using the [local settings file](../azure-functions/functions-run-local.md#local-settings) is recommended as a security best practice. +1. To trigger on Redis activity, you need to pass in the connection string of your cache instance. This information is stored in the _local.settings.json_ file that was automatically created in your folder. 
We recommend that you use the [local settings file](../azure-functions/functions-run-local.md#local-settings) as a security best practice. -1. To connect to your cache, add a `ConnectionStrings` section in the `local.settings.json` file and add your connection string using the parameter `redisConnectionString`. It should look like this: +1. To connect to your cache, add a `ConnectionStrings` section in the _local.settings.json_ file, and then add your connection string by using the `redisConnectionString` parameter. The section should look like this example: ```json { dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease } ``` - The code in `RedisConnection.cs` looks to this value when running local. + The code in _RedisConnection.cs_ looks to this value when it's running locally: ```csharp public const string connectionString = "redisConnectionString"; dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease > [!IMPORTANT] > This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information.-> -### Build and run the code locally +## Build and run the code locally -1. Switch to the **Run and debug** tab in VS Code and select the green arrow to debug the code locally. If you don’t have Azure Functions core tools installed, you're prompted to do so. In that case, you’ll need to restart VS Code after installing. +1. Switch to the **Run and debug** tab in VS Code and select the green arrow to debug the code locally. If you don't have Azure Functions core tools installed, you're prompted to do so. In that case, you'll need to restart VS Code after installing. - The code should build successfully, which you can track in the Terminal output. +1. The code should build successfully. You can track its progress in the terminal output. -1. To test the trigger functionality, try creating and deleting the _keyspaceTest_ key. You can use any way you prefer to connect to the cache. An easy way is to use the built-in Console tool in the Azure Cache for Redis portal. Bring up the cache instance in the Azure portal, and select **Console** to open it. +1. To test the trigger functionality, try creating and deleting the `keyspaceTest` key. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-console.png" alt-text="Screenshot of C# code and a connection string."::: + You can use any way you prefer to connect to the cache. An easy way is to use the built-in console tool in the Azure Cache for Redis portal. Go to the cache instance in the Azure portal, and then select **Console** to open it. -1. After it's open, try the following commands: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-console.png" alt-text="Screenshot of C-Sharp code and a connection string."::: ++ After the console is open, try the following commands: - `SET keyspaceTest 1` - `SET keyspaceTest 2` dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-console-output.png" alt-text="Screenshot of a console and some Redis commands and results."::: -1. You should see the triggers activating in the terminal: +1. Confirm that the triggers are being activated in the terminal. 
:::image type="content" source="media/cache-tutorial-functions-getting-started/cache-triggers-working-lightbox.png" alt-text="Screenshot of the VS Code editor with code running." lightbox="media/cache-tutorial-functions-getting-started/cache-triggers-working.png"::: -### Deploy code to an Azure function +## Deploy code to an Azure function ++1. Create a new Azure function: -1. Create a new Azure function by going back to the Azure tab, expanding your subscription, and right clicking on **Function App**. Select **Create a Function App in Azure…(Advanced)**. + 1. Go back to the **Azure** tab and expand your subscription. + 1. Right-click **Function App**, and then select **Create Function App in Azure (Advanced)**. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-create-function-app.png" alt-text="Screenshot of creating a function app in VS Code."::: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-create-function-app.png" alt-text="Screenshot of selections for creating a function app in VS Code."::: -1. You see several prompts on information to configure the new functions app: +1. You get several prompts for information to configure the new function app: - - Enter a unique name - - Choose **.NET 6 (LTS)** as the runtime stack - - Choose either **Linux** or **Windows** (either works) - - Select an existing or new resource group to hold the Function App - - Choose the same region as your cache instance - - Select **Premium** as the hosting plan - - Create a new App Service plan - - Choose the **EP1** pricing tier. - - Choose an existing storage account or create a new one - - Create a new Application Insights resource. We use the resource to confirm the trigger is working. + - Enter a unique name. + - Select **.NET 6 (LTS)** as the runtime stack. + - Select either **Linux** or **Windows** (either works). + - Select an existing or new resource group to hold the function app. + - Select the same region as your cache instance. + - Select **Premium** as the hosting plan. + - Create a new Azure App Service plan. + - Select the **EP1** pricing tier. + - Select an existing storage account or create a new one. + - Create a new Application Insights resource. You use the resource to confirm that the trigger is working. > [!IMPORTANT]- > Redis triggers are not currently supported on consumption functions. - > + > Redis triggers aren't currently supported on consumption functions. ++1. Wait a few minutes for the new function app to be created. It appears under **Function App** in your subscription. Right-click the new function app, and then select **Deploy to Function App**. ++ :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-deploy-to-function.png" alt-text="Screenshot of selections for deploying to a function app in VS Code."::: ++1. The app builds and starts deploying. You can track its progress in the output window. -1. Wait a few minutes for the new Function App to be created. It appears in the drop-down under **Function App** in your subscription. Right-click on the new function app and select **Deploy to Function App…** +## Add connection string information - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-deploy-to-function.png" alt-text="Screenshot of deploying to a function app in VS Code."::: +1. In the Azure portal, go to your new function app and select **Configuration** from the resource menu. -1. The app builds and starts deploying. 
You can track progress in the **Output Window**. +1. On the working pane, go to **Application settings**. In the **Connection strings** section, select **New connection string**. -### Add connection string information +1. For **Name**, enter **redisConnectionString**. -1. Navigate to your new Function App in the Azure portal and select the **Configuration** from the Resource menu. +1. For **Value**, enter your connection string. -1. In the working pane, you see **Application settings**. In the **Connection strings** section, select **New connection string**. +1. Set **Type** to **Custom**, and then select **Ok** to close the menu. -1. Then, type `redisConnectionString` as the **Name**, with your connection string as the **Value**. Set **Type** to _Custom_, and select **Ok** to close the menu. Then, select **Save** on the Configuration page to confirm. The functions app restarts with the new connection string information. +1. Select **Save** on the configuration page to confirm. The function app restarts with the new connection string information. -### Test your triggers +## Test your triggers -1. Once deployment is complete and the connection string information added, open your Function App in the Azure portal and select **Log Stream** from the Resource menu. +1. After deployment is complete and the connection string information is added, open your function app in the Azure portal. Then select **Log Stream** from the resource menu. -1. Wait for log analytics to connect, and then use the Redis console to activate any of the triggers. You should see the triggers being logged here. +1. Wait for Log Analytics to connect, and then use the Redis console to activate any of the triggers. Confirm that triggers are being logged here. - :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-log-stream.png" alt-text="Screenshot of log stream for a function app resource in the Resource menu." lightbox="media/cache-tutorial-functions-getting-started/cache-log-stream.png"::: + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-log-stream.png" alt-text="Screenshot of a log stream for a function app resource on the resource menu." lightbox="media/cache-tutorial-functions-getting-started/cache-log-stream.png"::: -## Next steps +## Next step -- [Serverless event-based architectures with Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)-- [Build a write-behind cache using Azure Functions](cache-tutorial-write-behind.md)+> [!div class="nextstepaction"] +> [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md) |
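For readers skimming the tutorial above, here's a condensed, hypothetical sketch of two of the triggers it describes. The full sample is the code the tutorial has you paste into _RedisFunctions.cs_; the attribute and constant names below mirror the pattern used in the write-behind code later in this document.

```csharp
using Microsoft.Extensions.Logging;

namespace Microsoft.Azure.WebJobs.Extensions.Redis
{
    public static class RedisTriggerSketch
    {
        // App setting that holds the cache connection string (see local.settings.json above).
        public const string connectionString = "redisConnectionString";

        // Fires whenever a message is published to the pub/sub channel "pubsubTest".
        [FunctionName("PubSubTrigger")]
        public static void PubSubTrigger(
            [RedisPubSubTrigger(connectionString, "pubsubTest")] string message,
            ILogger logger)
        {
            logger.LogInformation($"Message received on pubsubTest: {message}");
        }

        // Fires on keyspace notifications for the key "keyspaceTest"
        // (requires notify-keyspace-events set to KEA, as configured earlier).
        [FunctionName("KeyspaceTrigger")]
        public static void KeyspaceTrigger(
            [RedisPubSubTrigger(connectionString, "__keyspace@0__:keyspaceTest")] string message,
            ILogger logger)
        {
            logger.LogInformation($"Keyspace notification for keyspaceTest: {message}");
        }
    }
}
```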
azure-cache-for-redis | Cache Tutorial Write Behind | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-write-behind.md | Title: 'Tutorial: Create a write-behind cache use Azure Cache for Redis and Azure Functions' -description: Learn how to use Using Azure Functions and Azure Cache for Redis to create a write-behind cache. + Title: 'Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis' +description: In this tutorial, you learn how to use Azure Functions and Azure Cache for Redis to create a write-behind cache. Last updated 04/20/2023+#CustomerIntent: As a < type of user >, I want < what? > so that < why? >. -# Using Azure Functions and Azure Cache for Redis to create a write-behind cache +# Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis -The objective of this tutorial is to use an Azure Cache for Redis instance as a [write-behind cache](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-caching/#types-of-caching). The _write-behind_ pattern in this tutorial shows how writes to the cache trigger corresponding writes to an Azure SQL database. +The objective of this tutorial is to use an Azure Cache for Redis instance as a [write-behind cache](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-caching/#types-of-caching). The write-behind pattern in this tutorial shows how writes to the cache trigger corresponding writes to a SQL database (an instance of the Azure SQL Database service). -We use the [Redis trigger for Azure Functions](cache-how-to-functions.md) to implement this functionality. In this scenario, you see how to use Azure Cache for Redis to store inventory and pricing information, while backing up that information in an Azure SQL Database. +You use the [Redis trigger for Azure Functions](cache-how-to-functions.md) to implement this functionality. In this scenario, you see how to use Azure Cache for Redis to store inventory and pricing information, while backing up that information in a SQL database. Every new item or new price written to the cache is then reflected in a SQL table in the database. -## Requirements +In this tutorial, you learn how to: -- Azure account-- Completion of the previous tutorial, [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) with the following resources provisioned:+> [!div class="checklist"] +> * Configure a database, trigger, and connection strings. +> * Validate that triggers are working. +> * Deploy code to a function app. ++## Prerequisites ++- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- Completion of the previous tutorial, [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md), with these resources provisioned: - Azure Cache for Redis instance- - Azure Function instance - - VS Code environment set up with NuGet packages installed + - Azure Functions instance + - Visual Studio Code (VS Code) environment set up with NuGet packages installed -## Instructions +## Create and configure a new SQL database -### Create and configure a new Azure SQL Database instance +The SQL database is the backing database for this example. You can create a SQL database through the Azure portal or through your preferred method of automation. 
-The SQL database is the backing database for this example. You can create an Azure SQL database instance through the Azure portal or through your preferred method of automation. +This example uses the portal: -This example uses the portal. +1. Enter a database name and select **Create new** to create a new server to hold the database. -First, enter a database name and select **Create new** to create a new SQL server to hold the database. +1. Select **Use SQL authentication** and enter an admin sign-in and password. Be sure to remember these credentials or write them down. When you're deploying a server in production, use Azure Active Directory (Azure AD) authentication instead. -Select **Use SQL authentication** and enter an admin sign in and password. Make sure to remember these or write them down. When deploying a SQL server in production, use Azure Active Directory (Azure AD) authentication instead. +1. Go to the **Networking** tab and choose **Public endpoint** as a connection method. Select **Yes** for both firewall rules that appear. This endpoint allows access from your Azure function app. -Go to the **Networking** tab, and choose **Public endpoint** as a connection method. Select **Yes** for both firewall rules that appear. This endpoint allows access from your Azure Functions app. +1. After validation finishes, select **Review + create** and then **Create**. The SQL database starts to deploy. -Select **Review + create** and then **Create** after validation finishes. The SQL database starts to deploy. +1. After deployment finishes, go to the resource in the Azure portal and select the **Query editor** tab. Create a new table called *inventory* that holds the data you'll write to it. Use the following SQL command to make a new table with two fields: -Once deployment completes, go to the resource in the Azure portal, and select the **Query editor** tab. Create a new table called “inventory” that holds the data you'll be writing to it. Use the following SQL command to make a new table with two fields: + - `ItemName` lists the name of each item. + - `Price` stores the price of the item. -- `ItemName`, lists the name of each item-- `Price`, stores the price of the item+ ```sql + CREATE TABLE inventory ( + ItemName varchar(255), + Price decimal(18,2) + ); + ``` -```sql -CREATE TABLE inventory ( - ItemName varchar(255), - Price decimal(18,2) - ); -``` +1. After the command finishes running, expand the *Tables* folder and verify that the new table was created. -Once that command has completed, expand the **Tables** folder and verify that the new table was created. +## Configure the Redis trigger -### Configure the Redis trigger +First, make a copy of the same VS Code project that you used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as *RedisWriteBehindTrigger*, and open it in VS Code. -First, make a copy of the same VS Code project used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as “RedisWriteBehindTrigger” and open it up in VS Code. +In this example, you use the [pub/sub trigger](cache-how-to-functions.md#redispubsubtrigger) to trigger on `keyevent` notifications. The goals of the example are: -In this example, we’re going to use the [pub/sub trigger](cache-how-to-functions.md#redispubsubtrigger) to trigger on `keyevent` notifications. The following list shows our goals: +- Trigger every time a `SET` event occurs. 
A `SET` event happens when either new keys are written to the cache instance or the value of a key is changed. +- After a `SET` event is triggered, access the cache instance to find the value of the new key. +- Determine if the key already exists in the *inventory* table in the SQL database. + - If so, update the value of that key. + - If not, write a new row with the key and its value. -1. Trigger every time a SET event occurs. A SET event happens when either new keys are being written to the cache instance or the value of a key is being changed. -1. Once a SET event is triggered, access the cache instance to find the value of the new key. -1. Determine if the key already exists in the “inventory” table in the Azure SQL database. - 1. If so, update the value of that key. - 1. If not, write a new row with the key and its value. +To configure the trigger: -First, import the `System.Data.SqlClient` NuGet package to enable communication with the SQL Database instance. Go to the VS Code terminal and use the following command: +1. Import the `System.Data.SqlClient` NuGet package to enable communication with the SQL database. Go to the VS Code terminal and use the following command: -```dos -dotnet add package System.Data.SqlClient -``` + ```dos + dotnet add package System.Data.SqlClient + ``` -Next, copy and paste the following code in redisfunction.cs, replacing the existing code. +1. Copy and paste the following code in *redisfunction.cs* to replace the existing code: -```csharp -using Microsoft.Extensions.Logging; -using StackExchange.Redis; -using System; -using System.Data.SqlClient; + ```csharp + using Microsoft.Extensions.Logging; + using StackExchange.Redis; + using System; + using System.Data.SqlClient; -namespace Microsoft.Azure.WebJobs.Extensions.Redis -{ - public static class WriteBehind - { - public const string connectionString = "redisConnectionString"; - public const string SQLAddress = "SQLConnectionString"; -- [FunctionName("KeyeventTrigger")] - public static void KeyeventTrigger( - [RedisPubSubTrigger(connectionString, "__keyevent@0__:set")] string message, - ILogger logger) - { - // retrive redis connection string from environmental variables - var redisConnectionString = System.Environment.GetEnvironmentVariable(connectionString); + namespace Microsoft.Azure.WebJobs.Extensions.Redis + { + public static class WriteBehind + { + public const string connectionString = "redisConnectionString"; + public const string SQLAddress = "SQLConnectionString"; ++ [FunctionName("KeyeventTrigger")] + public static void KeyeventTrigger( + [RedisPubSubTrigger(connectionString, "__keyevent@0__:set")] string message, + ILogger logger) + { + // Retrieve a Redis connection string from environmental variables. + var redisConnectionString = System.Environment.GetEnvironmentVariable(connectionString); - // connect to a Redis cache instance - var redisConnection = ConnectionMultiplexer.Connect(redisConnectionString); - var cache = redisConnection.GetDatabase(); + // Connect to a Redis cache instance. + var redisConnection = ConnectionMultiplexer.Connect(redisConnectionString); + var cache = redisConnection.GetDatabase(); - // get the key that was set and its value - var key = message; - var value = (double)cache.StringGet(key); - logger.LogInformation($"Key {key} was set to {value}"); + // Get the key that was set and its value. 
+ var key = message; + var value = (double)cache.StringGet(key); + logger.LogInformation($"Key {key} was set to {value}"); - // retrive SQL connection string from environmental variables - String SQLConnectionString = System.Environment.GetEnvironmentVariable(SQLAddress); + // Retrieve a SQL connection string from environmental variables. + String SQLConnectionString = System.Environment.GetEnvironmentVariable(SQLAddress); - // Define the name of the table you created and the column names - String tableName = "dbo.inventory"; - String column1Value = "ItemName"; - String column2Value = "Price"; + // Define the name of the table you created and the column names. + String tableName = "dbo.inventory"; + String column1Value = "ItemName"; + String column2Value = "Price"; - // Connect to the database. Check if the key exists in the database, if it does, update the value, if it doesn't, add it to the database - using (SqlConnection connection = new SqlConnection(SQLConnectionString)) - { - connection.Open(); - using (SqlCommand command = new SqlCommand()) - { - command.Connection = connection; -- //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks. - //An example query would be something like "UPDATE dbo.inventory SET Price = 1.75 WHERE ItemName = 'Apple'" - command.CommandText = "UPDATE " + tableName + " SET " + column2Value + " = " + value + " WHERE " + column1Value + " = '" + key + "'"; - int rowsAffected = command.ExecuteNonQuery(); //The query execution returns the number of rows affected by the query. If the key doesn't exist, it will return 0. -- if (rowsAffected == 0) //If key doesn't exist, add it to the database + // Connect to the database. Check if the key exists in the database. If it does, update the value. If it doesn't, add it to the database. + using (SqlConnection connection = new SqlConnection(SQLConnectionString)) + { + connection.Open(); + using (SqlCommand command = new SqlCommand()) + { + command.Connection = connection; ++ //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks. + //An example query would be something like "UPDATE dbo.inventory SET Price = 1.75 WHERE ItemName = 'Apple'". + command.CommandText = "UPDATE " + tableName + " SET " + column2Value + " = " + value + " WHERE " + column1Value + " = '" + key + "'"; + int rowsAffected = command.ExecuteNonQuery(); //The query execution returns the number of rows affected by the query. If the key doesn't exist, it will return 0. ++ if (rowsAffected == 0) //If key doesn't exist, add it to the database {- //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks. - //An example query would be something like "INSERT INTO dbo.inventory (ItemName, Price) VALUES ('Bread', '2.55')" - command.CommandText = "INSERT INTO " + tableName + " (" + column1Value + ", " + column2Value + ") VALUES ('" + key + "', '" + value + "')"; - command.ExecuteNonQuery(); + //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks. + //An example query would be something like "INSERT INTO dbo.inventory (ItemName, Price) VALUES ('Bread', '2.55')". 
+ command.CommandText = "INSERT INTO " + tableName + " (" + column1Value + ", " + column2Value + ") VALUES ('" + key + "', '" + value + "')"; + command.ExecuteNonQuery(); - logger.LogInformation($"Item " + key + " has been added to the database with price " + value + ""); - } + logger.LogInformation($"Item " + key + " has been added to the database with price " + value + ""); + } - else { - logger.LogInformation($"Item " + key + " has been updated to price " + value + ""); - } - } - connection.Close(); - } + else { + logger.LogInformation($"Item " + key + " has been updated to price " + value + ""); + } + } + connection.Close(); + } - //Log the time the function was executed - logger.LogInformation($"C# Redis trigger function executed at: {DateTime.Now}"); - } - } -} -``` + //Log the time that the function was executed. + logger.LogInformation($"C# Redis trigger function executed at: {DateTime.Now}"); + } + } + } + ``` > [!IMPORTANT] > This example is simplified for the tutorial. For production use, we recommend that you use parameterized SQL queries to prevent SQL injection attacks.-> -### Configure connection strings -You need to update the 'local.settings.json' file to include the connection string for your SQL Database instance. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this: +## Configure connection strings ++You need to update the *local.settings.json* file to include the connection string for your SQL database. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this example: ```json { You need to update the 'local.settings.json' file to include the connection stri } } ```-You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. You can find the Redis connection string in the **Access Keys** of the Resource menu of the Azure Cache for Redis resource. You can find the SQL database connection string under the **ADO.NET** tab in **Connection strings** on the Resource menu in the SQL database resource. ++You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. ++To find the Redis connection string, go to the resource menu in the Azure Cache for Redis resource. The string is in the **Access Keys** area. ++To find the SQL database connection string, go to the resource menu in the SQL database resource, and then select the **ADO.NET** tab. The string is in the **Connection strings** area. > [!IMPORTANT] > This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information. > -### Build and run the project +## Build and run the project -Go to the **Run and debug tab** in VS Code and run the project. Navigate back to your Azure Cache for Redis instance in the Azure portal and select the **Console** button to enter the Redis Console. Try using some set commands: +1. In VS Code, go to the **Run and debug tab** and run the project. +1. Go back to your Azure Cache for Redis instance in the Azure portal, and select the **Console** button to enter the Redis console. Try using some `SET` commands: -- SET apple 5.25-- SET bread 2.25-- SET apple 4.50+ - `SET apple 5.25` + - `SET bread 2.25` + - `SET apple 4.50` -Back in VS Code, you should see the triggers being registered: +1. Back in VS Code, the triggers are being registered. 
To validate that the triggers are working: -To validate that the triggers are working, go to the SQL database instance in the Azure portal. Then, select **Query editor** from the Resource menu. Create a **New Query** with the following SQL to view the top 100 items in the inventory table: + 1. Go to the SQL database in the Azure portal. + 1. On the resource menu, select **Query editor**. + 1. For **New Query**, create a query with the following SQL command to view the top 100 items in the inventory table: -```sql -SELECT TOP (100) * FROM [dbo].[inventory] -``` + ```sql + SELECT TOP (100) * FROM [dbo].[inventory] + ``` ++ Confirm that the items written to your Azure Cache for Redis instance appear here. -You should see the items written to your Azure Cache for Redis instance show up here! +## Deploy the code to your function app -### Deploy to your Azure Functions App +1. In VS Code, go to the **Azure** tab. -The only thing left is to deploy the code to the actual Azure Function app. As before, go to the Azure tab in VS Code, find your subscription, expand it, find the Function App section, and expand that. Select and hold (or right-click) your Azure Function app. Then, select **Deploy to Function App…** +1. Find your subscription and expand it. Then, find the **Function App** section and expand that. -### Add connection string information +1. Select and hold (or right-click) your function app, and then select **Deploy to Function App**. -Navigate to your Function App in the Azure portal and select the **Configuration** blade from the Resource menu. Select **New application setting** and enter `SQLConnectionString` as the Name, with your connection string as the Value. Set Type to _Custom_, and select **Ok** to close the menu and then **Save** on the Configuration page to confirm. The functions app will restart with the new connection string information. +## Add connection string information ++1. Go to your function app in the Azure portal. On the resource menu, select **Configuration**. ++1. Select **New application setting**. For **Name**, enter **SQLConnectionString**. For **Value**, enter your connection string. ++1. Set **Type** to **Custom**, and then select **Ok** to close the menu. ++1. On the **Configuration** pane, select **Save** to confirm. The function app restarts with the new connection string information. ## Verify deployment-Once the deployment has finished, go back to your Azure Cache for Redis instance and use SET commands to write more values. You should see these show up in your Azure SQL database as well. -If you’d like to confirm that your Azure Function app is working properly, go to the app in the portal and select the **Log stream** from the Resource menu. You should see the triggers executing there, and the corresponding updates being made to your SQL database. +After the deployment finishes, go back to your Azure Cache for Redis instance and use `SET` commands to write more values. Confirm that they also appear in your SQL database. ++If you want to confirm that your function app is working properly, go to the app in the portal and select **Log stream** from the resource menu. You should see the triggers running there, and the corresponding updates being made to your SQL database. 
-If you ever would like to clear the SQL database table without deleting it, you can use the following SQL query: +If you ever want to clear the SQL database table without deleting it, you can use the following SQL query: ```sql TRUNCATE TABLE [dbo].[inventory] TRUNCATE TABLE [dbo].[inventory] ## Summary -This tutorial and [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) show how to use Azure Cache for Redis to trigger Azure Function apps, and how to use that functionality to use Azure Cache for Redis as a write-behind cache with Azure SQL Database. Using Azure Cache for Redis with Azure Functions is a powerful combination that can solve many integration and performance problems. +This tutorial and [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) show how to use Azure Cache for Redis to trigger Azure function apps. They also show how to use Azure Cache for Redis as a write-behind cache with Azure SQL Database. Using Azure Cache for Redis with Azure Functions is a powerful combination that can solve many integration and performance problems. -## Next steps +## Related content -- [Serverless event-based architectures with Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)-- [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md)+- [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md) +- [Build a write-behind cache by using Azure Functions](cache-tutorial-write-behind.md) |
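Both Redis tutorials above note that production code should use parameterized queries rather than string concatenation. As a minimal sketch (hypothetical helper and names, not the article's sample), the UPDATE in the write-behind trigger could be parameterized like this:

```csharp
using System.Data.SqlClient;

public static class InventorySql
{
    // Parameterized version of the UPDATE used in the write-behind trigger.
    // Values are sent separately from the SQL text, so a key name coming from
    // the cache can't change the structure of the query.
    public static int UpdatePrice(string sqlConnectionString, string itemName, double price)
    {
        using (SqlConnection connection = new SqlConnection(sqlConnectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(
                "UPDATE dbo.inventory SET Price = @price WHERE ItemName = @itemName", connection))
            {
                command.Parameters.AddWithValue("@itemName", itemName);
                command.Parameters.AddWithValue("@price", price);
                return command.ExecuteNonQuery(); // 0 means the item doesn't exist yet
            }
        }
    }
}
```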
azure-functions | Create First Function Cli Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md | description: Learn how to create a Java function from the command line, then pub Last updated 11/03/2020 ms.devlang: java-+ adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B |
azure-functions | Create First Function Vs Code Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md | description: Learn how to create a PowerShell function, then publish the local p Last updated 06/22/2022 ms.devlang: powershell-+ # Quickstart: Create a PowerShell function in Azure using Visual Studio Code |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | For some service-specific binding types, binding data can be provided using type | Dependency | Version requirement | |-|-| |[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 |-|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.12.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 | +|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 | When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. Each trigger and binding extension also has its own minimum version requirement, | [Azure Queues][queue-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Service Bus][servicebus-sdk-types] | **Preview support<sup>2</sup>** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | -| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | +| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | |
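Expressed as project file entries, the minimum versions listed above for generally available SDK-type bindings would look something like the following hypothetical excerpt (not a complete project file):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.18.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.13.0" />
</ItemGroup>
```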
azure-functions | Functions Add Output Binding Storage Queue Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md | description: Learn how to connect Azure Functions to an Azure Storage queue by a Last updated 02/07/2020 ms.devlang: csharp, java, javascript, powershell, python, typescript-+ zone_pivot_groups: programming-languages-set-functions |
azure-functions | Functions Add Output Binding Storage Queue Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md | description: Learn how to connect Azure Functions to an Azure Queue Storage by a Last updated 01/31/2023 ms.devlang: csharp, java, javascript, powershell, python, typescript-+ zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to connect my function to Azure Storage so that I can easily write data to a storage queue. |
azure-functions | Functions Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-best-practices.md | The hosting plan you choose determines the following behaviors: To learn more about choosing the correct hosting plan and for a detailed comparison between the plans, see [Azure Functions hosting options](functions-scale.md). -It's important that you choose the correct plan when you create your function app. Functions provides a limited ability to switch your hosting plan, primarily between Consumption and Elastic Premium plans. To learn more, see [Plan migration](functions-how-to-use-azure-function-app-settings.md?tabs=portal#plan-migration). +It's important that you choose the correct plan when you create your function app. Functions provide a limited ability to switch your hosting plan, primarily between Consumption and Elastic Premium plans. To learn more, see [Plan migration](functions-how-to-use-azure-function-app-settings.md?tabs=portal#plan-migration). ## Configure storage correctly |
azure-functions | Functions Event Grid Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md | Title: 'Tutorial: Trigger Azure Functions on blob containers using an event subs description: This tutorial shows how to create a low-latency, event-driven trigger on an Azure Blob Storage container using an Event Grid event subscription. -+ Last updated 3/1/2021 zone_pivot_groups: programming-languages-set-functions-lang-workers |
azure-functions | Functions Host Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md | Controls the logging behaviors of the function app, including Application Insigh |Property |Default | Description | ||||-|fileLoggingMode|debugOnly|Determines the file logging behavior when running in Azure. Options are `never`, `always`, and `debugOnly`. This setting isn't used when running locally. When possible, you should use Application Insights when debugging your functions in Azure. Using `always` negatively impacts your app's cold start behavior and data throughput. The default `debguOnly` setting generates log files when you are debugging using the Azure portal. | +|fileLoggingMode|debugOnly|Determines the file logging behavior when running in Azure. Options are `never`, `always`, and `debugOnly`. This setting isn't used when running locally. When possible, you should use Application Insights when debugging your functions in Azure. Using `always` negatively impacts your app's cold start behavior and data throughput. The default `debugOnly` setting generates log files when you are debugging using the Azure portal. | |logLevel|n/a|Object that defines the log category filtering for functions in the app. This setting lets you filter logging for specific functions. For more information, see [Configure log levels](configure-monitoring.md#configure-log-levels). | |console|n/a| The [console](#console) logging setting. | |applicationInsights|n/a| The [applicationInsights](#applicationinsights) setting. | |
azure-functions | Functions Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md | public static async Task Run( Besides data processing, Azure Functions can be used to infer on models. -For example, a function that calls a TensorFlow model or submits it to Azure AI Cognitive Services can process and classify a stream of images. +For example, a function that calls a TensorFlow model or submits it to Azure AI services can process and classify a stream of images. Functions can also connect to other services to help process data and perform other AI-related tasks, like [text summarization](https://github.com/Azure-Samples/function-csharp-ai-textsummarize). |
azure-functions | Functions Twitter Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-twitter-email.md | Title: Create a function that integrates with Azure Logic Apps -description: Create a function integrate with Azure Logic Apps and Azure Cognitive Services. The resulting workflow categorizes tweet sentiments sends email notifications. +description: Create a function integrate with Azure Logic Apps and Azure AI services. The resulting workflow categorizes tweet sentiments sends email notifications. ms.assetid: 60495cc5-1638-4bf0-8174-52786d227734 Last updated 04/10/2021 This tutorial shows you how to create a workflow to analyze Twitter activity. As In this tutorial, you learn to: > [!div class="checklist"]-> * Create a Cognitive Services API Resource. +> * Create an Azure AI services API Resource. > * Create a function that categorizes tweet sentiment. > * Create a logic app that connects to Twitter. > * Add sentiment detection to the logic app. In this tutorial, you learn to: ## Create Text Analytics resource -The Cognitive Services APIs are available in Azure as individual resources. Use the Text Analytics API to detect the sentiment of posted tweets. +The Azure AI services APIs are available in Azure as individual resources. Use the Text Analytics API to detect the sentiment of posted tweets. 1. Sign in to the [Azure portal](https://portal.azure.com/). With the Text Analytics resource created, you'll copy a few settings and set the > [!NOTE] > To test the function, select **Test/Run** from the top menu. On the _Input_ tab, enter a value of `0.9` in the _Body_ input box, and then select **Run**. Verify that a value of _Positive_ is returned in the _HTTP response content_ box in the _Output_ section. -Next, create a logic app that integrates with Azure Functions, Twitter, and the Cognitive Services API. +Next, create a logic app that integrates with Azure Functions, Twitter, and the Azure AI services API. ## Create a logic app |
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | Table below lists API endpoints in Azure vs. Azure Government for accessing and |Service category|Service name|Azure Public|Azure Government|Notes| |--|--|-|-|-| |**AI + machine learning**|Azure Bot Service|botframework.com|botframework.azure.us||-||Azure Form Recognizer|cognitiveservices.azure.com|cognitiveservices.azure.us|| +||Azure AI Document Intelligence|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Computer Vision|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Custom Vision|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://www.customvision.azure.us/)|| ||Content Moderator|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Face API|cognitiveservices.azure.com|cognitiveservices.azure.us||-||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)| +||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|Part of [Azure AI Language](../ai-services/language-service/index.yml)| ||Personalizer|cognitiveservices.azure.com|cognitiveservices.azure.us||-||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)| +||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Azure AI Language](../ai-services/language-service/index.yml)| ||Speech service|See [STT API docs](../ai-services/speech-service/rest-speech-to-text-short.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../ai-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||-||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)| +||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Azure AI Language](../ai-services/language-service/index.yml)| ||Translator|See [Translator API docs](../ai-services/translator/reference/v3-0-reference.md#base-urls)|cognitiveservices.azure.us|| |**Analytics**|Azure HDInsight|azurehdinsight.net|azurehdinsight.us|| ||Event Hubs|servicebus.windows.net|servicebus.usgovcloudapi.net|| For information on how to deploy Bot Framework and Azure Bot Service bots to Azu For feature variations and limitations, see [Azure Machine Learning feature availability across cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md). -### [Cognitive +<a name='cognitive-services-content-moderator'></a> ++### [Azure AI The following Content Moderator **features aren't currently available** in Azure Government: - Review UI and Review APIs. 
-### [Cognitive +<a name='cognitive-services-language-understanding-luis'></a> ++### [Azure AI Language Understanding (LUIS)](../ai-services/luis/index.yml) The following Language Understanding **features aren't currently available** in Azure Government: - Speech Requests - Prebuilt Domains -Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../ai-services/language-service/index.yml). +Azure AI Language Understanding (LUIS) is part of [Azure AI Language](../ai-services/language-service/index.yml). -### [Cognitive +<a name='cognitive-services-speech'></a> ++### [Azure AI Speech](../ai-services/speech-service/index.yml) For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../ai-services/speech-service/sovereign-clouds.md). -### [Cognitive +<a name='cognitive-services-translator'></a> ++### [Azure AI For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../ai-services/translator/sovereign-clouds.md). For secured virtual networks, you'll want to allow network security groups (NSGs |US Gov Virginia|13.72.49.126 </br> 13.72.55.55 </br> 13.72.184.124 </br> 13.72.190.110| 443| |US Gov Arizona|52.127.3.176 </br> 52.127.3.178| 443| -For a demo on how to build data-centric solutions on Azure Government using HDInsight, see Cognitive Services, HDInsight, and Power BI on Azure Government. +For a demo on how to build data-centric solutions on Azure Government using HDInsight, see Azure AI services, HDInsight, and Power BI on Azure Government. ### [Power BI](/power-bi/fundamentals/) -For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see Cognitive Services, HDInsight, and Power BI on Azure Government. +For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see Azure AI services, HDInsight, and Power BI on Azure Government. ### [Power BI Embedded](/power-bi/developer/embedded/) |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Cloud Services](../../cloud-services/index.yml) | ✅ | ✅ | | [Cloud Shell](../../cloud-shell/overview.md) | ✅ | ✅ | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | ✅ | ✅ |-| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive Services Containers](../../ai-services/cognitive-services-container-support.md) | ✅ | ✅ | -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI Language Understanding (LUIS)](../../ai-services/luis/index.yml) </br> (part of [Azure AI Language](../../ai-services/language-service/index.yml)) | ✅ | ✅ | +| [Azure AI +| [Azure AI | **Service** | **FedRAMP High** | **DoD IL2** |-| [Cognitive -| [Cognitive -| [Cognitive +| [Azure AI +| [Azure AI +| [Azure AI | [Container Instances](../../container-instances/index.yml) | ✅ | ✅ | | [Container Registry](../../container-registry/index.yml) | ✅ | ✅ | | [Content Delivery Network (CDN)](../../cdn/index.yml) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [File Sync](../../storage/file-sync/index.yml) | ✅ | ✅ | | [Firewall](../../firewall/index.yml) | ✅ | ✅ | | [Firewall Manager](../../firewall-manager/index.yml) | ✅ | ✅ |-| [Form Recognizer](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | +| [Azure AI Document Intelligence](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ | | [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Cloud Shell](../../cloud-shell/overview.md) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Cognitive -| [Cognitive -| [Cognitive Services Containers](../../ai-services/cognitive-services-container-support.md) | ✅ | ✅ | ✅ | ✅ | | -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive -| [Cognitive +| [Azure AI +| [Azure AI +| [Azure AI services containers](../../ai-services/cognitive-services-container-support.md) | ✅ | ✅ | ✅ | ✅ | | +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI +| [Azure AI Speech](../../ai-services/speech-service/index.yml) | ✅ | ✅ | ✅ | ✅ | | +| [Azure AI +| [Azure AI | [Container Instances](../../container-instances/index.yml)| ✅ | ✅ | ✅ | ✅ | ✅ | | [Container Registry](../../container-registry/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Content Delivery Network (CDN)](../../cdn/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [File Sync](../../storage/file-sync/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Firewall](../../firewall/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Firewall Manager](../../firewall-manager/index.yml) | ✅ | ✅ | ✅ | ✅ | |-| [Form Recognizer](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | ✅ | ✅ | | +| [Azure AI Document Intelligence](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | ✅ | ✅ 
| | | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | |
azure-government | Documentation Government Cognitiveservices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md | Title: Cognitive Services on Azure Government -description: Guidance for developing Cognitive Services applications for Azure Government + Title: Azure AI services on Azure Government +description: Guidance for developing Azure AI services applications for Azure Government cloud: gov documentationcenter: '' Last updated 08/30/2021 -# Cognitive Services on Azure Government +# Azure AI services on Azure Government -This article provides developer guidance for using Computer Vision, Face API, Text Analytics, and Translator cognitive services. For feature variations and limitations, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). +This article provides developer guidance for using Computer Vision, Face API, Text Analytics, and Translator Azure AI services. For feature variations and limitations, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). ## Prerequisites This article provides developer guidance for using Computer Vision, Face API, Te - Install and Configure [Azure PowerShell](/powershell/azure/install-azure-powershell) - Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md) -## Part 1: Provision Cognitive Services accounts +<a name='part-1-provision-cognitive-services-accounts'></a> -In order to access any of the Cognitive Services APIs, you must first provision a Cognitive Services account for each of the APIs you want to access. You can create cognitive services in the [Azure Government portal](https://portal.azure.us/), or you can use Azure PowerShell to access the APIs and services as described in this article. +## Part 1: Provision Azure AI services accounts ++In order to access any of the Azure AI services APIs, you must first provision an Azure AI services account for each of the APIs you want to access. You can create Azure AI services in the [Azure Government portal](https://portal.azure.us/), or you can use Azure PowerShell to access the APIs and services as described in this article. > [!NOTE] > You must go through the process of creating an account and retrieving account key (explained below) **for each** of the APIs you want to access. Now you are ready to make calls to the APIs. ## Part 2: API Quickstarts -The Quickstarts below will help you to get started with the APIs available through Cognitive Services in Azure Government. +The Quickstarts below will help you to get started with the APIs available through Azure AI services in Azure Government. ## Computer Vision For more information, see [public documentation](../ai-services/translator/trans ### Next Steps - Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/)-- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag+- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag |
azure-government | Documentation Government Impact Level 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md | For AI and machine learning services availability in Azure Government, see [Prod - Configure encryption at rest of content in Azure Machine Learning by using customer-managed keys in Azure Key Vault. Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account that's associated with the Azure Machine Learning workspace and customer subscription. All the data stored in Azure Blob Storage is [encrypted at rest with Microsoft-managed keys](../machine-learning/concept-enterprise-security.md). Customers can use their own keys for data stored in Azure Blob Storage. See [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md). -### [Cognitive +<a name='cognitive-services-content-moderator'></a> ++### [Azure AI - Configure encryption at rest of content in the Content Moderator service by [using customer-managed keys in Azure Key Vault](../ai-services/content-moderator/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault). -### [Cognitive +<a name='cognitive-services-custom-vision'></a> ++### [Azure AI -- Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../ai-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).+- Configure encryption at rest of content in Azure AI Custom Vision [using customer-managed keys in Azure Key Vault](../ai-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault). -### [Cognitive +<a name='cognitive-services-face'></a> ++### [Azure AI - Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../ai-services/computer-vision/identity-encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault). -### [Cognitive +<a name='cognitive-services-language-understanding-luis'></a> ++### [Azure AI Language Understanding (LUIS)](../ai-services/luis/index.yml) - Configure encryption at rest of content in the Language Understanding service by [using customer-managed keys in Azure Key Vault](../ai-services/luis/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault). -Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../ai-services/language-service/index.yml). +Azure AI Language Understanding (LUIS) is part of [Azure AI Language](../ai-services/language-service/index.yml). -### [Cognitive +<a name='cognitive-services-personalizer'></a> -- Configure encryption at rest of content in Cognitive Services Personalizer [using customer-managed keys in Azure Key Vault](../ai-services/personalizer/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).+### [Azure AI -### [Cognitive +- Configure encryption at rest of content in Azure AI Personalizer [using customer-managed keys in Azure Key Vault](../ai-services/personalizer/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault). 
-- Configure encryption at rest of content in Cognitive Services QnA Maker [using customer-managed keys in Azure Key Vault](../ai-services/qnamaker/encrypt-data-at-rest.md).+<a name='cognitive-services-qna-maker'></a> -Cognitive Services QnA Maker is part of [Cognitive Services for Language](../ai-services/language-service/index.yml). +### [Azure AI -### [Cognitive +- Configure encryption at rest of content in Azure AI QnA Maker [using customer-managed keys in Azure Key Vault](../ai-services/qnamaker/encrypt-data-at-rest.md). ++Azure AI QnA Maker is part of [Azure AI Language](../ai-services/language-service/index.yml). ++<a name='cognitive-services-speech'></a> ++### [Azure AI Speech](../ai-services/speech-service/index.yml) - Configure encryption at rest of content in Speech Services by [using customer-managed keys in Azure Key Vault](../ai-services/speech-service/speech-encryption-of-data-at-rest.md). -### [Cognitive +<a name='cognitive-services-translator'></a> ++### [Azure AI - Configure encryption at rest of content in the Translator service by [using customer-managed keys in Azure Key Vault](../ai-services/translator/encrypt-data-at-rest.md). |
azure-government | Documentation Government Overview Wwps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md | Azure Stack Hub and Azure Stack Edge represent key enabling technologies that al ### Azure Stack Hub -[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity. +[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity. In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. This section addresses common customer questions related to Azure public, privat - **Data storage for regional - **Data storage for non-regional - **Air-gapped (sovereign) cloud deployment:** Why doesn’t Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country/region? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. 
However, physical isolation or “air gapping”, as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, are diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. Whereas an air-gapped cloud might prove to be the right solution for certain customers, it isn't the only available option.-- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.+- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access. - **Local jurisdiction:** Is Microsoft subject to local country/region jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it's unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. 
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States. - **Autarky:** Can Microsoft cloud operations be separated from the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model. - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft isn't possible in the public cloud. |
azure-health-insights | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md | Once deployment is complete, you can use the Azure portal to navigate to the new 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Create a new **Resource group**.-3. Add a new Cognitive Services account to your Resource group and search for **Health Insights**. +3. Add a new Azure AI services account to your Resource group and search for **Health Insights**. ![Screenshot of how to create the new Project Health Insights service.](media/create-service.png) - or Use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Cognitive Services account. + or Use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Azure AI services account. 4. Enter the following values: - **Resource group**: Select or create your Resource group name. - **Region**: Select an Azure location, such as West Europe.- - **Name**: Enter a Cognitive Services account name. + - **Name**: Enter an Azure AI services account name. - **Pricing tier**: Select your pricing tier. - ![Screenshot of how to create new Cognitive Services account.](media/create-health-insights.png) + ![Screenshot of how to create new Azure AI services account.](media/create-health-insights.png) 5. Navigate to your newly created service. - ![Screenshot of the Overview of Cognitive Services account.](media/created-health-insights.png) + ![Screenshot of the Overview of Azure AI services account.](media/created-health-insights.png) ## Configure private endpoints -With private endpoints, the network traffic between the clients on the VNet and the Cognitive Services account run over the VNet and a private link on the Microsoft backbone network. This eliminates exposure from the public internet. +With private endpoints, the network traffic between the clients on the VNet and the Azure AI services account run over the VNet and a private link on the Microsoft backbone network. This eliminates exposure from the public internet. -Once the Cognitive Services account is successfully created, configure private endpoints from the Networking page under Resource Management. +Once the Azure AI services account is successfully created, configure private endpoints from the Networking page under Resource Management. ![Screenshot of Private Endpoint.](media/private-endpoints.png) To get started using Project Health Insights, get started with one of the follow > [Onco Phenotype](oncophenotype/index.yml) >[!div class="nextstepaction"]-> [Trial Matcher](trial-matcher/index.yml) +> [Trial Matcher](trial-matcher/index.yml) |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md | -To use the Onco Phenotype model, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) +To use the Onco Phenotype model, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) -Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. +Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. ## Example request and results -To send an API request, you need your Cognitive Services account endpoint and key. You can also find a full view on the [request parameters here](../request-info.md) +To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view on the [request parameters here](../request-info.md) ![Screenshot of the Keys and Endpoints for the Onco Phenotype.](../media/keys-and-endpoints.png) To get better insights into the request and responses, you can read more on foll > [Model configuration](model-configuration.md) >[!div class="nextstepaction"]-> [Inference information](inferences.md) +> [Inference information](inferences.md) |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md | -To use Trial Matcher, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) +To use Trial Matcher, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) -Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. +Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. ## Submit a request and get results-To send an API request, you need your Cognitive Services account endpoint and key. +To send an API request, you need your Azure AI services account endpoint and key. ![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png) > [!IMPORTANT] To get better insights into the request and responses, read more on the followin > [Model configuration](model-configuration.md) >[!div class="nextstepaction"]-> [Patient information](patient-info.md) +> [Patient information](patient-info.md) |
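The two Project Health Insights get-started entries above describe the same request pattern: call the service URL of your Azure AI services account and authenticate with the account key. The following is a minimal Python sketch of that pattern; the request path, API version, and payload are illustrative placeholders rather than the exact Trial Matcher contract, and the `requests` package is assumed.

```python
# Minimal sketch of the request pattern from the get-started entries above.
# The route and body below are hypothetical placeholders; consult the service
# reference for the real Trial Matcher / Onco Phenotype routes and schemas.
import requests

ENDPOINT = "https://<YOUR-NAME>.cognitiveservices.azure.com"  # service URL shown in the portal
API_KEY = "<your-account-key>"                                # key from Keys and Endpoint

headers = {
    "Ocp-Apim-Subscription-Key": API_KEY,  # standard Azure AI services key header
    "Content-Type": "application/json",
}

# Hypothetical job-submission route; replace with the documented path and api-version.
url = f"{ENDPOINT}/health-insights/trial-matcher/jobs?api-version=<api-version>"
body = {"patients": []}  # placeholder payload

response = requests.post(url, headers=headers, json=body, timeout=30)
print(response.status_code)
```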
azure-large-instances | What Is Azure Large Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/what-is-azure-large-instances.md | Shows Azure IaaS, and in this case, use of VMs to host your applications, which Shows using your ExpressRoute Gateway enabled with ExpressRoute FastPath for Azure Large Instances connectivity offering low latency. > [!Note]->To support this configuration, your ExpressRoute Gateway should be UltraPerformance. For more information, [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). +>To support this configuration, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). |
azure-large-instances | Work With Azure Large Instances In Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/work-with-azure-large-instances-in-azure-portal.md | Last updated 06/01/2023 In this article, you learn what to do in the Azure portal with your implementation of Azure Large Instances. > [!Note]-> For now, BareMetal Infrastructure or BareMetal Instances are being used as synonyms with Azure Large Instances. +> For now, BareMetal Infrastructure and BareMetal Instances are being used as synonyms for Azure Large Instances. ## Register the resource provider -An Azure resource provider for Azure Large Instances enables you to see the instances in the Azure portal. By default, the Azure subscription you use for Azure Large Instances deployments registers the Azure Large Instances resource provider. If you don't see your deployed Azure Large Instances, register the resource provider with your subscription. +An Azure resource provider for Azure Large Instances enables you to see the instances in the Azure portal. +By default, the Azure subscription you use for Azure Large Instances deployments registers the Azure Large Instances resource provider. +If you don't see your deployed Azure Large Instances, register the resource provider with your subscription. You can register the Azure Large Instance resource provider using the Azure portal or the Azure CLI. ### [Portal](#tab/azure-portal) - You need to list your subscription in the Azure portal and then double-click the subscription used to deploy your Azure Large Instances tenant. 1. Sign in to the Azure portal. 2. On the Azure portal menu, select **All services**. 3. In the **All services** box, enter **subscription**, and then select **Subscriptions**. 4. Select the subscription from the subscription list.-5. Select **Resource providers** and type **BareMetalInfrastructure** in the search box. The resource provider should be Registered, as the image shows. +5. Select **Resource providers** and type **BareMetalInfrastructure** in the search box. The resource provider should be registered, as the image shows. :::image type="content" source="../baremetal-infrastructure/media/connect-baremetal-infrastructure/register-resource-provider-azure-portal.png" alt-text="Networking diagram of Azure Large Instances." lightbox="../baremetal-infrastructure/media/connect-baremetal-infrastructure/register-resource-provider-azure-portal.png" border="false"::: To begin using Azure CLI: [!INCLUDE [azure-cli-prepare-your-environment-no-header](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -[comment]: <The following section duplicates the content provided by the INCLUDE above> --Use the Bash environment in [Azure Cloud Shell](../cloud-shell/overview.md). -For more information, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md). --If you prefer to run CLI reference commands locally, [install](https://learn.microsoft.com/cli/azure/install-azure-cli) the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](https://learn.microsoft.com/cli/azure/run-azure-cli-docker). --If you're using a local installation, sign in to the Azure CLI by using the [az login command](https://learn.microsoft.com/cli/azure/reference-index?view=azure-cli-latest#az-login). 
To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli). --When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](https://learn.microsoft.com/cli/azure/azure-cli-extensions-overview). --Run [az version](https://learn.microsoft.com/cli/azure/reference-index?view=azure-cli-latest#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](https://learn.microsoft.com/cli/azure/reference-index?view=azure-cli-latest#az-upgrade). - For more information about resource providers, see [Azure resource providers and types](./../azure-resource-manager/management/resource-providers-and-types.md). -[comment]: <End of Include content> - Sign in to the Azure subscription you use for the Azure Large Instances deployment through the Azure CLI. Register the BareMetalInfrastructure Azure Large Instance resource provider with the az provider register command: You can use the az provider list command to see all available providers. -For more information about resource providers, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md). - ## Azure Large Instances in the Azure portal When you submit an Azure Large Instances deployment request, specify the Azure subscription you're connecting to the Azure Large Instances. Use the same subscription you use to deploy the application layer that works against the Azure Large Instances. |
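The entry above registers the `Microsoft.BareMetalInfrastructure` resource provider with the `az provider register` command. For script-based workflows, a rough Python equivalent is sketched below, assuming the `azure-identity` and `azure-mgmt-resource` packages; the subscription ID is a placeholder, and the Azure CLI command in the article remains the documented route.

```python
# Minimal sketch: register the Microsoft.BareMetalInfrastructure resource provider
# programmatically, as an alternative to `az provider register`.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Start registration, then read back the current state.
client.providers.register("Microsoft.BareMetalInfrastructure")
provider = client.providers.get("Microsoft.BareMetalInfrastructure")
print(provider.registration_state)  # e.g. "Registering" or "Registered"
```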
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | The following example is taken from the [sample drawing package v2]. The facilit :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/facility-levels.png" alt-text="Screenshot showing the facility levels tab of the Azure Maps Creator onboarding tool."::: +### Georeference ++Georeferencing is used to specify the exterior profile, location and rotation of the facility. ++The [facility level] defines the exterior profile as it appears on the map and is selected from the list of DWG layers in the **Exterior** drop-down list. ++The **Anchor Point Longitude** and **Anchor Point Latitude** specify the facility's location, the default value is zero (0). ++The **Anchor Point Angle** is specified in degrees between true north and the drawing's vertical (Y) axis, the default value is zero (0). ++ ### DWG layers The `dwgLayers` object is used to specify the DWG layer names where feature classes can be found. To receive a properly converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, the Azure Maps Conversion service ignores anything not specified in the manifest. Defining text properties enables you to associate text entities that fall inside > 2. Stair > 3. Elevator -### georeference --Georeferencing is used to specify the exterior profile, location and rotation of the facility. --The [facility level] defines the exterior profile as it appears on the map and is selected from the list of DWG layers in the **Exterior** drop-down list. --The **Anchor Point Longitude** and **Anchor Point Latitude** specify the facility's location, the default value is zero (0). --The **Anchor Point Angle** is specified in degrees between true north and the drawing's vertical (Y) axis, the default value is zero (0). ---You position the facility's location by entering either an address or longitude and latitude values. You can also pan the map to make minor adjustments to the facility's location. ---### Review and download +### Download -When finished, select the **Review + Download** button to view the manifest. When you finished verifying that it's ready, select the **Download** button to save it locally so that you can include it in the drawing package to import into your Azure Maps Creator resource. +When finished, select the **Download** button to view the manifest. When you finished verifying that it's ready, select the **Download** button to save it locally so that you can include it in the drawing package to import into your Azure Maps Creator resource. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/review-download.png" alt-text="Screenshot showing the manifest JSON."::: |
azure-maps | How To Secure Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md | This article uses the [Postman](https://www.postman.com/) application to create 5. Enter the following URL to address bar (replace `{Tenant-ID}` with the Directory (Tenant) ID, the `{Client-ID}` with the Application (Client) ID, and `{Client-Secret}` with your client secret: ```http- https://login.microsoftonline.com/{Tenant-ID}/oauth2/v2.0/token?response_type=token&grant_type=client_credentials&client_id={Client-ID}&client_secret={Client-Secret}%3D&scope=api%3A%2F%2Fazmaps.fundamentals%2F.default + https://login.microsoftonline.com/{Tenant-ID}/oauth2/v2.0/token?response_type=token&grant_type=client_credentials&client_id={Client-ID}&client_secret={Client-Secret}%3D&scope=https://atlas.microsoft.com/.default ``` 6. Select **Send** |
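The corrected URL above requests a token with the client credentials grant and the `https://atlas.microsoft.com/.default` scope. Outside Postman, the same token can be obtained with a minimal Python sketch like the one below, which assumes the standard form-encoded POST to the v2.0 token endpoint rather than the query-string style used in the Postman step; tenant ID, client ID, and client secret are placeholders.

```python
# Minimal sketch: acquire an Azure AD access token for Azure Maps via the
# OAuth 2.0 client credentials flow (form-encoded POST assumed).
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-client-id>"
CLIENT_SECRET = "<your-client-secret>"

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
payload = {
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "scope": "https://atlas.microsoft.com/.default",  # corrected scope from the article
}

response = requests.post(token_url, data=payload, timeout=30)
response.raise_for_status()
access_token = response.json()["access_token"]
print(access_token[:40], "...")
```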
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | -When you use [Azure Maps Services](index.yml), the API requests you make generate transactions. Your transaction usage is available for review in your [Azure portal]( https://portal.azure.com) Metrics report. For more information, see [View Azure Maps API usage metrics](how-to-view-api-usage.md). These transactions can be either billable or non-billable usage, depending on the service and the feature. It’s important to understand which usage generates a billable transaction and how it’s calculated so you can plan and budget for the costs associated with using Azure Maps. Billable transactions will show up in your Cost Analysis report within the Azure portal. +When you use [Azure Maps Services], the API requests you make generate transactions. Your transaction usage is available for review in your [Azure portal] Metrics report. For more information, see [View Azure Maps API usage metrics]. These transactions can be either billable or nonbillable usage, depending on the service and the feature. It’s important to understand which usage generates a billable transaction and how it’s calculated so you can plan and budget for the costs associated with using Azure Maps. Billable transactions show up in your Cost Analysis report within the Azure portal. -The following table summarizes the Azure Maps services that generate transactions, billable and non-billable, along with any notable aspects that are helpful to understand in how the number of transactions are calculated. +The following table summarizes the Azure Maps services that generate transactions, billable and nonbillable, along with any notable aspects that are helpful to understand in how the number of transactions are calculated. ## Azure Maps Transaction information by service | Azure Maps Service | Billable | Transaction Calculation | Meter | |--|-|-|-|-| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| -| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>| -| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). 
|<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>| -| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | -| [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. 
Note, the billable Search transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | -| [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> | -| [Timezone](/rest/api/maps/timezone) | Yes | One request = 1 transaction | <ul><li>Location Insights Timezone (Gen2 pricing)</li><li>Standard S1 Time Zones Transactions (Gen1 S1 pricing)</li><li>Standard Time Zones Transactions (Gen1 S0 pricing)</li></ul> | -| [Traffic](/rest/api/maps/traffic) | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> | -| [Weather](/rest/api/maps/weather) | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> | +| [Data v1]<br>[Data v2]<br>[Data registry] | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| +| [Geolocation]| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>| +| [Render v1]<br>[Render v2] | Yes, except for Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. 
|<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>| +| [Route] | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | +| [Search v1]<br>[Search v2] | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | +| [Spatial] | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are nonbillable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> | +| [Timezone] | Yes | One request = 1 transaction | <ul><li>Location Insights Timezone (Gen2 pricing)</li><li>Standard S1 Time Zones Transactions (Gen1 S1 pricing)</li><li>Standard Time Zones Transactions (Gen1 S0 pricing)</li></ul> | +| [Traffic] | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> | +| [Weather] | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> | <!-- In Bing Maps, any time a synchronous Truck Routing request is made, three transactions are counted. 
Does this apply also to Azure Maps?--> The following table summarizes the Azure Maps services that generate transaction | Azure Maps Creator | Billable | Transaction Calculation | Meter | |-|-||-|-| [Alias](/rest/api/maps/v2/alias) | No | One request = 1 transaction | Not applicable | -| [Conversion](/rest/api/maps/v2/conversion) | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) | -| [Dataset](/rest/api/maps/v2/dataset) | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing)| -| [Feature State](/rest/api/maps/v2/feature-state) | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) | -| [Render v2](/rest/api/maps/render-v2) | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render v2, see Render v2 section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) | -| [Tileset](/rest/api/maps/v2/tileset) | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning    (Gen2 pricing) | -| [WFS](/rest/api/maps/v2/wfs) | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) | +| [Alias] | No | One request = 1 transaction | Not applicable | +| [Conversion] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) | +| [Dataset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing)| +| [Feature State] | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) | +| [Render v2] | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render v2, see Render v2 section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) | +| [Tileset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning    (Gen2 pricing) | +| [WFS] | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) | <!-- | Service | Unit of measure | Price | The following table summarizes the Azure Maps services that generate transaction ## Next steps > [!div class="nextstepaction"]-> [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps/) +> [Azure Maps pricing] > [!div class="nextstepaction"]-> [Pricing calculator](https://azure.microsoft.com/pricing/calculator/) +> [Pricing calculator] > [!div class="nextstepaction"]-> [Manage the pricing tier of your Azure Maps account](how-to-manage-pricing-tier.md) +> [Manage the pricing tier of your Azure Maps account] > [!div class="nextstepaction"]-> [View Azure Maps API usage metrics](how-to-view-api-usage.md) +> [View Azure Maps API usage metrics] ++[Alias]: /rest/api/maps/v2/alias +[Azure Maps pricing]: https://azure.microsoft.com/pricing/details/azure-maps/ +[Azure Maps Services]: index.yml +[Azure portal]: 
https://portal.azure.com +[Conversion]: /rest/api/maps/v2/conversion +[Creator table]: #azure-maps-creator +[Data registry]: /rest/api/maps/data-registry +[Data v1]: /rest/api/maps/data +[Data v2]: /rest/api/maps/data-v2 +[Dataset]: /rest/api/maps/v2/dataset +[Feature State]: /rest/api/maps/v2/feature-state +[Geolocation]: /rest/api/maps/geolocation +[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md +[Pricing calculator]: https://azure.microsoft.com/pricing/calculator/ +[Render v1]: /rest/api/maps/render +[Render v2]: /rest/api/maps/render-v2 +[Route]: /rest/api/maps/route +[Search v1]: /rest/api/maps/search +[Search v2]: /rest/api/maps/search-v2 +[Spatial]: /rest/api/maps/spatial +[Tileset]: /rest/api/maps/v2/tileset +[Timezone]: /rest/api/maps/timezone +[Traffic]: /rest/api/maps/traffic +[View Azure Maps API usage metrics]: how-to-view-api-usage.md +[Weather]: /rest/api/maps/weather +[WFS]: /rest/api/maps/v2/wfs |
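As a rough illustration of the counting rules above (one request = 1 transaction for most APIs, 15 tiles = 1 transaction for tile requests, and one transaction per item in a batch call), the following sketch estimates billable transactions. The helper name and the assumption that tile usage rounds up to a whole transaction are illustrative, not taken from the pricing page.

```python
import math

def estimate_transactions(tile_requests: int, api_requests: int, batch_items: int = 0) -> int:
    """Estimate billable transactions from raw request counts.

    Assumes 15 tile requests = 1 transaction (rounded up), one non-tile request = 1
    transaction, and one transaction per origin/destination pair or location in a batch call.
    """
    tile_transactions = math.ceil(tile_requests / 15)
    return tile_transactions + api_requests + batch_items

# Example: 1,200 weather/traffic tile fetches, 300 Route calls, one batch with 50 pairs
print(estimate_transactions(tile_requests=1_200, api_requests=300, batch_items=50))  # 430
```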
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | -This article provides coverage information for Azure Maps [Weather services][weather-services]. +This article provides coverage information for Azure Maps [Weather services]. ## Weather information supported This article provides coverage information for Azure Maps [Weather services][wea Infrared (IR) radiation is electromagnetic radiation that measures an object's infrared emission, returning information about its temperature. Infrared images can indicate cloud heights (Colder cloud-tops mean higher clouds) and types, calculate land and surface water temperatures, and locate ocean surface features. --> -Infrared satellite imagery, showing clouds by their temperature, is returned when `tilesetID` is set to `microsoft.weather.infrared.main` when making calls to [Get Map Tile][get-map-tile] and can then be overlaid on the map image. +Infrared satellite imagery, showing clouds by their temperature, is returned when `tilesetID` is set to `microsoft.weather.infrared.main` when making calls to [Get Map Tile] and can then be overlaid on the map image. ### Minute forecast -The [Get Minute forecast][get-minute-forecast] service returns minute-by-minute forecasts for the specified location for the next 120 minutes. +The [Get Minute forecast] service returns minute-by-minute forecasts for the specified location for the next 120 minutes. ### Radar tiles <!-- Replace with Minimal Description Radar imagery is a depiction of the response returned when microwave radiation is sent into the atmosphere. The pulses of radiation reflect back showing its interactions with any precipitation it encounters. The radar technology visually represents those pulses showing where it's clear, raining, snowing or stormy. --> -Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned when `tilesetID` is set to `microsoft.weather.radar.main` when making calls to [Get Map Tile][get-map-tile] and can then be overlaid on the map image. +Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned when `tilesetID` is set to `microsoft.weather.radar.main` when making calls to [Get Map Tile] and can then be overlaid on the map image. ### Severe weather alerts -[Severe weather alerts][severe-weather-alerts] service returns severe weather alerts from both official Government Meteorological Agencies and other leading severe weather alert providers. The service can return details such as alert type, category, level and detailed description. Severe weather includes conditions like hurricanes, tornados, tsunamis, severe thunderstorms, and fires. +[Severe weather alerts] service returns severe weather alerts from both official Government Meteorological Agencies and other leading severe weather alert providers. The service can return details such as alert type, category, level and detailed description. Severe weather includes conditions like hurricanes, tornados, tsunamis, severe thunderstorms, and fires. ### Other -- **Air quality**. The Air Quality service returns [current][aq-current], [hourly][aq-hourly] or [daily][aq-daily] forecasts that include pollution levels, air quality index values, the dominant pollutant, and a brief statement summarizing risk level and suggested precautions.-- **Current conditions**. 
The [Get Current Conditions](/rest/api/maps/weather/get-current-conditions) service returns detailed current weather conditions such as precipitation, temperature and wind for a given coordinate location.-- **Daily forecast**. The [Get Daily Forecast](/rest/api/maps/weather/get-current-air-quality) service returns detailed weather forecasts such as temperature and wind by day for the next 1, 5, 10, 15, 25, or 45 days for a given coordinate location.-- **Daily indices**. The [Get Daily Indices](/rest/api/maps/weather/get-daily-indices) service returns index values that provide information that can help in planning activities. For example, a health mobile application can notify users that today is good weather for running or playing golf.-- **Historical weather**. The Historical Weather service includes Daily Historical [Records][dh-records], [Actuals][dh-actuals] and [Normals][dh-normals] that return climatology data such as past daily record temperatures, precipitation and snowfall at a given coordinate location.-- **Hourly forecast**. The [Get Hourly Forecast](/rest/api/maps/weather/get-hourly-forecast) service returns detailed weather forecast information by the hour for up to 10 days.-- **Quarter-day forecast**. The [Get Quarter Day Forecast](/rest/api/maps/weather/get-quarter-day-forecast) service returns detailed weather forecast by quarter-day for up to 15 days.-- **Tropical storms**. The Tropical Storm service provides information about [active storms][tropical-storm-active], tropical storm [forecasts][tropical-storm-forecasts] and [locations][tropical-storm-locations] and the ability to [search][tropical-storm-search] for tropical storms by year, basin ID, or government ID.-- **Weather along route**. The [Get Weather Along Route](/rest/api/maps/weather/get-weather-along-route) service returns hyper local (1 kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints.+- **Air quality**. The Air Quality service returns [current], [hourly] or [daily] forecasts that include pollution levels, air quality index values, the dominant pollutant, and a brief statement summarizing risk level and suggested precautions. +- **Current conditions**. The [Get Current Conditions] service returns detailed current weather conditions such as precipitation, temperature and wind for a given coordinate location. +- **Daily forecast**. The [Get Daily Forecast] service returns detailed weather forecasts such as temperature and wind by day for the next 1, 5, 10, 15, 25, or 45 days for a given coordinate location. +- **Daily indices**. The [Get Daily Indices] service returns index values that provide information that can help in planning activities. For example, a health mobile application can notify users that today is good weather for running or playing golf. +- **Historical weather**. The Historical Weather service includes Daily Historical [Records], [Actuals] and [Normals] that return climatology data such as past daily record temperatures, precipitation and snowfall at a given coordinate location. +- **Hourly forecast**. The [Get Hourly Forecast] service returns detailed weather forecast information by the hour for up to 10 days. +- **Quarter-day forecast**. The [Get Quarter Day Forecast] service returns detailed weather forecast by quarter-day for up to 15 days. +- **Tropical storms**. 
The Tropical Storm service provides information about [active storms], tropical storm [forecasts] and [locations] and the ability to [search] for tropical storms by year, basin ID, or government ID. +- **Weather along route**. The [Get Weather Along Route] service returns hyper local (1 kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints. ## Azure Maps Weather coverage tables Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned ## Next steps > [!div class="nextstepaction"]-> [Weather services in Azure Maps](weather-services-concepts.md) +> [Weather services in Azure Maps] > [!div class="nextstepaction"]-> [Azure Maps weather services frequently asked questions (FAQ)](weather-services-faq.yml) --[weather-services]: /rest/api/maps/weather -[get-map-tile]: /rest/api/maps/render-v2/get-map-tile -[get-minute-forecast]: /rest/api/maps/weather/get-minute-forecast -[severe-weather-alerts]: /rest/api/maps/weather/get-severe-weather-alerts --[aq-current]: /rest/api/maps/weather/get-current-air-quality -[aq-hourly]: /rest/api/maps/weather/get-air-quality-hourly-forecasts -[aq-daily]: /rest/api/maps/weather/get-air-quality-daily-forecasts --[current-conditions]: /rest/api/maps/weather/get-current-conditions --[dh-records]: /rest/api/maps/weather/get-daily-historical-records -[dh-actuals]: /rest/api/maps/weather/get-daily-historical-actuals -[dh-normals]: /rest/api/maps/weather/get-daily-historical-normals --[tropical-storm-active]: /rest/api/maps/weather/get-tropical-storm-active -[tropical-storm-forecasts]: /rest/api/maps/weather/get-tropical-storm-forecast -[tropical-storm-locations]: /rest/api/maps/weather/get-tropical-storm-locations -[tropical-storm-search]: /rest/api/maps/weather/get-tropical-storm-search +> [Azure Maps weather services frequently asked questions (FAQ)] ++[active storms]: /rest/api/maps/weather/get-tropical-storm-active +[Actuals]: /rest/api/maps/weather/get-daily-historical-actuals +[Azure Maps weather services frequently asked questions (FAQ)]: weather-services-faq.yml +[current]: /rest/api/maps/weather/get-current-air-quality +[daily]: /rest/api/maps/weather/get-air-quality-daily-forecasts +[forecasts]: /rest/api/maps/weather/get-tropical-storm-forecast +[Get Current Conditions]: /rest/api/maps/weather/get-current-conditions +[Get Daily Forecast]: /rest/api/maps/weather/get-current-air-quality +[Get Daily Indices]: /rest/api/maps/weather/get-daily-indices +[Get Hourly Forecast]: /rest/api/maps/weather/get-hourly-forecast +[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Minute forecast]: /rest/api/maps/weather/get-minute-forecast +[Get Quarter Day Forecast]: /rest/api/maps/weather/get-quarter-day-forecast +[Get Weather Along Route]: /rest/api/maps/weather/get-weather-along-route +[hourly]: /rest/api/maps/weather/get-air-quality-hourly-forecasts +[locations]: /rest/api/maps/weather/get-tropical-storm-locations +[Normals]: /rest/api/maps/weather/get-daily-historical-normals +[Records]: /rest/api/maps/weather/get-daily-historical-records +[search]: /rest/api/maps/weather/get-tropical-storm-search +[Severe weather alerts]: /rest/api/maps/weather/get-severe-weather-alerts +[Weather services in Azure Maps]: weather-services-concepts.md +[Weather services]: /rest/api/maps/weather |
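To fetch one of the radar or infrared tiles described above, a minimal sketch of a Get Map Tile call follows. The api-version, parameter names, and tile coordinates are assumptions to check against the linked REST reference.

```python
import requests

# Assumed endpoint and parameter names for the Render "Get Map Tile" API; verify the
# exact api-version and query parameters in the REST reference before relying on them.
url = "https://atlas.microsoft.com/map/tile"
params = {
    "api-version": "2.1",
    "tilesetId": "microsoft.weather.radar.main",  # use microsoft.weather.infrared.main for IR imagery
    "zoom": 5,
    "x": 8,
    "y": 12,
    "subscription-key": "<your-azure-maps-key>",
}
response = requests.get(url, params=params)
response.raise_for_status()
with open("radar-tile.png", "wb") as f:
    f.write(response.content)  # overlay this tile on the base map
```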
azure-maps | Weather Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md | Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python) with Microsoft Azure Maps' + Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python)' + description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python). In this tutorial, you will: > [!div class="checklist"] >-> * Work with data files in [Azure Notebooks](https://notebooks.azure.com) in the cloud. +> * Work with data files in [Azure Notebooks] in the cloud. > * Load demo data from file. > * Call Azure Maps REST APIs in Python. > * Render location data on the map.-> * Enrich the demo data with Azure Maps [Daily Forecast](/rest/api/maps/weather/getdailyforecast) weather data. +> * Enrich the demo data with Azure Maps [Daily Forecast] weather data. > * Plot forecast data in graphs. ## Prerequisites If you don't have an Azure subscription, create a [free account] before you begi > [!NOTE] > For more information on authentication in Azure Maps, see [manage authentication in Azure Maps]. -To get familiar with Azure notebooks and to know how to get started, follow the instructions [Create an Azure Notebook](./tutorial-ev-routing.md#create-an-azure-notebooks-project). +To get familiar with Azure notebooks and to know how to get started, follow the instructions [Create an Azure Notebook]. > [!NOTE]-> The Jupyter notebook file for this project can be downloaded from the [Weather Maps Jupyter Notebook repository](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data). +> The Jupyter notebook file for this project can be downloaded from the [Weather Maps Jupyter Notebook repository]. ## Load the required modules and frameworks import aiohttp ## Import weather data -For the sake of this tutorial, we'll use weather data readings from sensors installed at four different wind turbines. The sample data consists of 30 days of weather readings. These readings are gathered from weather data centers near each turbine location. The demo data contains data readings for temperature, wind speed and, direction. You can download the demo data contained in [weather_dataset_demo.csv](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data) from GitHub. The script below imports demo data to the Azure Notebook. +This tutorial uses weather data readings from sensors installed at four different wind turbines. The sample data consists of 30 days of weather readings. These readings are gathered from weather data centers near each turbine location. The demo data contains data readings for temperature, wind speed and, direction. You can download the demo data contained in [weather_dataset_demo.csv] from GitHub. The script below imports demo data to the Azure Notebook. ```python df = pd.read_csv("./data/weather_dataset_demo.csv") df = pd.read_csv("./data/weather_dataset_demo.csv") ## Request daily forecast data -In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast API](/rest/api/maps/weather/getdailyforecast) of the Azure Maps Weather services. 
This API returns weather forecast for each wind turbine, for the next 15 days from the current date. +In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast API] of the Azure Maps Weather services. This API returns weather forecast for each wind turbine, for the next 15 days from the current date. ```python subscription_key = "Your Azure Maps key" for i in range(0, len(coords), 2): await session.close() ``` -The script below renders the turbine locations on the map by calling the [Get Map Image service](/rest/api/maps/render/getmapimage). +The following script renders the turbine locations on the map by calling the [Get Map Image service]. ```python # Render the turbine locations on the map by calling the Azure Maps Get Map Image service display(Image(poi_range_map)) ![Turbine locations](./media/weather-service-tutorial/location-map.png) -We'll group the forecast data with the demo data based on the station ID. The station ID is for the weather data center. This grouping augments the demo data with the forecast data. +Group the forecast data with the demo data based on the station ID. The station ID is for the weather data center. This grouping augments the demo data with the forecast data. ```python # Group forecasted data for all locations grouped_weather_data.get_group(station_ids[0]).reset_index() ## Plot forecast data -We'll plot the forecasted values against the days for which they're forecasted. This plot allows us to see the speed and direction changes of the wind for the next 15 days. +Plot the forecasted values against the days for which they're forecasted. This plot allows us to see the speed and direction changes of the wind for the next 15 days. ```python # Plot wind speed windsPlot.set_xlabel("Date") windsPlot.set_ylabel("Wind direction") ``` -The graphs below visualize the forecast data. For the change of wind speed, see the left graph. For change in wind direction, see the right graph. This data is prediction for next 15 days from the day the data is requested. +The following graphs visualize the forecast data. For the change of wind speed, see the left graph. For change in wind direction, see the right graph. This data is prediction for next 15 days from the day the data is requested. <center>+![Wind speed plot](./media/weather-service-tutorial/speed-date-plot.png) ![Wind direction plot](./media/weather-service-tutorial/direction-date-plot.png) +</center> -![Wind speed plot](./media/weather-service-tutorial/speed-date-plot.png) ![Wind direction plot](./media/weather-service-tutorial/direction-date-plot.png)</center> +In this tutorial, you learned how to call Azure Maps REST APIs to get weather forecast data. You also learned how to visualize the data on graphs. -In this tutorial you learned, how to call Azure Maps REST APIs to get weather forecast data. You also learned how to visualize the data on graphs. --To learn more about how to call Azure Maps REST APIs inside Azure Notebooks, see [EV routing using Azure Notebooks](./tutorial-ev-routing.md). +To learn more about how to call Azure Maps REST APIs inside Azure Notebooks, see [EV routing using Azure Notebooks]. To explore the Azure Maps APIs that are used in this tutorial, see: -* [Daily Forecast](/rest/api/maps/weather/getdailyforecast) -* [Render - Get Map Image](/rest/api/maps/render/getmapimage) +* [Daily Forecast] +* [Render - Get Map Image] -For a complete list of Azure Maps REST APIs, see [Azure Maps REST APIs](./consumption-model.md). 
+For a complete list of Azure Maps REST APIs, see [Azure Maps REST APIs]. ## Clean up resources There are no resources that require cleanup. To learn more about Azure Notebooks, see > [!div class="nextstepaction"]-> [Azure Notebooks](https://notebooks.azure.com) +> [Azure Notebooks] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account +[Azure Maps REST APIs]: consumption-model.md +[Azure Notebooks]: https://notebooks.azure.com +[Create an Azure Notebook]: tutorial-ev-routing.md#create-an-azure-notebooks-project +[Daily Forecast API]: /rest/api/maps/weather/getdailyforecast +[Daily Forecast]: /rest/api/maps/weather/getdailyforecast +[EV routing using Azure Notebooks]: tutorial-ev-routing.md [free account]: https://azure.microsoft.com/free/+[Get Map Image service]: /rest/api/maps/render/getmapimage [manage authentication in Azure Maps]: how-to-manage-authentication.md+[Render - Get Map Image]: /rest/api/maps/render/getmapimage +[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account +[Weather Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data +[weather_dataset_demo.csv]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data |
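For context, here's a minimal standalone sketch of the Daily Forecast request the tutorial script makes for each turbine location. The api-version, parameter names, and response field names are assumptions to verify against the Daily Forecast reference.

```python
import requests

# Assumed request shape for the Azure Maps Daily Forecast API used in the tutorial.
params = {
    "api-version": "1.0",
    "query": "47.6062,-122.3321",   # latitude,longitude of one turbine
    "duration": 15,                  # 15-day forecast, matching the tutorial
    "unit": "metric",
    "subscription-key": "<your-azure-maps-key>",
}
resp = requests.get("https://atlas.microsoft.com/weather/forecast/daily/json", params=params)
resp.raise_for_status()
# Field names below are assumptions; inspect resp.json() to confirm the actual schema.
for day in resp.json().get("forecasts", []):
    print(day.get("date"), day.get("temperature", {}).get("maximum", {}).get("value"))
```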
azure-maps | Weather Services Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md | Last updated 09/10/2020 - # Weather services in Azure Maps -This article introduces concepts that apply to Azure Maps [Weather services](/rest/api/maps/weather). We recommend going through this article before starting out with the weather APIs. +This article introduces concepts that apply to Azure Maps [Weather services]. We recommend going through this article before starting out with the weather APIs. ## Unit types -Some of the Weather service APIs allow user to specify if the data is returned either in metric or in imperial units. The returned responses for these APIs include unitType and a numeric value that can be used for unit translations. See table below to interpret these values. +Some of the Weather service APIs allow user to specify if the data is returned either in metric or in imperial units. The returned responses for these APIs include unitType and a numeric value that can be used for unit translations. See the following table to interpret these values. |unitType|Description | |--|-| Some of the Weather service APIs allow user to specify if the data is returned e ## Weather icons -Some of the Weather service APIs return the `iconCode` in the response. The `iconCode` is a numeric value used to define the icon. Don't link directly to these images from your applications, the URLs can and will change. +Some of the Weather service APIs return the `iconCode` in the response. The `iconCode` is a numeric value used to define the icon. Don't link directly to these images from your applications, the URLs can change. | Icon Number |Icon| Day | Night | Text | |-|:-:|--|-|| Some of the Weather service APIs return the `iconCode` in the response. The `ico ## Radar and satellite imagery color scale -Via [Get Map Tile v2 API](/rest/api/maps/render-v2/get-map-tile) users can request latest radar and infrared satellite images. See below guide to help interpret colors used for radar and satellite tiles. +Via [Get Map Tile v2 API] users can request latest radar and infrared satellite images. See the following guide to help interpret colors used for radar and satellite tiles. ### Radar Images -The table below provides guidance to interpret the radar images and create a map legend for Radar tile data. +The following table provides guidance to interpret the radar images and create a map legend for Radar tile data. | Hex color code | Color sample | Weather condition | |-|--|-| The table below provides guidance to interpret the radar images and create a map | #8a32d7 | ![Color for mix-heavy.](./media/weather-services-concepts/color-8a32d7.png) | Mix-Heavy | | #6500ba | ![Color for mix-severe.](./media/weather-services-concepts/color-6500ba.png) | Mix-Severe | -Detailed color palette for radar tiles with Hex color codes and dBZ values is shown below. dBZ represents precipitation intensity in weather radar. +Detailed color palette for radar tiles with Hex color codes and dBZ values is shown in the following table. dBZ represents precipitation intensity in weather radar. 
| **RAIN** | **ICE** | **SNOW** | **MIXED** | |-|-|--|--| Detailed color palette for radar tiles with Hex color codes and dBZ values is sh | 73.75 (#bf9bc4) | 73.75 (#7C1571) | 73.75 (#020298) | 73.75 (#6500B9) | | 75 (#c9b5c2) | 75 (#7A1570) | 75 (#020096) | 75 (#6500BA) | -- ### Satellite Images -The table below provides guidance to interpret the infrared satellite images showing clouds by their temperature and how to create a map legend for these tiles. +The following table provides guidance to interpret the infrared satellite images showing clouds by their temperature and how to create a map legend for these tiles. | Hex color code | Color sample | Cloud Temperature | |-|--|-| The table below provides guidance to interpret the infrared satellite images sho | #ba0808 | ![Color tile for #ba0808.](./media/weather-services-concepts/color-ba0808.png) | | | #1f1f1f | ![Color tile for #1f1f1f.](./media/weather-services-concepts/color-1f1f1f.png) | Temperature-High | -Detailed color palette for infrared satellite tiles is shown below. +Detailed color palette for infrared satellite tiles is shown in the following table. |**Temp (K)**|**Hex color code**| |--|--| Detailed color palette for infrared satellite tiles is shown below. ## Index IDs and Index Groups IDs -[Get Daily Indices API](/rest/api/maps/weather) allows users to -restrict returned results to specific index types or index -groups. +[Get Daily Indices API] allows users to restrict returned results to specific index types or index groups. -Below is a table of available index IDs, their names, and a link to their range sets. Below this table is a table listing the various index groups. +The following table lists the available index IDs, their names, and a link to their range sets. Below this table is a table listing the various index groups. Index Name | ID | Value Range -- ||-- Below is a table of available index IDs, their names, and a link to their range Soil Moisture | 34| [Poor-Excellent 1](#poor-excellent-1) Stargazing | 12| [Poor-Excellent 1](#poor-excellent-1) -Below is the list of available Index groups (indexGroupId): +The following table lists the available Index groups (indexGroupId): ID | Group Name | Indices in this group | -- | | Below is the list of available Index groups (indexGroupId): ## Daily index range sets -[Get Daily Indices API](/rest/api/maps/weather) returns the ranged value and its associated category name for each index ID. Range sets aren't the same for all indices. The tables below show the various range sets used by the supported indices listed in [Index IDs and index groups IDs](#index-ids-and-index-groups-ids). To find out which indices use which range sets, go to the [Index IDs and Index Groups IDs](#index-ids-and-index-groups-ids) section of this document. +[Get Daily Indices API] returns the ranged value and its associated category name for each index ID. Range sets aren't the same for all indices. The following tables show the various range sets used by the supported indices listed in [Index IDs and index groups IDs]. To find out which indices use which range sets, go to the [Index IDs and Index Groups IDs] section of this document. 
### Poor-Excellent 1 Below is the list of available Index groups (indexGroupId): ## Next steps > [!div class="nextstepaction"]-> [Azure Maps Weather services frequently asked questions (FAQ)](weather-services-faq.yml) +> [Azure Maps Weather services frequently asked questions (FAQ)] > [!div class="nextstepaction"]-> [Azure Maps Weather services coverage](weather-coverage.md) +> [Azure Maps Weather services coverage] > [!div class="nextstepaction"]-> [Weather services API](/rest/api/maps/weather) +> [Weather services API] ++[Azure Maps Weather services coverage]: weather-coverage.md +[Azure Maps Weather services frequently asked questions (FAQ)]: weather-services-faq.yml +[Get Daily Indices API]: /rest/api/maps/weather +[Get Map Tile v2 API]: /rest/api/maps/render-v2/get-map-tile +[Index IDs and index groups IDs]: #index-ids-and-index-groups-ids +[Weather services API]: /rest/api/maps/weather +[Weather services]: /rest/api/maps/weather |
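A hedged sketch of a Get Daily Indices request restricted to a single index group follows. The endpoint path, api-version, the `indexGroupId` parameter name, and the response field names are assumptions based on the description above, not confirmed signatures.

```python
import requests

# Sketch of a Get Daily Indices call limited to one index group (values are placeholders).
params = {
    "api-version": "1.0",
    "query": "47.6062,-122.3321",
    "indexGroupId": 11,              # hypothetical group ID; see the index group table
    "subscription-key": "<your-azure-maps-key>",
}
resp = requests.get("https://atlas.microsoft.com/weather/dailyIndices/json", params=params)
resp.raise_for_status()
for idx in resp.json().get("results", []):
    # Each result is expected to carry the index name, its ranged value, and a category name
    print(idx.get("indexName"), idx.get("value"), idx.get("category"))
```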
azure-maps | Webgl Custom Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md | -using [WebGL][getting_started_with_webgl]. WebGL is based -on [OpenGL ES][OpenGL ES] and enables rendering 2D and 3D +using [WebGL]. WebGL is based +on [OpenGL ES] and enables rendering 2D and 3D graphics in web browsers. Using WebGL, you can build high-performance interactive scenarios like simulations, data visualization, animations and Developers can access the WebGL context of the map during rendering and use custom WebGL layers to integrate with other-libraries such as [three.js][threejs] and [deck.gl][deckgl] +libraries such as [three.js] and [deck.gl] to provide enriched and interactive content on the map. ## Add a WebGL layer This sample renders a triangle on the map using a WebGL layer. ![A screenshot showing a triangle rendered on a map, using a WebGL layer.](./media/how-to-webgl-custom-layer/triangle.png) -For a fully functional sample with source code, see [Simple 2D WebGL layer][Simple 2D WebGL layer] in the Azure Maps Samples. +For a fully functional sample with source code, see [Simple 2D WebGL layer] in the Azure Maps Samples. The map's camera matrix is used to project spherical Mercator point to-gl coordinates. Mercator point \[0, 0\] represents the top left corner +`gl` coordinates. Mercator point \[0, 0\] represents the top left corner of the Mercator world and \[1, 1\] represents the bottom right corner.-When the `renderingMode` is `"3d"`, the z coordinate is conformal. +When `renderingMode` is `"3d"`, the z coordinate is conformal. A box with identical x, y, and z lengths in Mercator units would be rendered as a cube. methods can be used to project a Mercator point to a Position. ## Render a 3D model Use a WebGL layer to render 3D models. The following example shows how-to load a [glTF][glTF] file and render it on the map using [three.js][threejs]. +to load a [glTF] file and render it on the map using [three.js]. You need to add the following script files. This sample renders an animated 3D parrot on the map. ![A screenshot showing an an animated 3D parrot on the map.](./media/how-to-webgl-custom-layer/3d-parrot.gif) -For a fully functional sample with source code, see [Three custom WebGL layer][Three custom WebGL layer] in the Azure Maps Samples. +For a fully functional sample with source code, see [Three custom WebGL layer] in the Azure Maps Samples. The `onAdd` function loads a `.glb` file into memory and instantiates-three.js objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`. +[three.js] objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`. The `render` function calculates the projection matrix of the camera and renders the model to the scene. one of the style options while creating the map. ## Render a 3D model using babylon.js -[Babylon.js][babylonjs] is one of the world's leading WebGL-based graphics engines. The following example shows how to load a GLTF file and render it on the map using babylon.js. +[Babylon.js] is one of the world's leading WebGL-based graphics engines. The following example shows how to load a GLTF file and render it on the map using babylon.js. You need to add the following script files. 
The `render` function calculates the projection matrix of the camera and renders ![A screenshot showing an example of rendering a 3D model using babylon.js.](./media/how-to-webgl-custom-layer/render-3d-model.png) -For a fully functional sample with source code, see [Babylon custom WebGL layer][Babylon custom WebGL layer] in the Azure Maps Samples. +For a fully functional sample with source code, see [Babylon custom WebGL layer] in the Azure Maps Samples. ## Render a deck.gl layer -A WebGL layer can be used to render layers from the [deck.gl][deckgl] +A WebGL layer can be used to render layers from the [deck.gl] library. The following sample demonstrates the data visualization of people migration flow in the United States from county to county within a certain time range. class DeckGLLayer extends atlas.layer.WebGLLayer { } ``` --This sample renders an arc-layer google the [deck.gl][deckgl] library. +This sample renders an arc-layer using the [deck.gl] library. ![A screenshot showing an arc-layer from the Deck G L library.](./media/how-to-webgl-custom-layer/arc-layer.png) -For a fully functional sample with source code, see [Deck GL custom WebGL layer][Deck GL custom WebGL layer] in the Azure Maps Samples. +For a fully functional sample with source code, see [Deck GL custom WebGL layer] in the Azure Maps Samples. ## Next steps Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]-> [WebGLLayer][WebGLLayer] +> [WebGLLayer] > [!div class="nextstepaction"]-> [WebGLLayerOptions][WebGLLayerOptions] +> [WebGLLayerOptions] > [!div class="nextstepaction"]-> [WebGLRenderer interface][WebGLRenderer interface] +> [WebGLRenderer interface] > [!div class="nextstepaction"]-> [MercatorPoint][MercatorPoint] +> [MercatorPoint] -[getting_started_with_webgl]: https://developer.mozilla.org/en-US/docs/web/api/webgl_api/tutorial/getting_started_with_webgl -[threejs]: https://threejs.org/ -[deckgl]: https://deck.gl/ +[Babylon custom WebGL layer]: https://samples.azuremaps.com/?sample=babylon-custom-webgl-layer +[Babylon.js]: https://www.babylonjs.com/ +[Deck GL custom WebGL layer]: https://samples.azuremaps.com/?sample=deck-gl-custom-webgl-layer +[deck.gl]: https://deck.gl/ [glTF]: https://www.khronos.org/gltf/+[MercatorPoint]: /javascript/api/azure-maps-control/atlas.data.mercatorpoint [OpenGL ES]: https://www.khronos.org/opengles/-[babylonjs]: https://www.babylonjs.com/ +[Simple 2D WebGL layer]: https://samples.azuremaps.com/?sample=simple-2d-webgl-layer +[Three custom WebGL layer]: https://samples.azuremaps.com/?sample=three-custom-webgl-layer +[three.js]: https://threejs.org/ +[WebGL]: https://developer.mozilla.org/en-US/docs/web/api/webgl_api/tutorial/getting_started_with_webgl [WebGLLayer]: /javascript/api/azure-maps-control/atlas.layer.webgllayer [WebGLLayerOptions]: /javascript/api/azure-maps-control/atlas.webgllayeroptions [WebGLRenderer interface]: /javascript/api/azure-maps-control/atlas.webglrenderer-[MercatorPoint]: /javascript/api/azure-maps-control/atlas.data.mercatorpoint -[Simple 2D WebGL layer]: https://samples.azuremaps.com/?sample=simple-2d-webgl-layer -[Deck GL custom WebGL layer]: https://samples.azuremaps.com/?sample=deck-gl-custom-webgl-layer -[Three custom WebGL layer]: https://samples.azuremaps.com/?sample=three-custom-webgl-layer -[Babylon custom WebGL layer]: https://samples.azuremaps.com/?sample=babylon-custom-webgl-layer |
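The Mercator convention described in this article ([0, 0] at the top left of the Mercator world, [1, 1] at the bottom right) matches the standard Web Mercator normalization. The following sketch shows that projection in plain Python; it's illustrative only and is not the SDK's MercatorPoint helper.

```python
import math

def lnglat_to_mercator(lng_deg: float, lat_deg: float) -> tuple[float, float]:
    """Project longitude/latitude to normalized spherical Mercator coordinates,
    where [0, 0] is the top-left of the Mercator world and [1, 1] the bottom-right."""
    x = (lng_deg + 180.0) / 360.0
    lat = math.radians(lat_deg)
    y = 0.5 - math.log(math.tan(math.pi / 4 + lat / 2)) / (2 * math.pi)
    return x, y

print(lnglat_to_mercator(0.0, 0.0))        # (0.5, 0.5), the center of the map
print(lnglat_to_mercator(-122.33, 47.61))  # approximately (0.160, 0.350) for Seattle
```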
azure-maps | Zoom Levels And Tile Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md | -Azure Maps use the Spherical Mercator projection coordinate system (EPSG: 3857). A projection is the mathematical model used to transform the spherical globe into a flat map. The Spherical Mercator projection stretches the map at the poles to create a square map. This projection significantly distorts the scale and area of the map but has two important properties that outweigh this distortion: +Azure Maps uses the Spherical Mercator projection coordinate system ([EPSG:3857]). A projection is the mathematical model used to transform the spherical globe into a flat map. The Spherical Mercator projection stretches the map at the poles to create a square map. This projection significantly distorts the scale and area of the map but has two important properties that outweigh this distortion: - It's a conformal projection, which means that it preserves the shape of relatively small objects. Preserving the shape of small objects is especially important when showing aerial imagery. For example, we want to avoid distorting the shape of buildings. Square buildings should appear square, not rectangular.-- It's a cylindrical projection. North and south are always up and down, and west and east are always left and right. +- It's a cylindrical projection. North and south are always up and down, and west and east are always left and right. To optimize the performance of map retrieval and display, the map is divided into square tiles. The Azure Maps SDKs use tiles that have a size of 512 x 512 pixels for road maps, and smaller 256 x 256 pixels for satellite imagery. Azure Maps provides raster and vector tiles for 23 zoom levels, numbered 0 through 22. At zoom level 0, the entire world fits on a single tile: Zoom level 1 uses four tiles to render the world: a 2 x 2 square Each additional zoom level quad-divides the tiles of the previous one, creating a grid of 2<sup>zoom</sup> x 2<sup>zoom</sup>. Zoom level 22 is a grid 2<sup>22</sup> x 2<sup>22</sup>, or 4,194,304 x 4,194,304 tiles (17,592,186,044,416 tiles in total). -The Azure Maps interactive map controls for web and Android support 25 zoom levels, numbered 0 through 24. Although road data will only be available at the zoom levels in when the tiles are available. +The Azure Maps interactive map controls for web and Android support 25 zoom levels, numbered 0 through 24. However, road data is only available at the zoom levels in which the tiles are available. The following table provides the full list of values for zoom levels where the tile size is **512** pixels square at latitude 0: When determining which zoom level to use, remember each location is in a fixed p Once the zoom level is determined, the x and y values can be calculated. The top-left tile in each zoom grid is x=0, y=0; the bottom-right tile is at x=2<sup>zoom</sup>-1, y=2<sup>zoom</sup>-1. -Here is the zoom grid for zoom level 1: +Here's the zoom grid for zoom level 1: :::image type="content" border="false" source="./media/zoom-levels-and-tile-grid/api_x_y.png" alt-text="Zoom grid for zoom level 1"::: ## Quadkey indices -Some mapping platforms use a `quadkey` indexing naming convention that combines the tile ZY coordinates into a one-dimension string called `quadtree` keys or `quadkeys` for short. Each `quadkey` uniquely identifies a single tile at a particular level of detail, and it can be used as a key in common database B-tree indexes.
The Azure Maps SDKs support the overlaying of tile layers that use `quadkey` naming convention in addition to other naming conventions as documented in the [Add a tile layer](map-add-tile-layer.md) document. +Some mapping platforms use a `quadkey` indexing naming convention that combines the tile ZY coordinates into a one-dimensional string called `quadtree` keys or `quadkeys` for short. Each `quadkey` uniquely identifies a single tile at a particular level of detail, and it can be used as a key in common database B-tree indexes. The Azure Maps SDKs support the overlaying of tile layers that use `quadkey` naming convention in addition to other naming conventions as documented in the [Add a tile layer] document. > [!NOTE]-> The `quadkeys` naming convention only works for zoom levels of one or greater. The Azure Maps SDK's support zoom level 0 which is a single map tile for the whole world. +> The `quadkeys` naming convention only works for zoom levels of one or greater. The Azure Maps SDKs support zoom level 0, which is a single map tile for the whole world. To convert tile coordinates into a `quadkey`, the bits of the Y and X coordinates are interleaved, and the result is interpreted as a base-4 number (with leading zeros maintained) and converted into a string. For instance, given tile XY coordinates of (3, 5) at level 3, the `quadkey` is determined as follows: tileY = 5 = 101 (base 2) quadkey = 100111 (base 2) = 213 (base 4) = "213" ``` -`Qquadkeys` have several interesting properties. First, the length of a `quadkey` (the number of digits) equals the zoom level of the corresponding tile. Second, the `quadkey` of any tile starts with the `quadkey` of its parent tile (the containing tile at the previous level). As shown in the example below, tile 2 is the parent of tiles 20 through 23: +`Quadkeys` have several interesting properties. First, the length of a `quadkey` (the number of digits) equals the zoom level of the corresponding tile. Second, the `quadkey` of any tile starts with the `quadkey` of its parent tile (the containing tile at the previous level). As shown in the following example, tile 2 is the parent of tiles 20 through 23: :::image type="content" border="false" source="./media/zoom-levels-and-tile-grid/quadkey-tile-pyramid.png" alt-text="Quadkey tile pyramid"::: module AzureMaps { * * * > [!NOTE]-> The interactive map controls in the Azure Maps SDK's have helper functions for converting between geospatial positions and viewport pixels. -> - [Web SDK: Map pixel and position calculations](/javascript/api/azure-maps-control/atlas.map#pixelstopositions-pixel) +> The interactive map controls in the Azure Maps SDKs have helper functions for converting between geospatial positions and viewport pixels.
+> +> - [Web SDK: Map pixel and position calculations] ## Next steps Directly access map tiles from the Azure Maps REST > [!div class="nextstepaction"]-> [Get map tiles](/rest/api/maps/render/getmaptile) +> [Get map tiles] > [!div class="nextstepaction"]-> [Get traffic flow tiles](/rest/api/maps/traffic/gettrafficflowtile) +> [Get traffic flow tiles] > [!div class="nextstepaction"]-> [Get traffic incident tiles](/rest/api/maps/traffic/gettrafficincidenttile) +> [Get traffic incident tiles] Learn more about geospatial concepts: > [!div class="nextstepaction"]-> [Azure Maps glossary](glossary.md) +> [Azure Maps glossary] ++[EPSG:3857]: https://epsg.io/3857 +[Web SDK: Map pixel and position calculations]: /javascript/api/azure-maps-control/atlas.map#pixelstopositions-pixel +[Add a tile layer]: map-add-tile-layer.md +[Get map tiles]: /rest/api/maps/render/getmaptile +[Get traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile +[Get traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile +[Azure Maps glossary]: glossary.md |
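Here's a small helper that applies the interleaving rule described above and reproduces the worked example, where tile (3, 5) at level 3 yields "213". This is a sketch, not code from the Azure Maps SDK.

```python
def tile_xy_to_quadkey(tile_x: int, tile_y: int, zoom: int) -> str:
    """Interleave the bits of Y and X and read the result as a base-4 string."""
    digits = []
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if tile_x & mask:
            digit += 1
        if tile_y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

print(tile_xy_to_quadkey(3, 5, 3))       # "213", matching the worked example
print(len(tile_xy_to_quadkey(3, 5, 3)))  # quadkey length equals the zoom level: 3
print(tile_xy_to_quadkey(1, 2, 2))       # "21", the parent tile's quadkey at level 2
```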
azure-monitor | Azure Monitor Agent Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-health.md | |
azure-monitor | Alerts Metric Near Real Time | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md | Here's the full list of Azure Monitor metric sources supported by the newer aler |Microsoft.ClassicStorage/storageAccounts/tableServices | Yes | No | [Azure Table Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | |Microsoft.CloudTest/hostedpools | Yes | No | [1ES Hosted Pools](../essentials/metrics-supported.md#microsoftcloudtesthostedpools) | |Microsoft.CloudTest/pools | Yes | No | [CloudTest Pools](../essentials/metrics-supported.md#microsoftcloudtestpools) |-|Microsoft.CognitiveServices/accounts | Yes | No | [Azure Cognitive Services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | +|Microsoft.CognitiveServices/accounts | Yes | No | [Azure AI services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | |Microsoft.Compute/cloudServices | Yes | No | [Azure Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) | |Microsoft.Compute/cloudServices/roles | Yes | No | [Azure Cloud Services roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | |Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Azure Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) | |
azure-monitor | Availability Test Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md | Title: Migrate from Azure Monitor Application Insights classic URL ping tests to standard tests description: How to migrate from Azure Monitor Application Insights classic availability URL ping tests to standard tests. + Last updated 07/19/2023 |
azure-monitor | Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md | Alternatively, you can subscribe to this page as an RSS feed by adding https://g You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or Application Insights Agent to send data to the portal. +> [!NOTE] +> These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`. + | Purpose | URL | Type | IP | Ports | | | | | | | | Telemetry | dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>\*.in.applicationinsights.azure.com<br/><br/> |Global<br/>Global<br/>Global<br/>Regional<br/>|| 443 | | Live Metrics | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com<br/><br/>{region}.livediagnostics.monitor.azure.com<br/><br/>*Example for {region}: westus2<br/>Find all supported regions in [this table](#addresses-grouped-by-region-azure-public-cloud).*|Global<br/>Global<br/>Global<br/><br/>Regional<br/>|20.49.111.32/29<br/>13.73.253.112/29| 443 | -> [!IMPORTANT] -> For Live Metrics, it is *required* to add the list of IPs for the respective region aside from global IPs. - > [!NOTE]-> These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`. +> Application Insights ingestion endpoints are IPv4 only. -> [!NOTE] -> As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Application Insights connection-string based regional telemetry endpoints only support TLS 1.2. Global telemetry endpoints continue to support TLS 1.0 and TLS 1.1. -> -> If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework see [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) to support the newer TLS version. +> [!IMPORTANT] +> For Live Metrics, it is *required* to add the list of [IPs for the respective region](#addresses-grouped-by-region-azure-public-cloud) aside from global IPs. ## Application Insights Agent Download [China cloud IP addresses](https://www.microsoft.com/download/details.a #### Addresses grouped by region (Azure public cloud) +Add the subdomain of the corresponding region to the Live Metrics URL from the [Outgoing ports](#outgoing-ports) table. + > [!NOTE]-> Add the subdomain of the corresponding region to the Live Metrics URL from the [Outgoing ports](#outgoing-ports) table. +> As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Application Insights connection-string based regional telemetry endpoints only support TLS 1.2. Global telemetry endpoints continue to support TLS 1.0 and TLS 1.1. +> +> If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework see [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) to support the newer TLS version. | Continent/Country | Region | Subdomain | IP | | | | | | |
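Python's standard `ipaddress` module can expand the CIDR ranges in these tables when building firewall rules; the addresses below are the ones used as examples above.

```python
import ipaddress

# Expand a CIDR entry from the tables above.
network = ipaddress.ip_network("51.144.56.112/28")
print(network.num_addresses)      # 16 addresses
print(network[0], network[-1])    # 51.144.56.112 51.144.56.127

# Check whether a specific address falls inside one of the published Live Metrics ranges.
print(ipaddress.ip_address("20.49.111.35") in ipaddress.ip_network("20.49.111.32/29"))  # True
```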
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 06/23/2023 Last updated : 07/27/2023 ms.devlang: csharp Live Metrics custom filters allow you to control which of your application's tel > [!NOTE] > On September 30, 2025, API keys used to stream Live Metrics telemetry into Application Insights will be retired. After that date, applications that use API keys won't be able to send Live Metrics data to your Application Insights resource. Authenticated telemetry ingestion for Live Metrics streaming to Application Insights will need to be done with [Azure AD authentication for Application Insights](./azure-ad-authentication.md). -It's possible to try custom filters without having to set up an authenticated channel. Select any of the filter icons and authorize the connected servers. If you choose this option, you'll have to authorize the connected servers once every new session or whenever a new server comes online. +It's possible to try custom filters without having to set up an authenticated channel. Select any of the filter icons and authorize the connected servers. If you choose this option, you have to authorize the connected servers once every new session or whenever a new server comes online. > [!WARNING] > We strongly discourage the use of unsecured channels and will disable this option six months after you start using it. The **Authorize connected servers** dialog displays the date after which this option will be disabled. Create an API key from within your Application Insights resource and go to **Set | Azure Functions v2 | Supported | Supported | Supported | Supported | **Not supported** | | Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not supported** | Supported (V3.2.0+) | **Not supported** | | Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not supported** | Supported (V1.3.0+) | **Not supported** |+| Python | **Not supported** | **Not supported** | **Not supported** | **Not supported** | **Not supported** | Basic metrics include request, dependency, and exception rate. Performance metrics (performance counters) include memory and CPU. Sample telemetry shows a stream of detailed information for failed requests and dependencies, exceptions, events, and traces. Basic metrics include request, dependency, and exception rate. Performance metri Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check that [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers. -As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics won't display any data. For applications based on .NET Framework 4.5.1, see [Enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support the newer TLS version. +As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. 
If you're using an older version of TLS, Live Metrics doesn't display any data. For applications based on .NET Framework 4.5.1, see [Enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support the newer TLS version. ### Missing configuration for .NET As described in the [Azure TLS 1.2 migration announcement](https://azure.microso When navigating to Live Metrics, you may see a banner with the status message: "Data is temporarily inaccessible. The updates on our status are posted here https://aka.ms/aistatus " -Follow the link to the *Azure status* page and check if there's an activate outage affecting Application Insights. If there's no outage, verify if any firewalls or browser extensions are blocking access to Live Metrics. For example, some popular ad-blocker extensions block connections to `*.monitor.azure.com`. In order to use the full capabilities of Live Metrics, either disable the ad-blocker extension or add an exclusion rule for the domain `*.livediagnostics.monitor.azure.com` to your ad-blocker, firewall, etc. +Follow the link to the *Azure status* page and check if there's an active outage affecting Application Insights. Verify that firewalls and browser extensions aren't blocking access to Live Metrics if an outage isn't occurring. For example, some popular ad-blocker extensions block connections to `*.monitor.azure.com`. In order to use the full capabilities of Live Metrics, either disable the ad-blocker extension or add an exclusion rule for the domain `*.livediagnostics.monitor.azure.com` to your ad-blocker, firewall, etc. ### Unexpected large number of requests to livediagnostics.monitor.azure.com Heavier traffic is expected while the LiveMetrics pane is open. Navigate away from the LiveMetrics pane to restore the normal flow of traffic. Application Insights SDKs poll QuickPulse endpoints with REST API calls once every five seconds to check if the LiveMetrics pane is being viewed. -The SDKs will send new metrics to QuickPulse every one second while the LiveMetrics pane is open. +The SDKs send new metrics to QuickPulse every one second while the LiveMetrics pane is open. ## Next steps |
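A quick diagnostic sketch, independent of any SDK, can report which TLS version your environment negotiates with the global Live Metrics endpoint listed in the outgoing-ports table. The host name comes from that table; the approach assumes outbound port 443 is open.

```python
import socket
import ssl

host = "live.applicationinsights.azure.com"
context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Expect "TLSv1.2" or later for regional endpoints to accept telemetry.
        print(tls.version())
```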
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | Telemetry emitted by these Azure SDKs is automatically collected by default: * [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+ * [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+ * [Azure Event Hubs - Azure Blob Storage Checkpoint Store](/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+-* [Azure Form Recognizer](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+ +* [Azure AI Document Intelligence](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+ * [Azure Identity](/java/api/overview/azure/identity-readme) 1.2.4+ * [Azure Key Vault - Certificates](/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+ * [Azure Key Vault - Keys](/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+ The following table represents the currently supported custom telemetry types: |-||-|--|||-|--| | **ASP.NET Core** | | | | | | | | | OpenTelemetry API | | Yes | Yes | Yes | | Yes | |-| iLogger API | | | | | | | Yes | +| ILogger API | | | | | | | Yes | | AI Classic API | | | | | | | | | | | | | | | | | | **Java** | | | | | | | | Get the request trace ID and the span ID in your code: - To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python). - To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).+- To enable usage experiences, [enable web or browser user monitoring](javascript.md). |
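For the "get the request trace ID and the span ID in your code" step, a minimal sketch using the OpenTelemetry Python API is shown below. It assumes a tracer provider has already been configured (for example, by the Azure Monitor OpenTelemetry distro); otherwise the IDs are all zeros.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("example-operation"):
    ctx = trace.get_current_span().get_span_context()
    # Format as the W3C-style hex strings that appear in telemetry.
    trace_id = format(ctx.trace_id, "032x")
    span_id = format(ctx.span_id, "016x")
    print(trace_id, span_id)
```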
azure-monitor | Azure Monitor Workspace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md | In certain circumstances, splitting an Azure Monitor workspace into multiple wor ## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus.-- Azure Monitor Private Links aren't supported for Prometheus metrics collection into Azure monitor workspace.-- Azure Monitor workspaces are currently only supported in public clouds.-- Azure Monitor workspaces don't currently support being moved into a different subscription or resource group once created.+ ## Data considerations Data stored in the Azure Monitor Workspace is handled in accordance with all standards described in the [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1). Several considerations exist specific to data stored in the Azure Monitor Workspace: |
azure-monitor | Data Collection Rule Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md | The following resources describe different scenarios for creating DCRs. In some |:|:|:| | Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then apply that rule to one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. | | | [Use Azure Policy to install Azure Monitor Agent and associate with a DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install Azure Monitor Agent and associate one or more DCRs with any virtual machines or virtual machine scale sets as they're created in your subscription.-| Custom logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md) | Send custom data by using a REST API. The API call connects to a data collection endpoint and specifies a DCR to use. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. | +| Custom logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md)<br>[Configure custom logs by using Azure Monitor Agent](../agents/data-collection-text-log.md) | Send custom data by using a REST API or Agent. The API call connects to a data collection endpoint and specifies a DCR to use. The agent uses the DCR to configure the collection of data on a machine. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. | | Azure Event Hubs | [Ingest events from Azure Event Hubs to Azure Monitor Logs](../logs/ingest-logs-event-hub.md)| Collect data from multiple sources to an event hub and ingest the data you need directly into tables in one or more Log Analytics workspaces. This is a highly scalable method of collecting data from a wide range of sources with minimum configuration.| | Workspace transformation | [Configure ingestion-time transformations by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations by using Azure Resource Manager templates and the REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. | |
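As a sketch of the "send custom data by using a REST API" scenario, the Python `azure-monitor-ingestion` library wraps the Logs Ingestion API. The endpoint, DCR immutable ID, stream name, and column names below are placeholders, not values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholder values; substitute your data collection endpoint, DCR immutable ID, and stream.
endpoint = "https://<your-data-collection-endpoint>.ingest.monitor.azure.com"
rule_id = "dcr-00000000000000000000000000000000"
stream_name = "Custom-MyTable_CL"

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())
client.upload(
    rule_id=rule_id,
    stream_name=stream_name,
    logs=[{"TimeGenerated": "2023-07-27T00:00:00Z", "Computer": "web-01", "AdditionalContext": "sample"}],
)
```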
azure-monitor | Data Collection Rule Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md | ms.reviewer: nikeist [Data collection rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some DCRs will be created and managed by Azure Monitor. You might create other DCRs to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing DCRs in those cases where you need to work with them directly. ## Custom logs-A DCR for [custom logs](../logs/logs-ingestion-api-overview.md) contains the following sections. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md). +A DCR for [API-based custom logs](../logs/logs-ingestion-api-overview.md) contains the following sections. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md). ### streamDeclarations This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose: This section ties the other sections together. It defines the following properti - `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of `outputStream` has the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream. ## Azure Monitor Agent- A DCR for [Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). + A DCR for [Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). For agent-based custom logs, see [Sample Custom Log Rules - Agent](../agents/data-collection-text-log.md). ### dataSources This unique source of monitoring data has its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and Syslog. Each data source matches a particular data source type as described in the following table. |
azure-monitor | Data Collection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md | Transformations are performed in Azure Monitor in the [data ingestion pipeline]( Transformations are defined in a [data collection rule (DCR)](data-collection-rule-overview.md) and use a [Kusto Query Language (KQL) statement](data-collection-transformations-structure.md) that's applied individually to each entry in the incoming data. It must understand the format of the incoming data and create output in the structure expected by the destination. -For example, a DCR that collects data from a virtual machine by using Azure Monitor Agent would specify particular data to collect from the client operating system. It could also include a transformation that would get applied to that data after it's sent to the data ingestion pipeline that further filters the data or adds a calculated column. The following diagram shows this workflow. +For example, a DCR that collects data from a virtual machine by using Azure Monitor Agent would specify particular data to collect from the client operating system. It could also include a transformation that would get applied to that data after it's sent to the data ingestion pipeline that further filters the data or adds a calculated column. See [Creating Agent Transforms](../agents/azure-monitor-agent-transformation.md). The following diagram shows this workflow. :::image type="content" source="media/data-collection-transformations/transformation-azure-monitor-agent.png" lightbox="media/data-collection-transformations/transformation-azure-monitor-agent.png" alt-text="Diagram that shows ingestion-time transformation for Azure Monitor Agent." border="false"::: There are multiple methods to create transformations depending on the data colle |:|:| | Logs ingestion API with transformation | [Send data to Azure Monitor Logs by using REST API (Azure portal)](../logs/tutorial-logs-ingestion-portal.md)<br>[Send data to Azure Monitor Logs by using REST API (Azure Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md) | | Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs by using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md)+| Agent Transformations in a DCR | [Add transformation to Azure Monitor Log](../agents/azure-monitor-agent-transformation.md) ## Cost for transformations While transformations themselves don't incur direct costs, the following scenarios can result in additional charges: |
azure-monitor | Resource Logs Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md | The schema for resource logs varies depending on the resource and log category. | Azure Automation |[Log Analytics for Azure Automation](../../automation/automation-manage-send-joblogs-log-analytics.md) | | Azure Batch |[Azure Batch logging](../../batch/batch-diagnostics.md) | | Azure Cognitive Search | [Cognitive Search monitoring data reference (schemas)](../../search/monitor-azure-cognitive-search-data-reference.md#schemas) |-| Azure Cognitive Services | [Logging for Azure Cognitive Services](../../ai-services/diagnostic-logging.md) | +| Azure AI services | [Logging for Azure AI services](../../ai-services/diagnostic-logging.md) | | Azure Container Instances | [Logging for Azure Container Instances](../../container-instances/container-instances-log-analytics.md#log-schema) | | Azure Container Registry | [Logging for Azure Container Registry](../../container-registry/monitor-service.md) | | Azure Content Delivery Network | [Diagnostic logs for Azure Content Delivery Network](../../cdn/cdn-azure-diagnostic-logs.md) | |
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | Use the following commands to link a workspace to a cluster: ```azurecli # Find cluster resource ID az account set --subscription "cluster-subscription-id"-$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, "cluster-name")].[id]" --output tsv +$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, 'cluster-name')].[id]" --output tsv # Link workspace az account set --subscription "workspace-subscription-id" az monitor log-analytics workspace linked-service create --no-wait --name cluster --resource-group "resource-group-name" --workspace-name "workspace-name" --write-access-resource-id $clusterResourceId # Wait for job completion when `--no-wait` was used-$workspaceResourceId = az monitor log-analytics workspace list --resource-group "resource-group-name" --query "[?contains(name, "workspace-name")].[id]" --output tsv +$workspaceResourceId = az monitor log-analytics workspace list --resource-group "resource-group-name" --query "[?contains(name, 'workspace-name')].[id]" --output tsv az resource wait --deleted --ids $workspaceResourceId --include-response-body true ``` |
azure-netapp-files | Double Encryption At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md | When you create a volume in a double-encryption capacity pool, the default key m Azure NetApp Files double encryption at rest is supported for the following regions: +* Australia Central +* Australia Central 2 +* Australia East +* Australia Southeast +* Brazil South +* Canada Central +* Central US +* East Asia +* East US +* East US 2 +* France Central +* Germany West Central +* Japan East +* Korea Central +* North Central US +* North Europe +* Norway East +* Qatar Central +* South Africa North +* South Central US +* Sweden Central +* Switzerland North +* UAE North +* UK South * West Europe-* East US 2 -* East Asia -+* West US +* West US 2 +* West US 3 + ## Considerations * Azure NetApp Files double encryption at rest supports [Standard network features](azure-netapp-files-network-topologies.md#configurable-network-features), but not Basic network features. |
azure-netapp-files | Network Attached Storage Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md | For frequently asked questions regarding SMB in Azure NetApp Files, see the [Azu ## Next steps * [Azure NetApp Files NFS FAQ](faq-nfs.md)-* [Azure NetApp Files SMB FAQ](faq-smb.md) +* [Azure NetApp Files SMB FAQ](faq-smb.md) +* [Understand file locking and lock types in Azure NetApp Files](understand-file-locks.md) |
azure-portal | Get Subscription Tenant Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md | Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal 1. Copy the **Tenant ID** by selecting the **Copy to clipboard** icon shown next to it. You can paste this value into a text document or other location. > [!TIP]-> You can also find your tenant programmatically by using [Azure Powershell](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-powershell) or [Azure CLI](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-cli). +> You can also find your tenant programmatically by using [Azure PowerShell](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-powershell) or [Azure CLI](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-cli). ## Next steps |
azure-resource-manager | Deployment Stacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md | Title: Create & deploy deployment stacks in Bicep description: Describes how to create deployment stacks in Bicep. + Last updated 07/20/2023 |
azure-resource-manager | Patterns Shared Variable File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-shared-variable-file.md | When you define your resource names, use string interpolation to concatenate the ## Example 2: Network security group rules -Suppose you have multiple Bicep file that define their own network security groups (NSG). You have a common set of security rules that must be applied to each NSG, and then you have application-specific rules that must be added. +Suppose you have multiple Bicep files that define their own network security groups (NSG). You have a common set of security rules that must be applied to each NSG, and then you have application-specific rules that must be added. Define a JSON file that includes the common security rules that apply across your company: |
azure-resource-manager | Scenarios Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-monitoring.md | Title: Create monitoring resources by using Bicep description: Describes how to create monitoring resources by using Bicep.-- Previously updated : 07/01/2022 Last updated : 07/28/2023 + # Create monitoring resources by using Bicep Azure has a comprehensive suite of tools that can monitor your applications and services. You can programmatically create your monitoring resources using Bicep to automate the creation of rules, diagnostic settings, and alerts when provisioning your Azure infrastructure. -Bringing your monitoring configuration into your Bicep code might seem unusual, considering that there are tools available inside the Azure portal to set up alert rules, diagnostic settings and dashboards. +Bringing your monitoring configuration into your Bicep code might seem unusual, considering that there are tools available inside the Azure portal to set up alert rules, diagnostic settings and dashboards. However, alerts and diagnostic settings are essentially the same as your other infrastructure resources. By including them in your Bicep code, you can deploy and test your alerting resources as you would for other Azure resources. Diagnostic settings enable you to configure Azure Monitor to export your logs an When creating [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in Bicep, remember that this resource is an [extension resource](scope-extension-resources.md), which means it's applied to another resource. You can create diagnostic settings in Bicep by using the resource type [Microsoft.Insights/diagnosticSettings](/azure/templates/microsoft.insights/diagnosticsettings?tabs=bicep). -When creating diagnostic settings in Bicep, you need to apply the scope of the diagnostic setting. The diagnostic setting can be applied at the management, subscription, or resource group level. [Use the scope property on this resource to set the scope for this resource](../../azure-resource-manager/bicep/scope-extension-resources.md). +When creating diagnostic settings in Bicep, you need to apply the scope of the diagnostic setting. The diagnostic setting can be applied at the management, subscription, or resource group level. [Use the scope property on this resource to set the scope for this resource](../../azure-resource-manager/bicep/scope-extension-resources.md). Consider the following example: Metric alerts notify you when one of your metrics crosses a defined threshold. Y The [Azure activity log](../../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insights into events at the subscription level. This includes information such as when a resource in Azure is modified. -Activity log alerts are alerts that are activated when a new activity log event occurs that matches the conditions that are specified in the alert. +Activity log alerts are alerts that are activated when a new activity log event occurs that matches the conditions that are specified in the alert. You can use the `scope` property within the type [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activitylogalerts?tabs=bicep) to create activity log alerts on a specific resource or a list of resources using the resource IDs as a prefix. 
For more information about creating dashboards with code, see [Programmatically ## Autoscale rules -To create an autoscaling setting, you define these using the resource type [Microsoft.Insights/autoscaleSettings](/azure/templates/microsoft.insights/autoscalesettings?tabs=bicep). +To create an autoscaling setting, you define it by using the resource type [Microsoft.Insights/autoscaleSettings](/azure/templates/microsoft.insights/autoscalesettings?tabs=bicep). To target the resource that you want to apply the autoscaling setting to, you need to provide the target resource identifier of the resource that the setting should be added to. -In this example, a *scale out* condition for the App Service plan based on the average CPU percentage over a 10 minute time period. If the App Service plan exceeds 70% average CPU consumption over 10 minutes, the autoscale engine scales out the plan by adding one instance. +In this example, a *scale out* condition is defined for the App Service plan based on the average CPU percentage over a 10-minute period. If the App Service plan exceeds 70% average CPU consumption over 10 minutes, the autoscale engine scales out the plan by adding one instance. ::: code language="bicep" source="~/azure-docs-bicep-samples/samples/scenarios-monitoring/autoscaling-rules.bicep" ::: |
azure-resource-manager | Concepts Built In Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/concepts-built-in-policy.md | If the custom resource provider needs permissions to the scope of the policy to ### Policy assignment To use the built-in policy, create a policy assignment and assign the Deploy associations for a custom resource provider policy. The policy will then identify non-compliant resources and deploy associations for those resources. -![Assign built-in policy](media/concepts-built-in-policy/assign-builtin-policy-customprovider.png) ## Getting help |
azure-resource-manager | Create Custom Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md | Read-Host -Prompt "Press [ENTER] to continue ..." To deploy the template from the Azure portal, select the **Deploy to Azure** button. -[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-docs-json-samples%2Fmaster%2Fcustom-providers%2Fcustomprovider.json) ## View custom resource provider and resource In the portal, the custom resource provider is a hidden resource type. To confirm that the resource provider was deployed, go to the resource group and select **Show hidden types**. To see the custom resource that you deployed, use the `GET` operation on your resource type. The resource type `Microsoft.CustomProviders/resourceProviders/users` shown in the JSON response includes the resource that was created by the template. |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/overview.md | Azure Custom Resource Providers is an extensibility platform to Azure. It allows - How to utilize Azure Custom Resource Providers to extend existing workflows. - Where to find guides and code samples to get started. > [!IMPORTANT] > Custom Resource Providers is currently in public preview. |
azure-resource-manager | Tutorial Custom Providers Function Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md | To install the Azure Table storage bindings: 1. In the **Table name** box, enter *myCustomResources*. 1. Select **Save** to save the updated input parameter. ## Update RESTful HTTP methods To set up the Azure function to include the custom resource provider RESTful req 1. Go to the **Integrate** tab for the `HttpTrigger`. 1. Under **Selected HTTP methods**, select **GET**, **POST**, **DELETE**, and **PUT**. ## Add Azure Resource Manager NuGet packages |
azure-resource-manager | Tutorial Resource Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md | Let's deploy the custom resource provider infrastructure. Either copy, save, and 2. Search for **templates** in **All Services** or by using the main search box: - ![Search for templates](media/tutorial-resource-onboarding/templates.png) + :::image type="content" source="media/tutorial-resource-onboarding/templates.png" alt-text="Screenshot of the search bar in Azure portal with 'templates' entered as the search query."::: 3. Select **Add** on the **Templates** pane: - ![Select Add](media/tutorial-resource-onboarding/templatesadd.png) + :::image type="content" source="media/tutorial-resource-onboarding/templatesadd.png" alt-text="Screenshot of the Templates pane in Azure portal with the Add button highlighted."::: 4. Under **General**, enter a *Name* and *Description* for the new template: - ![Template name and description](media/tutorial-resource-onboarding/templatesdescription.png) + :::image type="content" source="media/tutorial-resource-onboarding/templatesdescription.png" alt-text="Screenshot of the General section in Azure portal where the user enters a Name and Description for the new template."::: 5. Create the Resource Manager template by copying in the JSON template from the "Get started with resource onboarding" section of this article: - ![Create a Resource Manager template](media/tutorial-resource-onboarding/templatesarmtemplate.png) + :::image type="content" source="media/tutorial-resource-onboarding/templatesarmtemplate.png" alt-text="Screenshot of the Azure portal where the user pastes the JSON template into the ARM Template section."::: 6. Select **Add** to create the template. If the new template doesn't appear, select **Refresh**. 7. Select the newly created template and then select **Deploy**: - ![Select the new template and then select Deploy](media/tutorial-resource-onboarding/templateselectspecific.png) + :::image type="content" source="media/tutorial-resource-onboarding/templateselectspecific.png" alt-text="Screenshot of the Azure portal showing the newly created template with the Deploy button highlighted."::: 8. Enter the settings for the required fields and then select the subscription and resource group. You can leave the **Custom Resource Provider Id** box empty. Let's deploy the custom resource provider infrastructure. Either copy, save, and Sample parameters: - ![Enter template parameters](media/tutorial-resource-onboarding/templatescustomprovider.png) + :::image type="content" source="media/tutorial-resource-onboarding/templatescustomprovider.png" alt-text="Screenshot of the Azure portal displaying the template parameters input fields for the custom resource provider deployment."::: 9. Go to the deployment and wait for it to finish. You should see something like the following screenshot. 
You should see the new association resource as an output: - ![Successful deployment](media/tutorial-resource-onboarding/customproviderdeployment.png) + :::image type="content" source="media/tutorial-resource-onboarding/customproviderdeployment.png" alt-text="Screenshot of the Azure portal showing a successful deployment with the new association resource as an output."::: Here's the resource group, with **Show hidden types** selected: - ![Custom resource provider deployment](media/tutorial-resource-onboarding/showhidden.png) + :::image type="content" source="media/tutorial-resource-onboarding/showhidden.png" alt-text="Screenshot of the resource group in Azure portal with Show hidden types selected, displaying the custom resource provider deployment."::: 10. Explore the logic app **Runs history** tab to see the calls for the association create: - ![Logic app Runs history](media/tutorial-resource-onboarding/logicapprun.png) + :::image type="content" source="media/tutorial-resource-onboarding/logicapprun.png" alt-text="Screenshot of the Logic app Runs history tab in Azure portal showing the calls for the association create."::: ## Deploy additional associations After you have the custom resource provider infrastructure set up, you can easil 1. Go to the custom resource provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You need to select the **Show hidden types** check box: - ![Go to the resource](media/tutorial-resource-onboarding/showhidden.png) + :::image type="content" source="media/tutorial-resource-onboarding/showhidden.png" alt-text="Screenshot of the Azure portal displaying the custom resource provider resource in the resource group with Show hidden types selected."::: 2. Copy the Resource ID property of the custom resource provider. 3. Search for *templates* in **All Services** or by using the main search box: - ![Search for templates](media/tutorial-resource-onboarding/templates.png) + :::image type="content" source="media/tutorial-resource-onboarding/templates.png" alt-text="Screenshot of the search bar in Azure portal with 'templates' entered as the search query."::: 4. Select the previously created template and then select **Deploy**: - ![Select the previously created template and then select Deploy](media/tutorial-resource-onboarding/templateselectspecific.png) + :::image type="content" source="media/tutorial-resource-onboarding/templateselectspecific.png" alt-text="Screenshot of the Azure portal showing the previously created template with the Deploy button highlighted."::: 5. Enter the settings for the required fields and then select the subscription and a different resource group. For the **Custom Resource Provider Id** setting, enter the Resource ID that you copied from the custom resource provider that you deployed earlier. 6. Go to the deployment and wait for it to finish. It should now deploy only the new associations resource: - ![New associations resource](media/tutorial-resource-onboarding/createdassociationresource.png) + :::image type="content" source="media/tutorial-resource-onboarding/createdassociationresource.png" alt-text="Screenshot of the Azure portal displaying the successful deployment of the new associations resource."::: You can go back to the logic app **Run history** and see that another call was made to the logic app. You can update the logic app to augment additional functionality for each created association. |
azure-resource-manager | Microsoft Common Checkbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-checkbox.md | The CheckBox control lets users check or uncheck an option. The control returns ## UI sample ## Schema |
azure-resource-manager | Microsoft Common Dropdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md | The DropDown element has different options that determine its appearance in the When only a single item is allowed for selection, the control appears as: When descriptions are included, the control appears as: When multi-select is enabled, the control adds a **Select all** option and checkboxes for selecting more than one item: Descriptions can be included with multi-select enabled. When filtering is enabled, the control includes a text box for adding the filtering value. ## Schema When filtering is enabled, the control includes a text box for adding the filter In the following example, the `defaultValue` is defined using the values of the `allowedValues` instead of the labels. The default value can contain multiple values when `multiselect` is enabled. ```json { |
azure-resource-manager | Microsoft Common Editablegrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-editablegrid.md | A control for gathering tabular input. All fields within the grid are editable a ## UI sample ## Schema |
azure-resource-manager | Microsoft Common Fileupload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-fileupload.md | A control that allows a user to specify one or more files to upload. ## UI sample -![Microsoft.Common.FileUpload](./media/managed-application-elements/microsoft-common-fileupload.png) ## Schema |
azure-resource-manager | Microsoft Common Infobox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-infobox.md | A control that adds an information box. The box contains important text or warni ## UI sample -![Microsoft.Common.InfoBox](./media/managed-application-elements/microsoft-common-infobox.png) ## Schema |
azure-resource-manager | Microsoft Common Optionsgroup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-optionsgroup.md | The OptionsGroup control lets users select one option from two or more choices. ## UI sample ## Schema |
azure-resource-manager | Microsoft Common Passwordbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-passwordbox.md | A control that can be used to provide and confirm a password. ## UI sample -![Microsoft.Common.PasswordBox](./media/managed-application-elements/microsoft-common-passwordbox.png) ## Schema |
azure-resource-manager | Microsoft Common Section | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-section.md | A control that groups one or more elements under a heading. ## UI sample -![Microsoft.Common.Section](./media/managed-application-elements/microsoft-common-section.png) ## Schema |
azure-resource-manager | Microsoft Common Serviceprincipalselector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-serviceprincipalselector.md | The default view is determined by the values in the `defaultValue` property and If you want to register a new application, select **Change selection** and the **Register an application** dialog box is displayed. Enter **Name**, **Supported account type**, and select the **Register** button. After you register a new application, use the **Authentication Type** to enter a password or certificate thumbprint. ### Use existing application To use an existing application, choose **Select Existing** and then select **Make selection**. Use the **Select an application** dialog box to search for the application's name. From the results, select the application and then the **Select** button. After you select an application, the control displays the **Authentication Type** to enter a password or certificate thumbprint. ## Schema |
azure-resource-manager | Microsoft Common Slider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-slider.md | The Slider control lets users select from a range of allowed values. ## UI sample ## Schema |
azure-resource-manager | Microsoft Common Tagsbyresource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-tagsbyresource.md | A control for associating [tags](../management/tag-resources.md) with the resour ## UI sample -![Microsoft.Common.DropDown](./media/managed-application-elements/microsoft-common-tagsbyresource.png) ## Schema |
azure-resource-manager | Microsoft Common Textblock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textblock.md | A control that can be used to add text to the portal interface. ## UI sample -![Microsoft.Common.TextBox](./media/managed-application-elements/microsoft-common-textblock.png) ## Schema |
azure-resource-manager | Microsoft Common Textbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textbox.md | The `TextBox` element uses a single-line or multi-line text box. Example of single-line text box. Example of multi-line text box. ## Schema |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | Pricing tiers determine the capacity and limits of your search service. Tiers in To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see [Service limits in Azure Cognitive Search](../../search/search-limits-quotas-capacity.md). -## Azure Cognitive Services limits +<a name='azure-cognitive-services-limits'></a> ++## Azure AI services limits [!INCLUDE [azure-cognitive-services-limits](../../../includes/azure-cognitive-services-limits.md)] |
azure-resource-manager | Control Plane And Data Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-and-data-plane.md | The control plane includes two scenarios for handling requests - "green field" a ## Data plane -Requests for data plane operations are sent to an endpoint that's specific to your instance. For example, the [Detect Language operation](../../ai-services/language-service/language-detection/overview.md) in Cognitive Services is a data plane operation because the request URL is: +Requests for data plane operations are sent to an endpoint that's specific to your instance. For example, the [Detect Language operation](../../ai-services/language-service/language-detection/overview.md) in Azure AI services is a data plane operation because the request URL is: ```http POST {Endpoint}/text/analytics/v2.0/languages |
azure-resource-manager | Control Plane Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-metrics.md | Now, let's look at some scenarios that can help you explore Azure Resource Man First, navigate to the Azure Monitor blade within the [portal](https://portal.azure.com): After selecting **Explore Metrics**, select a single subscription and then select the **Azure Resource Manager** metric: Then, after selecting **Apply**, you can visualize your Traffic or Latency control plane metrics with custom filtering and splitting: ### Query traffic and latency control plane metrics via REST API curl --location --request GET 'https://management.azure.com/subscriptions/000000 ``` You can also filter directly in the portal: ### Examining Server Errors |
azure-resource-manager | Create Private Link Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-portal.md | When you create a resource management private link, the private link association 1. In the [portal](https://portal.azure.com), search for **Resource management private links** and select it from the available options. - :::image type="content" source="./media/create-private-link-access-portal/search.png" alt-text="Search for resource management private links"::: + :::image type="content" source="./media/create-private-link-access-portal/search.png" alt-text="Screenshot of Azure portal search bar with 'Resource management' entered."::: 1. If your subscription doesn't already have resource management private links, you'll see a blank page. Select **Create resource management private link**. - :::image type="content" source="./media/create-private-link-access-portal/start-create.png" alt-text="Select create for resource management private links"::: + :::image type="content" source="./media/create-private-link-access-portal/start-create.png" alt-text="Screenshot of Azure portal showing the 'Create resource management private link' button."::: 1. Provide values for the new resource management private link. The root management group for the directory you selected is used for the new resource. Select **Review + create**. - :::image type="content" source="./media/create-private-link-access-portal/provide-values.png" alt-text="Specify values for resource management private links"::: + :::image type="content" source="./media/create-private-link-access-portal/provide-values.png" alt-text="Screenshot of Azure portal with fields to provide values for the new resource management private link."::: 1. After validation passes, select **Create**. Now, create a private endpoint that references the resource management private l 1. Navigate to the **Private Link Center**. Select **Create private endpoint**. - :::image type="content" source="./media/create-private-link-access-portal/private-link-center.png" alt-text="Select private link center"::: + :::image type="content" source="./media/create-private-link-access-portal/private-link-center.png" alt-text="Screenshot of Azure portal's Private Link Center with 'Create private endpoint' highlighted."::: 1. In the **Basics** tab, provide values for your private endpoint. - :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-basics.png" alt-text="Provide values for basics"::: + :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-basics.png" alt-text="Screenshot of Azure portal showing the 'Basics' tab with fields to provide values for the private endpoint."::: 1. In the **Resource** tab, select **Connect to an Azure resource in my directory**. For resource type, select **Microsoft.Authorization/resourceManagementPrivateLinks**. For target subresource, select **ResourceManagement**. - :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-resource.png" alt-text="Provide values for resource"::: + :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-resource.png" alt-text="Screenshot of Azure portal showing the 'Resource' tab with fields to select resource type and target subresource for the private endpoint."::: 1. In the **Configuration** tab, select your virtual network. We recommend integrating with a private DNS zone. 
Select **Review + create**. To make sure your environment is properly configured, check the local IP address 1. Verify that the record set named **management** has a valid local IP address. - :::image type="content" source="./media/create-private-link-access-portal/verify.png" alt-text="Verify local IP address"::: + :::image type="content" source="./media/create-private-link-access-portal/verify.png" alt-text="Screenshot of Azure portal displaying the private DNS zone resource with the record set named 'management' and its local IP address."::: ## Next steps |
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | az group delete --name ExampleResourceGroup 1. Select **Delete resource group**. - ![Delete resource group](./media/delete-resource-group/delete-group.png) + :::image type="content" source="./media/delete-resource-group/delete-group.png" alt-text="Screenshot of the Delete resource group button in the Azure portal."::: 1. To confirm the deletion, type the name of the resource group az resource delete \ 1. Select **Delete**. The following screenshot shows the management options for a virtual machine. - ![Delete resource](./media/delete-resource-group/delete-resource.png) + :::image type="content" source="./media/delete-resource-group/delete-resource.png" alt-text="Screenshot of the Delete button for a virtual machine in the Azure portal."::: 1. When prompted, confirm the deletion. |
azure-resource-manager | Lock Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md | Applying locks can lead to unexpected results. Some operations, which don't seem - A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting a virtual machine. These operations require a POST method request. -- A read-only lock on a **resource group** that contains a **virtual machine** prevents users from moving the the VM out of the resource group.+- A read-only lock on a **resource group** that contains a **virtual machine** prevents users from moving the VM out of the resource group. - A read-only lock on a **resource group** prevents users from moving any new **resource** into that resource group. Instead, delete the service, which also deletes the infrastructure resource grou For managed applications, choose the service you deployed. -![Select service](./media/lock-resources/select-service.png) Notice the service includes a link for a **Managed Resource Group**. That resource group holds the infrastructure and is locked. You can only delete it indirectly. -![Show managed group](./media/lock-resources/show-managed-group.png) To delete everything for the service, including the locked infrastructure resource group, choose **Delete** for the service. -![Delete service](./media/lock-resources/delete-service.png) ## Configure locks |
azure-resource-manager | Manage Resource Groups Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md | The resource group stores metadata about the resources. Therefore, when you spec 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Resource groups** - ![add resource group](./media/manage-resource-groups-portal/manage-resource-groups-add-group.png) + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted."::: 3. Select **Add**. 4. Enter the following values: The resource group stores metadata about the resources. Therefore, when you spec - **Resource group**: Enter a new resource group name. - **Region**: Select an Azure location, such as **Central US**. - ![create resource group](./media/manage-resource-groups-portal/manage-resource-groups-create-group.png) + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region."::: 5. Select **Review + Create** 6. Select **Create**. It takes a few seconds to create a resource group. 7. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group - ![go to resource group](./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png) + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel."::: ## List resource groups 1. Sign in to the [Azure portal](https://portal.azure.com). 2. To list the resource groups, select **Resource groups** - ![browse resource groups](./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png) + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups."::: 3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display: The resource group stores metadata about the resources. Therefore, when you spec 1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups). 2. Select **Delete resource group**. - ![delete azure resource group](./media/manage-resource-groups-portal/delete-group.png) + :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group."::: For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md). Locking prevents other users in your organization from accidentally deleting or 3. To add a lock to the resource group, select **Add**. 4. Enter **Lock name**, **Lock type**, and **Notes**. 
The lock types include **Read-only**, and **Delete**. - ![lock azure resource group](./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png) + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes."::: For more information, see [Lock resources to prevent unexpected changes](lock-resources.md). |
azure-resource-manager | Manage Resources Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md | To open a resource by the service type: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the left pane, select the Azure service. In this case, **Storage accounts**. If you don't see the service listed, select **All services**, and then select the service type. - ![open azure resource in the portal](./media/manage-resources-portal/manage-azure-resources-portal-open-service.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-open-service.png" alt-text="Screenshot of the Azure portal showing the Storage accounts service selected."::: 3. Select the resource you want to open. - ![Screenshot that highlights the selected resource.](./media/manage-resources-portal/manage-azure-resources-portal-open-resource.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-open-resource.png" alt-text="Screenshot of the Azure portal with a storage account named mystorage0207 highlighted."::: A storage account looks like: - ![Screenshot that shows what a storage account looks like.](./media/manage-resources-portal/manage-azure-resources-portal-open-resource-storage.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-open-resource-storage.png" alt-text="Screenshot of an opened storage account in the Azure portal displaying its overview and settings."::: To open a resource by resource group: To open a resource by resource group: When viewing a resource in the portal, you see the options for managing that particular resource. -![manage Azure resources](./media/manage-resources-portal/manage-azure-resources-portal-manage-resource.png) The screenshot shows the management options for an Azure virtual machine. You can perform operations such as starting, restarting, and stopping a virtual machine. The screenshot shows the management options for an Azure virtual machine. You ca 1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Delete**. The following screenshot shows the management options for a virtual machine. - ![delete azure resource](./media/manage-resources-portal/manage-azure-resources-portal-delete-resource.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-delete-resource.png" alt-text="Screenshot of the Azure portal showing the Delete option for a virtual machine."::: 3. Type the name of the resource to confirm the deletion, and then select **Delete**. For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md). For more information about how Azure Resource Manager orders the deletion of res 1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Move**. The following screenshot shows the management options for a storage account. - ![move azure resource](./media/manage-resources-portal/manage-azure-resources-portal-move-resource.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-move-resource.png" alt-text="Screenshot of the Azure portal displaying the Move option for a storage account."::: 3. 
Select **Move to another resource group** or **Move to another subscription** depending on your needs. For more information, see [Move resources to new resource group or subscription](move-resource-group-and-subscription.md). Locking prevents other users in your organization from accidentally deleting or 1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Locks**. The following screenshot shows the management options for a storage account. - ![lock azure resource](./media/manage-resources-portal/manage-azure-resources-portal-lock-resource.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-lock-resource.png" alt-text="Screenshot of the Azure portal showing the Locks option for a storage account."::: 3. Select **Add**, and then specify the lock properties. For more information, see [Lock resources with Azure Resource Manager](lock-resources.md). Tagging helps you organize your resource group and resources logically. 1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Tags**. The following screenshot shows the management options for a storage account. - ![tag azure resource](./media/manage-resources-portal/manage-azure-resources-portal-tag-resource.png) + :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-tag-resource.png" alt-text="Screenshot of the Azure portal displaying the Tags option for a storage account."::: 3. Specify the tag properties, and then select **Save**. For information, see [Using tags to organize your Azure resources](tag-resources-portal.md). For information, see [Using tags to organize your Azure resources](tag-resources When you open a resource, the portal presents default graphs and tables for monitoring that resource type. The following screenshot shows the graphs for a virtual machine: -![monitor azure resource](./media/manage-resources-portal/manage-azure-resources-portal-monitor-resource.png) You can select the pin icon on the upper right corner of the graphs to pin the graph to the dashboard. To learn about working with dashboards, see [Creating and sharing dashboards in the Azure portal](../../azure-portal/azure-portal-dashboards.md). |
azure-resource-manager | App Service Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md | When you move a Web App across subscriptions, the following guidance applies: If you don't remember the original resource group, you can find it through diagnostics. For your web app, select **Diagnose and solve problems**. Then, select **Configuration and Management**. -![Select diagnostics](./media/app-service-move-limitations/select-diagnostics.png) Select **Migration Options**. -![Select migration options](./media/app-service-move-limitations/select-migration.png) Select the option for recommended steps to move the web app. -![Select recommended steps](./media/app-service-move-limitations/recommended-steps.png) You see the recommended actions to take before moving the resources. The information includes the original resource group for the web app. -![Screen capture shows recommended steps for moving Microsoft dot Web resources.](./media/app-service-move-limitations/recommendations.png) ## Move hidden resource types in portal When using the portal to move your App Service resources, you may see an error indicating that you haven't moved all of the resources. If you see this error, check if there are resource types that the portal didn't display. Select **Show hidden types**. Then, select all of the resources to move. ## Move with free managed certificates |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | There are some important steps to do before moving a resource. By verifying thes Moving resources from one subscription to another is a three-step process: -![cross-subscription move scenario](./media/move-resource-group-and-subscription/cross-subscription-move-scenario.png) For illustration purposes, we have only one dependent resource. To move resources, select the resource group that contains those resources. Select the resources you want to move. To move all of the resources, select the checkbox at the top of list. Or, select resources individually. Select the **Move** button. This button gives you three options: Select whether you're moving the resources to a new resource group or a new subs The source resource group is automatically set. Specify the destination resource group. If you're moving to a new subscription, also specify the subscription. Select **Next**. The portal validates that the resources can be moved. Wait for validation to complete. When validation completes successfully, select **Next**. Acknowledge that you need to update tools and scripts for these resources. To start moving the resources, select **Move**. When the move has completed, you're notified of the result. ## Use Azure PowerShell The lock prevents you from deleting either resource group, creating a new resour The following image shows an error message from the Azure portal when a user tries to delete a resource group that is part of an ongoing move. -![Move error message attempting to delete](./media/move-resource-group-and-subscription/move-error-delete.png) **Question: What does the error code "MissingMoveDependentResources" mean?** |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md | When you send a request through any of the Azure APIs, tools, or SDKs, Resource The following image shows the role Azure Resource Manager plays in handling Azure requests. -![Resource Manager request model](./media/overview/consistent-management-layer.png) All capabilities that are available in the portal are also available through PowerShell, Azure CLI, REST APIs, and client SDKs. Functionality initially released through APIs will be represented in the portal within 180 days of initial release. With Resource Manager, you can: Azure provides four levels of scope: [management groups](../../governance/management-groups/overview.md), subscriptions, [resource groups](#resource-groups), and resources. The following image shows an example of these layers. -![Management levels](./media/overview/scope-levels.png) You apply management settings at any of these levels of scope. The level you select determines how widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply a [policy](../../governance/policy/overview.md) to the subscription, the policy is applied to all resource groups and resources in your subscription. When you apply a policy on the resource group, that policy is applied to the resource group and all its resources. However, another resource group doesn't have that policy assignment. |
azure-resource-manager | Tag Resources Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-portal.md | If a user doesn't have the required access for adding tags, you can assign the * 1. To view the tags for a resource or a resource group, look for existing tags in the overview. If you have not previously applied tags, the list is empty. - ![View tags for resource or resource group](./media/tag-resources-portal/view-tags.png) + :::image type="content" source="./media/tag-resources-portal/view-tags.png" alt-text="Screenshot of Azure portal showing tags for a resource group."::: 1. To add a tag, select **Click here to add tags**. 1. Provide a name and value. - ![Add tag](./media/tag-resources-portal/add-tag.png) + :::image type="content" source="./media/tag-resources-portal/add-tag.png" alt-text="Screenshot of Azure portal with the Add Tag dialog box open."::: 1. Continue adding tags as needed. When done, select **Save**. - ![Save tags](./media/tag-resources-portal/save-tags.png) + :::image type="content" source="./media/tag-resources-portal/save-tags.png" alt-text="Screenshot of Azure portal with the Save button highlighted after adding tags."::: 1. The tags are now displayed in the overview. - ![Show tags](./media/tag-resources-portal/view-new-tags.png) + :::image type="content" source="./media/tag-resources-portal/view-new-tags.png" alt-text="Screenshot of Azure portal displaying newly added tags in the overview section."::: ## Edit tags If a user doesn't have the required access for adding tags, you can assign the * 1. To delete a tag, select the trash icon. Then, select **Save**. - ![Delete tag](./media/tag-resources-portal/delete-tag.png) + :::image type="content" source="./media/tag-resources-portal/delete-tag.png" alt-text="Screenshot of Azure portal with the Delete Tag icon highlighted."::: ## Add tags to multiple resources To bulk assign tags to multiple resources: 1. From any list of resources, select the checkbox for the resources you want to assign the tag. Then, select **Assign tags**. - ![Select multiple resources](./media/tag-resources-portal/select-multiple-resources.png) + :::image type="content" source="./media/tag-resources-portal/select-multiple-resources.png" alt-text="Screenshot of Azure portal showing multiple resources selected for bulk tag assignment."::: 1. Add names and values. When done, select **Save**. - ![Select assign](./media/tag-resources-portal/select-assign.png) + :::image type="content" source="./media/tag-resources-portal/select-assign.png" alt-text="Screenshot of Azure portal with the Assign Tags dialog box open for multiple resources."::: ## View resources by tag To view all resources with a tag: 1. On the Azure portal menu, search for **tags**. Select it from the available options. - ![Find by tag](./media/tag-resources-portal/find-tags-general.png) + :::image type="content" source="./media/tag-resources-portal/find-tags-general.png" alt-text="Screenshot of Azure portal search bar with 'tags' entered and selected from the available options."::: 1. Select the tag for viewing resources. - ![Select tag](./media/tag-resources-portal/select-tag.png) + :::image type="content" source="./media/tag-resources-portal/select-tag.png" alt-text="Screenshot of Azure portal displaying a list of tags with one selected for viewing resources."::: 1. All resources with that tag are displayed. 
- ![View resources by tag](./media/tag-resources-portal/view-resources-by-tag.png) + :::image type="content" source="./media/tag-resources-portal/view-resources-by-tag.png" alt-text="Screenshot of Azure portal showing a list of resources filtered by the selected tag."::: ## Next steps |
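The portal steps in this entry have a command-line equivalent. As a minimal sketch (Azure CLI assumed; names and tag values are placeholders), the same tags can be applied and queried without opening the portal:

```bash
# Set tags on a resource group (this overwrites any tags already on the group).
az group update --name demoGroup --tags Dept=Finance Environment=Test

# List every resource that carries a given tag.
az resource list --tag Dept=Finance --output table
```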
azure-resource-manager | Template Tutorial Create First Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md | az account set --subscription SubscriptionName ## Create resource group -When you deploy a template, you can specify a resource group to contain the resources. Before running the deployment command, create the resource group with either the Bash Azure CLI or Azure PowerShell. +When you deploy a template, you can specify a resource group to contain the resources. Before running the deployment command, create the resource group with either the Bash Azure CLI or Azure PowerShell. > [!NOTE] > Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or the Command Prompt, you may need to remove the backslashes and write the command as one line such as: |
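For the resource group creation mentioned above, a minimal bash sketch with the Azure CLI looks like the following (the group name, location, and template file name are placeholders):

```bash
# Create the resource group that will hold the deployed resources.
az group create --name myResourceGroup --location "Central US"

# Deploy the template into that resource group.
az deployment group create \
  --name firstTemplateDeployment \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```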
azure-resource-manager | Error Not Found | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-not-found.md | When you see dependency problems, you need to gain insight into the order of res 1. Sign in to the [portal](https://portal.azure.com). 1. From the resource group's **Overview**, select the link for the deployment history. - :::image type="content" source="media/error-not-found/select-deployment.png" alt-text="Screenshot that highlights the link to a resource group's deployment history."::: + :::image type="content" source="media/error-not-found/select-deployment.png" alt-text="Screenshot of Azure portal highlighting the link to a resource group's deployment history in the Overview section."::: 1. For the **Deployment name** you want to review, select **Related events**. - :::image type="content" source="media/error-not-found/select-deployment-events.png" alt-text="Screenshot that highlights the link to a deployment's related events."::: + :::image type="content" source="media/error-not-found/select-deployment-events.png" alt-text="Screenshot of Azure portal showing a deployment name with the Related events link highlighted in the deployment history."::: 1. Examine the sequence of events for each resource. Pay attention to the status of each operation and its time stamp. For example, the following image shows three storage accounts that deployed in parallel. Notice that the three storage account deployments started at the same time. - :::image type="content" source="media/error-not-found/deployment-events-parallel.png" alt-text="Screenshot of activity log for resources deployed in parallel."::: + :::image type="content" source="media/error-not-found/deployment-events-parallel.png" alt-text="Screenshot of Azure portal activity log displaying three storage accounts deployed in parallel, with their timestamps and statuses."::: The next image shows three storage accounts that aren't deployed in parallel. The second storage account depends on the first storage account, and the third storage account depends on the second storage account. The first storage account is labeled **Started**, **Accepted**, and **Succeeded** before the next is started. - :::image type="content" source="media/error-not-found/deployment-events-sequence.png" alt-text="Screenshot of activity log for resources deployed in sequential order."::: + :::image type="content" source="media/error-not-found/deployment-events-sequence.png" alt-text="Screenshot of Azure portal activity log displaying three storage accounts deployed in sequential order, with their timestamps and statuses."::: ## Solution 3: Get external resource When you deploy a template, look for expressions that use the [reference](../tem When you delete a resource, there might be a short amount of time when the resource appears in the portal but isn't available. If you select the resource, you'll get an error that the resource is **Not found**. Refresh the portal and the deleted resource should be removed from your list of available resources. If a deleted resource continues to be shown as available for more than a few minutes, [contact support](https://azure.microsoft.com/support/options/). |
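The ordering information that the portal's related events show can also be pulled from the command line. This is a sketch only; it assumes the Azure CLI with placeholder resource group and deployment names, and the JMESPath query simply projects a few fields from each deployment operation:

```bash
# List the operations for a deployment, with provisioning state and start time,
# to see which resources ran in parallel and which waited on dependencies.
az deployment operation group list \
  --resource-group demoGroup \
  --name demoDeployment \
  --query "[].{resource:properties.targetResource.resourceName, state:properties.provisioningState, time:properties.timestamp}" \
  --output table
```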
azure-resource-manager | Error Register Resource Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-register-resource-provider.md | You can see the registration status and register a resource provider namespace t 1. In the search box, enter _subscriptions_. Or if you've recently viewed your subscription, select **Subscriptions**. - :::image type="content" source="media/error-register-resource-provider/select-subscriptions.png" alt-text="Screenshot that shows how to select a subscription."::: + :::image type="content" source="media/error-register-resource-provider/select-subscriptions.png" alt-text="Screenshot of the Azure portal with search box and Subscriptions highlighted."::: 1. Select the subscription you want to use to register a resource provider. - :::image type="content" source="media/error-register-resource-provider/select-subscription-to-register.png" alt-text="Screenshot of link to subscription that's used to register a resource provider."::: + :::image type="content" source="media/error-register-resource-provider/select-subscription-to-register.png" alt-text="Screenshot of the Azure portal subscriptions list, highlighting a specific subscription for resource provider registration."::: 1. To see the list of resource providers, under **Settings** select **Resource providers**. - :::image type="content" source="media/error-register-resource-provider/select-resource-providers.png" alt-text="Screenshot of a subscription's list of resource providers."::: + :::image type="content" source="media/error-register-resource-provider/select-resource-providers.png" alt-text="Screenshot of the Azure portal displaying a subscription's settings, highlighting the 'Resource providers' option."::: 1. To register a resource provider, select the resource provider and then select **Register**. - :::image type="content" source="media/error-register-resource-provider/select-register.png" alt-text="Screenshot of button that registers a selected resource provider."::: + :::image type="content" source="media/error-register-resource-provider/select-register.png" alt-text="Screenshot of the Azure portal resource providers list, showing a specific provider selected and the 'Register' button highlighted."::: |
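The same registration can be done without the portal. A minimal sketch with the Azure CLI, using `Microsoft.Batch` purely as an example namespace:

```bash
# Register a resource provider namespace in the current subscription,
# then confirm its registration state.
az provider register --namespace Microsoft.Batch
az provider show --namespace Microsoft.Batch --query registrationState --output tsv
```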
azure-resource-manager | Error Resource Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-resource-quota.md | Some quotas let you specify a quota limit that's submitted for review and either 1. In the search box, enter _subscriptions_. Or if you've recently viewed your subscription, select **Subscriptions**. - :::image type="content" source="media/error-resource-quota/subscriptions.png" alt-text="Screenshot that shows how to select a subscription."::: + :::image type="content" source="media/error-resource-quota/subscriptions.png" alt-text="Screenshot of the Azure portal with search box and Subscriptions highlighted."::: 1. Select the link for your subscription. - :::image type="content" source="media/error-resource-quota/select-subscription.png" alt-text="Screenshot of the link for the subscription."::: + :::image type="content" source="media/error-resource-quota/select-subscription.png" alt-text="Screenshot of the Azure portal subscriptions list, highlighting a specific subscription link."::: 1. Select **Usage + quotas**. - :::image type="content" source="media/error-resource-quota/select-usage-quotas.png" alt-text="Screenshot of subscription's settings to select usage and quotas."::: + :::image type="content" source="media/error-resource-quota/select-usage-quotas.png" alt-text="Screenshot of the subscription settings page, highlighting the 'Usage + quotas' option in the menu."::: 1. Select **Request increase**. From the quota list, you can also submit a support request for a quota increase. For quotas with a pencil icon, you can specify a quota limit. - :::image type="content" source="media/error-resource-quota/request-increase.png" alt-text="Screenshot of icons to submit a support request or specify a quota limit."::: + :::image type="content" source="media/error-resource-quota/request-increase.png" alt-text="Screenshot of the 'Usage + quotas' page, showing the 'Request increase' button and a pencil icon indicating the option to specify a quota limit."::: 1. Complete the forms for the type of quota you need to increase. - :::image type="content" source="media/error-resource-quota/forms.png" alt-text="Screenshot of the form to submit a request to increase a quota."::: + :::image type="content" source="media/error-resource-quota/forms.png" alt-text="Screenshot of the quota increase request form, displaying various fields for users to provide details about their desired quota increase."::: |
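Before requesting an increase, current usage against the quota limits can be checked from the command line. A minimal sketch, assuming the Azure CLI and a placeholder region:

```bash
# Show current compute usage and limits for a region; rows near their limit
# are the quotas to consider increasing.
az vm list-usage --location westus2 --output table
```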
azure-resource-manager | Error Sku Not Available | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-sku-not-available.md | To determine which SKUs are available in a **Region**, use the [portal](https:// - To see other available sizes, select **See all sizes**. - :::image type="content" source="media/error-sku-not-available/create-vm.png" alt-text="Screenshot of Azure portal deployment to select a virtual machine size."::: + :::image type="content" source="media/error-sku-not-available/create-vm.png" alt-text="Screenshot of Azure portal deployment interface displaying options to select a virtual machine size from a drop-down menu."::: - You can filter and scroll through the available sizes. When you find the VM size you want to use, choose **Select**. - :::image type="content" source="media/error-sku-not-available/available-sizes.png" alt-text="Screenshot of Azure portal that shows available virtual machine sizes."::: + :::image type="content" source="media/error-sku-not-available/available-sizes.png" alt-text="Screenshot of Azure portal showing a list of available virtual machine sizes along with filtering options to narrow down the selection."::: # [REST](#tab/rest) |
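Alongside the portal and REST options in this entry, the Azure CLI can list which SKUs a subscription can actually deploy in a region. A minimal sketch with placeholder values:

```bash
# List VM SKUs in a region, filtered by size prefix; the Restrictions column
# flags SKUs that aren't available to the subscription in that location.
az vm list-skus --location eastus2 --size Standard_D --all --output table
```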
azure-resource-manager | Find Error Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md | An ARM template can be deployed from the portal. If the template has syntax erro The following example attempts to deploy a storage account and a validation error occurs. Select the message for more details. The template has a syntax error with error code `InvalidTemplate`. The **Summary** shows an expression is missing a closing parenthesis. # [PowerShell](#tab/azure-powershell) To see messages about a deployment's operations, use the resource group's **Acti 1. Select **Activity log**. 1. Use the filters to find an operation's error log. - :::image type="content" source="./media/find-error-code/activity-log.png" alt-text="Screenshot of the resource group's activity log that highlights a failed deployment."::: + :::image type="content" source="./media/find-error-code/activity-log.png" alt-text="Screenshot of the Azure portal's resource group activity log, emphasizing a failed deployment with an error log."::: 1. Select the error log to see the operation's details. - :::image type="content" source="./media/find-error-code/activity-log-details.png" alt-text="Screenshot of the activity log details that shows a failed deployment's error message."::: + :::image type="content" source="./media/find-error-code/activity-log-details.png" alt-text="Screenshot of the activity log details in the Azure portal, showing a failed deployment's error message and operation details."::: To view a deployment's result: To view a deployment's result: 1. Select **Settings** > **Deployments**. 1. Select **Error details** for the deployment. - :::image type="content" source="media/find-error-code/deployment-error-details.png" alt-text="Screenshot of a resource group's link to error details for a failed deployment."::: + :::image type="content" source="media/find-error-code/deployment-error-details.png" alt-text="Screenshot of a resource group's deployments section in the Azure portal, displaying a link to error details for a failed deployment."::: 1. The error message and error code `NoRegisteredProviderFound` are shown. - :::image type="content" source="media/find-error-code/deployment-error-summary.png" alt-text="Screenshot of a message that shows deployment error details."::: + :::image type="content" source="media/find-error-code/deployment-error-summary.png" alt-text="Screenshot of a deployment error summary in the Azure portal, showing the error message and error code NoRegisteredProviderFound."::: # [PowerShell](#tab/azure-powershell) |
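The error details surfaced in the portal for a failed deployment can also be retrieved from the command line. A minimal sketch, assuming the Azure CLI and placeholder resource group and deployment names:

```bash
# Show only the error object recorded for a failed deployment,
# including its error code and message.
az deployment group show \
  --resource-group demoGroup \
  --name failedDeployment \
  --query properties.error
```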
azure-resource-manager | Quickstart Troubleshoot Arm Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md | Copy the following template and save it locally. You'll use this file to trouble Open the file in Visual Studio Code. The wavy line under `parameterss:` indicates an error. To see the validation error, hover over the error. You'll notice that `variables` and `resources` have errors for _undefined parameter reference_. To display the template's validation errors, select **View** > **Problems**. All the errors are caused by the incorrect spelling of an element name. Storage names must be between 3 and 24 characters and use only lowercase letters Because the deployment didn't run, there's no deployment history. The activity log shows the preflight error. Select the log to see the error's details. ## Fix deployment error New-AzResourceGroupDeployment ` The deployment begins and is visible in the deployment history. The deployment fails because `outputs` references a virtual network that doesn't exist in the resource group. However, there were no errors for the storage account, so the resource deployed. The deployment history shows a failed deployment. To fix the deployment error, change the reference function to use a valid resource. For more information, see [Resolve resource not found errors](error-not-found.md). For this quickstart, delete the comma that precedes `vnetResult` and all of `vnetResult`. Save the file and rerun the deployment. |
azure-resource-manager | Quickstart Troubleshoot Bicep Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md | output vnetResult object = existingVNet Open the file in Visual Studio Code. You'll notice that Visual Studio Code identifies a syntax error. The first parameter declaration is marked with red squiggles to indicate an error. The lines marked with an error are: parameter storageAccountType string = 'Standard_LRS' When you hover over `parameter`, you see an error message. The message states: _This declaration type is not recognized. Specify a parameter, variable, resource, or output declaration._ If you attempt to deploy this file, you'll get the same error message from the deployment command. You see an error message that indicates preflight validation failed. You also ge Because the error was caught in preflight, no deployment exists in the history. But, the failed deployment exists in the Activity Log. You can open details of the log entry to see the error message. The deployment starts but fails with a message that the virtual network wasn't f Notice in the portal that the deployment appears in the history. You can open the entry in the deployment history to get details about the error. The error also exists in the activity log. |
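The syntax problem that Visual Studio Code flags in this entry can also be caught before deployment from the command line. A minimal sketch, assuming the Bicep tooling bundled with the Azure CLI and a placeholder file name:

```bash
# Compile the Bicep file locally; an unrecognized declaration keyword such as
# 'parameter' instead of 'param' is reported here, before any deployment runs.
az bicep build --file main.bicep
```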
azure-sql-edge | Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md | Title: Connect and query Azure SQL Edge description: Learn how to connect to and query Azure SQL Edge. - Previously updated : 07/25/2020 Last updated : 07/28/2023 - - # Connect and query Azure SQL Edge In Azure SQL Edge, after you deploy a container, you can connect to the database engine from any of the following locations: In Azure SQL Edge, after you deploy a container, you can connect to the database You can connect to an instance of Azure SQL Edge instance from any of these common tools: -* [sqlcmd](/sql/linux/sql-server-linux-setup-tools): sqlcmd client tools are already included in the container image of Azure SQL Edge. If you attach to a running container with an interactive bash shell, you can run the tools locally. SQL client tools are NOT available on the ARM64 platform, as such they are not included in the ARM64 version of the SQL Edge containers. -* [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms) -* [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) -* [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode) +- [sqlcmd](/sql/linux/sql-server-linux-setup-tools): **sqlcmd** client tools are already included in the container image of Azure SQL Edge. If you attach to a running container with an interactive bash shell, you can run the tools locally. SQL client tools *aren't* available on the ARM64 platform. +- [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms) +- [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) +- [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode) -To connect to an Azure SQL Edge database engine from a network machine, you need the following: +To connect to an Azure SQL Edge Database Engine from a network machine, you need the following: - **IP Address or network name of the host machine**: This is the host machine where the Azure SQL Edge container is running.+ - **Azure SQL Edge container host port mapping**: This is the mapping for the Docker container port to a port on the host. Within the container, Azure SQL Edge is always mapped to port 1433. You can change this if you want to. To change the port number, update the **Container Create Options** for the Azure SQL Edge module in Azure IoT Edge. In the following example, port 1433 on the container is mapped to port 1600 on the host. ```JSON To connect to an Azure SQL Edge database engine from a network machine, you need - **SA password for the Azure SQL Edge instance**: This is the value specified for the `SA_PASSWORD` environment variable during deployment of Azure SQL Edge. -## Connect to the database engine from within the container +## Connect to the Database Engine from within the container -The [SQL Server command-line tools](/sql/linux/sql-server-linux-setup-tools) are included in the container image of Azure SQL Edge. If you attach to the container with an interactive command prompt, you can run the tools locally. SQL client tools are NOT available on the ARM64 platform, as such they are not included in the ARM64 version of the SQL Edge containers. +The [SQL Server command-line tools](/sql/linux/sql-server-linux-setup-tools) are included in the container image of Azure SQL Edge. If you attach to the container with an interactive command prompt, you can run the tools locally. SQL client tools aren't available on the ARM64 platform. 1. 
Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example, `e69e056c702d` is the container ID. - ```bash - docker exec -it <Azure SQL Edge container ID or name> /bin/bash - ``` + ```bash + docker exec -it e69e056c702d /bin/bash + ``` - > [!TIP] - > You don't always have to specify the entire container ID. You only have to specify enough characters to uniquely identify it. So in this example, it might be enough to use `e6` or `e69`, rather than the full ID. + > [!TIP] + > You don't always have to specify the entire container ID. You only have to specify enough characters to uniquely identify it. So in this example, it might be enough to use `e6` or `e69`, rather than the full ID. -2. When you're inside the container, connect locally with sqlcmd. Sqlcmd isn't in the path by default, so you have to specify the full path. +1. When you're inside the container, connect locally with **sqlcmd**. **sqlcmd** isn't in the path by default, so you have to specify the full path. - ```bash - /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourPassword>' - ``` + ```bash + /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourPassword>' + ``` -3. When you're finished with sqlcmd, type `exit`. +1. When you're finished with **sqlcmd**, type `exit`. -4. When you're finished with the interactive command prompt, type `exit`. Your container continues to run after you exit the interactive bash shell. +1. When you're finished with the interactive command prompt, type `exit`. Your container continues to run after you exit the interactive bash shell. ## Connect to Azure SQL Edge from another container on the same host Because two containers that are running on the same host are on the same Docker network, you can easily access them by using the container name and the port address for the service. For example, if you're connecting to the instance of Azure SQL Edge from another Python module (container) on the same host, you can use a connection string similar to the following. (This example assumes that Azure SQL Edge is configured to listen on the default port.) ```python- import pyodbc server = 'MySQLEdgeContainer' # Replace this with the actual name of your SQL Edge Docker container username = 'sa' # SQL Server username password = 'MyStrongestP@ssword' # Replace this with the actual SA password from database = 'MyEdgeDatabase' # Replace this with the actual database name from your deployment. If you do not have a database created, you can use Master database. db_connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=" + server + ";Database=" + database + ";UID=" + username + ";PWD=" + password + ";" conn = pyodbc.connect(db_connection_string, autocommit=True)- ``` ## Connect to Azure SQL Edge from another network machine -You might want to connect to the instance of Azure SQL Edge from another machine on the network. To do so, use the IP address of the Docker host and the host port to which the Azure SQL Edge container is mapped. For example, if the IP address of the Docker host is *xxx.xxx.xxx.xxx*, and the Azure SQL Edge container is mapped to host port *1600*, then the server address for the instance of Azure SQL Edge would be *xxx.xxx.xxx.xxx,1600*. The updated Python script is: +You might want to connect to the instance of Azure SQL Edge from another machine on the network. To do so, use the IP address of the Docker host and the host port to which the Azure SQL Edge container is mapped. 
For example, if the IP address of the Docker host is `192.168.2.121`, and the Azure SQL Edge container is mapped to host port *1600*, then the server address for the instance of Azure SQL Edge would be `192.168.2.121,1600`. The updated Python script is: ```python- import pyodbc-server = 'xxx.xxx.xxx.xxx,1600' # Replace this with the actual name of your SQL Edge Docker container +server = '192.168.2.121,1600' # Replace this with the actual name or IP address of your SQL Edge Docker container username = 'sa' # SQL Server username password = 'MyStrongestP@ssword' # Replace this with the actual SA password from your deployment database = 'MyEdgeDatabase' # Replace this with the actual database name from your deployment. If you do not have a database created, you can use Master database. db_connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=" + server + ";Database=" + database + ";UID=" + username + ";PWD=" + password + ";" conn = pyodbc.connect(db_connection_string, autocommit=True)- ``` To connect to an instance of Azure SQL Edge by using SQL Server Management Studio running on a Windows machine, see [SQL Server Management Studio](/sql/linux/sql-server-linux-manage-ssms). -To connect to an instance of Azure SQL Edge by using Visual Studio Code on a Windows, Mac or Linux machine, see [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode). +To connect to an instance of Azure SQL Edge by using Visual Studio Code on a Windows, macOS or Linux machine, see [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode). -To connect to an instance of Azure SQL Edge by using Azure Data Studio on a Windows, Mac or Linux machine, see [Azure Data Studio](/sql/azure-data-studio/quickstart-sql-server). +To connect to an instance of Azure SQL Edge by using Azure Data Studio on a Windows, macOS or Linux machine, see [Azure Data Studio](/sql/azure-data-studio/quickstart-sql-server). ## Next steps -[Connect and query](/sql/linux/sql-server-linux-configure-docker#connect-and-query) --[Install SQL Server tools on Linux](/sql/linux/sql-server-linux-setup-tools) +- [Connect and query](/sql/linux/sql-server-linux-configure-docker#connect-and-query) +- [Install SQL Server tools on Linux](/sql/linux/sql-server-linux-setup-tools) |
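To round out the connection examples in this entry, connecting from another machine on the network can also be done with **sqlcmd**, using the Docker host's IP address and the mapped host port from the example above. This is a sketch only; it assumes the sqlcmd tools are installed on the client machine and uses placeholder values:

```bash
# Connect to Azure SQL Edge over the network using the host IP and mapped port,
# then run a quick query to confirm connectivity.
sqlcmd -S 192.168.2.121,1600 -U sa -P '<YourPassword>' -Q 'SELECT @@VERSION'
```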
azure-web-pubsub | Reference Client Sdk Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-client-sdk-csharp.md | description: This reference describes the C# client-side SDK for Azure Web PubSu -+ Last updated 07/17/2023 export AZURE_LOG_LEVEL=verbose For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger). ### Live Trace-Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. +Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. |
azure-web-pubsub | Reference Client Sdk Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-client-sdk-java.md | description: This reference describes the Java client-side SDK for Azure Web Pub -+ Last updated 07/17/2023 export AZURE_LOG_LEVEL=verbose For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger). ### Live Trace-Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. +Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. |
azure-web-pubsub | Reference Client Sdk Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-client-sdk-python.md | description: This reference describes the Python client-side SDK for Azure Web P -+ Last updated 07/17/2023 export AZURE_LOG_LEVEL=verbose For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger). ### Live Trace-Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. +Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource. |
backup | Azure Kubernetes Service Cluster Backup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md | Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup. Previously updated : 03/27/2023 Last updated : 07/27/2023 Azure Backup now allows you to back up AKS clusters (cluster resources and persi - Before you install an extension in an AKS cluster, you must register the `Microsoft.KubernetesConfiguration` resource provider at the subscription level. Learn how to [register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#resource-provider-registrations). +- Extension agent and extension operator are the core platform components in AKS, which are installed when an extension of any type is installed for the first time in an AKS cluster. These provide capabilities to deploy *1P* and *3P* extensions. The backup extension also relies on these for installation and upgrades. ++- Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster. ++++ Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations). ## Trusted Access |
backup | Backup Azure Enhanced Soft Delete About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md | Title: Overview of enhanced soft delete for Azure Backup (preview) description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 07/14/2023 Last updated : 07/27/2023 The key benefits of enhanced soft delete are: >[!Note] >The soft delete doesn't cost you for 14 days of retention; however, you're charged for the period beyond 14 days. [Learn more](#pricing). - **Re-registration of soft deleted items**: You can now register the items in soft deleted state with another vault. However, you can't register the same item with two vaults for active backups. -- **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This is applicable for applicable workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup and backup of on-premises servers.+- **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This is applicable for applicable workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup and backup of on-premises servers. [Learn more](#soft-deleted-items-reregistration). - **Soft delete across workloads**: Enhanced soft delete applies to all vaulted datasources alike and is supported for Recovery Services vaults and Backup vaults. Enhanced soft delete also applies to operational backups of disks and VM backup snapshots used for instant restores. However, unlike vaulted backups, these snapshots can be directly accessed and deleted before the soft delete period expires. Enhanced soft delete is currently not supported for operational backup for Blobs and Azure Files. - **Soft delete of recovery points**: This feature allows you to recover data from recovery points that might have been deleted due to making changes in a backup policy or changing the backup policy associated with a backup item. Soft delete of recovery points isn't supported for log recovery points in SQL and SAP HANA workloads. [Learn more](manage-recovery-points.md#impact-of-expired-recovery-points-for-items-in-soft-deleted-state). Soft delete retention is the retention period (in days) of a deleted item in sof If a backup item/container is in soft deleted state, you can register it to a vault different from the original one where the soft deleted data belongs. >[!Note]->You can't actively protect one item to two vaults simultaneously. So, if you start protecting a backup container using another vault, you can no longer re-protect the same backup container to the previous vault. +>- You can't actively protect one item to two vaults simultaneously. So, if you start protecting a backup container using another vault, you can no longer re-protect the same backup container to the previous vault. +>- Reregistration is currently not supported with Always on availability group (AAG) or SAP HANA System Replication (HSR) configuration. ## Soft delete of recovery points |
backup | Backup Azure Private Endpoints Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md | Title: How to create and manage private endpoints (with v2 experience) for Azure description: This article explains how to configure and manage private endpoints for Azure Backup. Previously updated : 04/26/2023 Last updated : 07/27/2023 Follow these steps: :::image type="content" source="./media/backup-azure-private-endpoints/deny-public-network.png" alt-text="Screenshot showing how to select the Deny option."::: >[!Note]- >Once you deny access, you can still access the vault, but you can't move data to/from networks that don't contain private endpoints. For more information, see [Create private endpoints for Azure Backup](#create-private-endpoints-for-azure-backup). + >- Once you deny access, you can still access the vault, but you can't move data to/from networks that don't contain private endpoints. For more information, see [Create private endpoints for Azure Backup](#create-private-endpoints-for-azure-backup). + >- Denial of public access is currently not supported for vaults that have *Cross Region Restore* enabled. 3. Select **Apply** to save the changes. |
backup | Encryption At Rest With Cmk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md | Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 01/13/2023 Last updated : 07/27/2023 In this article, you'll learn how to: - The Recovery Services vault can be encrypted only with keys stored in Azure Key Vault, located in the **same region**. Also, keys must be [supported](../key-vault/keys/about-keys.md#key-types-and-protection-methods) **RSA keys** only and should be in **enabled** state. - Moving CMK encrypted Recovery Services vault across Resource Groups and Subscriptions isn't currently supported.-- Recovery Services vaults encrypted with customer-managed keys currently don't support cross-region restore of backed-up instances. - When you move a Recovery Services vault already encrypted with customer-managed keys to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vault's managed identity and CMK (which should be in the new tenant). If this isn't done, the backup and restore operations will fail. Also, any Azure role-based access control (Azure RBAC) permissions set up within the subscription will need to be reconfigured. - This feature can be configured through the Azure portal and PowerShell. The process to configure and perform backups to a Recovery Services vault encryp Data stored in the Recovery Services vault can be restored according to the steps described [here](./backup-azure-arm-restore-vms.md). When restoring from a Recovery Services vault encrypted using customer-managed keys, you can choose to encrypt the restored data with a Disk Encryption Set (DES). +>[!Note] +>The experience described in this section only applies to restore of data from CMK encrypted vaults. When you restore data from a vault that isn't using CMK encryption, the restored data would be encrypted using Platform Managed Keys. If you restore from an instant recovery snapshot, it would be encrypted using the mechanism used for encrypting the source disk. + #### Restore VM/disk 1. When you recover disk / VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks. To specify the Disk Encryption Set under Encryption Settings in the restore pane 2. From the dropdown, select the DES you wish to use for the restored disk(s). **Ensure you have access to the DES.** >[!NOTE]->The ability to choose a DES while restoring isn't available if you're restoring a VM that uses Azure Disk Encryption. +>The ability to choose a DES while restoring isn't available if you're restoring a VM that uses Azure Disk Encryption or if you're performing cross region restore. ![Encrypt disk using your key](./media/encryption-at-rest-with-cmk/encrypt-disk-using-your-key.png) |
baremetal-infrastructure | Supported Instances And Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md | Learn about instances and regions supported for NC2 on Azure. Nutanix Clusters on Azure supports: * Minimum of three bare metal nodes per cluster.-* Maximum of 13 bare metal nodes. +* Maximum of 28 bare metal nodes. * Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure. * Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure. |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## July 2023 Guest OS ->[!NOTE] -->The July Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the July Guest OS. This list is subject to change. | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-07 | [5028168] | Latest Cumulative Update(LCU) | 6.60 | Jul 11, 2023 | -| Rel 23-07 | [5028171] | Latest Cumulative Update(LCU) | 7.28 | Jul 11, 2023 | -| Rel 23-07 | [5028169] | Latest Cumulative Update(LCU) | 5.84 | Jun 11, 2023 | -| Rel 23-07 | [5028871] | .NET Framework 3.5 Security and Quality Rollup | 2.140 | Jul 11, 2023 | -| Rel 23-07 | [5028865] | .NET Framework 4.7.2 Security and Quality Rollup | 2.140 | Jul 11, 2023 | -| Rel 23-07 | [5028872] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.120 | Jul 11, 2023 | -| Rel 23-07 | [5028864] | .NET Framework 4.7.2 Cumulative Update LKG | 4.120 | Ju1 11, 2023 | -| Rel 23-07 | [5028869] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.128 | Jul 11, 2023 | -| Rel 23-07 | [5028863] | .NET Framework 4.7.2 Cumulative Update LKG | 3.128 | Jul 11, 2023 | -| Rel 23-07 | [5028862] | .NET Framework DotNet | 6.60 | Jul 11, 2023 | -| Rel 23-07 | [5028858] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.28 | Ju1 11, 2023 | -| Rel 23-07 | [5028240] | Monthly Rollup | 2.140 | Jul 11, 2023 | -| Rel 23-07 | [5028232] | Monthly Rollup | 3.128 | Jul 11, 2023 | -| Rel 23-07 | [5028228] | Monthly Rollup | 4.120 | Jul 11, 2023 | -| Rel 23-07 | [5027575] | Servicing Stack Update | 3.128 | Jun 13, 2023 | -| Rel 23-07 | [5027574] | Servicing Stack Update LKG | 4.120 | Jun 13, 2023 | -| Rel 23-07 | [4578013] | OOB Standalone Security Update | 4.120 | Aug 19, 2023 | -| Rel 23-07 | [5023788] | Servicing Stack Update LKG | 5.84 | Mar 14, 2023 | -| Rel 23-07 | [5028264] | Servicing Stack Update LKG | 2.140 | Jul 11, 2023 | -| Rel 23-07 | [4494175] | Microcode | 5.84 | Sep 1, 2020 | -| Rel 23-07 | [4494174] | Microcode | 6.60 | Sep 1, 2020 | -| Rel 23-07 | 5028317 | Servicing Stack Update | 7.28 | | -| Rel 23-07 | 5028316 | Servicing Stack Update | 6.60 | | +| Rel 23-07 | [5028168] | Latest Cumulative Update(LCU) | [6.60] | Jul 11, 2023 | +| Rel 23-07 | [5028171] | Latest Cumulative Update(LCU) | [7.28] | Jul 11, 2023 | +| Rel 23-07 | [5028169] | Latest Cumulative Update(LCU) | [5.84] | Jun 11, 2023 | +| Rel 23-07 | [5028871] | .NET Framework 3.5 Security and Quality Rollup | [2.140] | Jul 11, 2023 | +| Rel 23-07 | [5028865] | .NET Framework 4.7.2 Security and Quality Rollup | [2.140] | Jul 11, 2023 | +| Rel 23-07 | [5028872] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.120] | Jul 11, 2023 | +| Rel 23-07 | [5028864] | .NET Framework 4.7.2 Cumulative Update LKG | [4.120] | Ju1 11, 2023 | +| Rel 23-07 | [5028869] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.128] | Jul 11, 2023 | +| Rel 23-07 | [5028863] | .NET Framework 4.7.2 Cumulative Update LKG | [3.128] | Jul 11, 2023 | +| Rel 23-07 | [5028862] | .NET Framework DotNet | [6.60] | Jul 11, 2023 | +| Rel 23-07 | [5028858] | 
.NET Framework 4.8 Security and Quality Rollup LKG | [7.28] | Ju1 11, 2023 | +| Rel 23-07 | [5028240] | Monthly Rollup | [2.140] | Jul 11, 2023 | +| Rel 23-07 | [5028232] | Monthly Rollup | [3.128] | Jul 11, 2023 | +| Rel 23-07 | [5028228] | Monthly Rollup | [4.120] | Jul 11, 2023 | +| Rel 23-07 | [5027575] | Servicing Stack Update | [3.128] | Jun 13, 2023 | +| Rel 23-07 | [5027574] | Servicing Stack Update LKG | [4.120] | Jun 13, 2023 | +| Rel 23-07 | [4578013] | OOB Standalone Security Update | [4.120] | Aug 19, 2023 | +| Rel 23-07 | [5023788] | Servicing Stack Update LKG | [5.84] | Mar 14, 2023 | +| Rel 23-07 | [5028264] | Servicing Stack Update LKG | [2.140] | Jul 11, 2023 | +| Rel 23-07 | [4494175] | Microcode | [5.84] | Sep 1, 2020 | +| Rel 23-07 | [4494174] | Microcode | [6.60] | Sep 1, 2020 | +| Rel 23-07 | 5028317 | Servicing Stack Update | [7.28] | | +| Rel 23-07 | 5028316 | Servicing Stack Update | [6.60] | | [5028168]: https://support.microsoft.com/kb/5028168 [5028171]: https://support.microsoft.com/kb/5028171 The following tables show the Microsoft Security Response Center (MSRC) updates [4494174]: https://support.microsoft.com/kb/4494174 [5028317]: https://support.microsoft.com/kb/5028317 [5028316]: https://support.microsoft.com/kb/5028316+[2.140]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.128]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.120]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.84]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.60]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.28]: ./cloud-services-guestos-update-matrix.md#family-7-releases ## June 2023 Guest OS |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **July 27, 2023** +The July Guest OS has released. + ###### **July 8, 2023** The June Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.30 | | WA-GUEST-OS-7.27_202306-02 | July 8, 2023 | Post 7.29 |-| WA-GUEST-OS-7.25_202305-01 | May 19, 2023 | Post 7.28 | +|~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-7.23_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-7.22_202302-01~~| March 1, 2023 | April 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.60_202307-01 | July 27, 2023 | Post 6.62 | | WA-GUEST-OS-6.59_202306-02 | July 8, 2023 | Post 6.61 |-| WA-GUEST-OS-6.57_202305-01 | May 19, 2023 | Post 6.60 | +|~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-6.55_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-6.54_202302-01~~| March 1, 2023 | April 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.84_202307-01 | July 27, 2023 | Post 5.86 | | WA-GUEST-OS-5.83_202306-02 | July 8, 2023 | Post 5.85 | -| WA-GUEST-OS-5.81_202305-01 | May 19, 2023 | Post 5.84 | +|~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-5.79_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-5.78_202302-01~~| March 1, 2023 | April 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.120_202307-01 | July 27, 2023 | Post 4.122 | | WA-GUEST-OS-4.119_202306-02 | July 8, 2023 | Post 4.121 |-| WA-GUEST-OS-4.117_202305-01 | May 19, 2023 | Post 4.120 | +|~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-4.115_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-4.114_202302-01~~| March 1, 2023 | April 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.128_202307-01 | July 27, 2023 | Post 3.130 | | WA-GUEST-OS-3.127_202306-02 | July 8, 2023 | Post 3.129 |-| WA-GUEST-OS-3.125_202305-01 | May 19, 2023 | Post 3.128 | +|~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-3.122_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-3.121_202302-01~~| March 1, 2023 | April 27, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.140_202307-01 | July 27, 2023 | Post 2.142 | | WA-GUEST-OS-2.139_202306-02 | July 8, 2023 | Post 2.141 |-| WA-GUEST-OS-2.137_202305-01 | May 19, 2023 | Post 2.140 | +|~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-2.135_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-2.134_202302-01~~| March 1, 2023 | April 27, 2023 | |
cognitive-services | Bing Autosuggest Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/bing-autosuggest-upgrade-guide-v5-to-v7.md | Title: Upgrade Bing Autosuggest API v5 to v7-+ description: Identifies the parts of your Bing Autosuggest application that you need to update to use version 7. |
cognitive-services | Get Suggestions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/get-suggestions.md | Title: Suggesting search terms with the Bing Autosuggest API-+ description: This article discusses the concept of suggesting query terms using the Bing Autosuggest API and the impact of query length on relevance. |
cognitive-services | Sending Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/sending-requests.md | Title: "Sending requests to the Bing Autosuggest API"-+ description: The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. Learn more about sending requests. |
cognitive-services | Get Suggested Search Terms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md | Title: What is Bing Autosuggest?-+ description: The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. If your application sends queries to any of the Bing Search APIs, you can use th The Bing Autosuggest API is a RESTful web service, easy to call from any programming language that can make HTTP requests and parse JSON. -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. 2. Send a request to this API each time a user types a new character in your application's search box. 3. Process the API response by parsing the returned JSON message. |
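The request-and-parse flow described in this entry can be sketched with a plain HTTP call. The example below is a sketch under assumptions: it uses the Bing v7 Suggestions endpoint and an environment variable holding the subscription key, both of which should be checked against your own resource:

```bash
# Send a partial query to the Bing Autosuggest endpoint; the JSON response
# contains suggestionGroups with suggested query strings to display.
curl -H "Ocp-Apim-Subscription-Key: $BING_AUTOSUGGEST_KEY" \
  "https://api.bing.microsoft.com/v7.0/Suggestions?q=sail&mkt=en-us"
```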
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/language-support.md | Title: Language support - Bing Autosuggest API-+ description: A list of supported languages and regions for the Bing Autosuggest API. The following lists the languages supported by Bing Autosuggest API. ## See also -- [Azure AI services documentation page](../../ai-services/index.yml)-- [Azure Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)+- [Azure AI services documentation](../../ai-services/index.yml) +- [Azure AI services product information](https://azure.microsoft.com/services/cognitive-services/) |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing Autosuggest client library'-+ description: The Autosuggest API offers client libraries that makes it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/csharp.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and C#"-+ description: "Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and C#." |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/java.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Java"-+ description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Java. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/nodejs.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Node.js"-+ description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Node.js. |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/php.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and PHP"-+ description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and PHP. |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/python.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Python"-+ description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Python. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/ruby.md | Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Ruby"-+ description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Ruby. |
cognitive-services | Autosuggest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/tutorials/autosuggest.md | Title: "Tutorial: Getting Automatic suggestions Results using Bing Autosuggest API"-+ description: In this tutorial, you will build a web page that allows users to query the Bing Autosuggest API and displays the query results. |
cognitive-services | Call Endpoint Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-csharp.md | Title: "Quickstart: Call your Bing Custom Search endpoint using C# | Microsoft Docs"-+ description: "Use this quickstart to begin requesting search results from your Bing Custom Search instance in C#. " |
cognitive-services | Call Endpoint Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-java.md | Title: "Quickstart: Call your Bing Custom Search endpoint using Java | Microsoft Docs"-+ description: Use this quickstart to begin requesting search results from your Bing Custom Search instance in Java. |
cognitive-services | Call Endpoint Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-nodejs.md | Title: "Quickstart: Call your Bing Custom Search endpoint using Node.js | Microsoft Docs"-+ description: Use this quickstart to begin requesting search results from your Bing Custom Search instance using Node.js. |
cognitive-services | Call Endpoint Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-python.md | Title: "Quickstart: Call your Bing Custom Search endpoint using Python | Microsoft Docs"-+ description: Use this quickstart to begin requesting search results from your Bing Custom Search instance using Python. |
cognitive-services | Define Custom Suggestions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-custom-suggestions.md | Title: Define Custom Autosuggest suggestions - Bing Custom Search-+ description: Custom Autosuggest returns a list of suggested search query strings that are relevant to your search experience. |
cognitive-services | Define Your Custom View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-your-custom-view.md | Title: Configure your Bing Custom Search experience | Microsoft Docs-+ description: The portal lets you create a search instance that specifies the slices of the web; domains, subpages, and webpages. |
cognitive-services | Endpoint Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/endpoint-custom.md | Title: Bing Custom Search endpoint-+ description: Create tailored search experiences for topics that you care about. Users see search results tailored to the content they care about. |
cognitive-services | Get Images From Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-images-from-instance.md | Title: Get images from your custom view - Bing Custom Search-+ description: High-level overview about using Bing Custom Search to get images from your custom view of the Web. |
cognitive-services | Get Videos From Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-videos-from-instance.md | Title: Get videos from your custom view - Bing Custom Search-+ description: High-level overview about using Bing Custom Search to get videos from your custom view of the Web. |
cognitive-services | Hosted Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/hosted-ui.md | Title: Configure a hosted UI for Bing Custom Search | Microsoft Docs-+ description: Use this article to configure and integrate a hosted UI for Bing Custom Search. To configure a hosted UI for your web applications, follow these steps. As you m 6. Under **Additional Configurations**, provide values as appropriate for your app. These settings are optional. To see the effect of applying or removing them, see the preview pane on the right. Available configuration options are: -7. Enter the search subscription key or choose one from the dropdown list. The dropdown list is populated with keys from your Azure account's subscriptions. See [Cognitive Services API account](../cognitive-services-apis-create-account.md). +7. Enter the search subscription key or choose one from the dropdown list. The dropdown list is populated with keys from your Azure account's subscriptions. See [Azure AI services API account](../cognitive-services-apis-create-account.md). 8. If you enabled autosuggest, enter the autosuggest subscription key or choose one from the dropdown list. The dropdown list is populated with keys from your Azure account's subscriptions. Custom Autosuggest requires a specific subscription tier, see the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/bing-custom-search/). |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/language-support.md | Title: Language support - Bing Custom Search API-+ description: A list of supported languages and regions for the Bing Custom Search API. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/overview.md | Title: What is the Bing Custom Search API?-+ description: The Bing Custom Search API enables you to create tailored search experiences for topics that you care about. |
cognitive-services | Quick Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quick-start.md | Title: "Quickstart: Create a first Bing Custom Search instance"-+ description: Use this quickstart to create a custom Bing instance that can search domains and webpages that you define. |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quickstarts/client-libraries.md | Title: "Quickstart: Use the Bing Custom Search client library"-+ description: The Custom Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Search Your Custom View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/search-your-custom-view.md | Title: Search a custom view - Bing Custom Search-+ description: After you've configured your custom search experience, you can test it from within the Bing Custom Search portal. |
cognitive-services | Share Your Custom Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/share-your-custom-search.md | Title: Share your custom search - Bing Custom Search-+ description: Easily allow collaborative editing and testing of your instance by sharing it with members of your team. |
cognitive-services | Custom Search Web Page | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/tutorials/custom-search-web-page.md | Title: "Tutorial: Create a custom search web page - Bing Custom Search"-+ description: Learn how to configure a custom Bing search instance and integrate it into a web page with this tutorial. |
cognitive-services | Search For Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/search-for-entities.md | Title: Search for entities with the Bing Entity Search API-+ description: Use the Bing Entity Search API to extract and search for entities and places from search queries. |
cognitive-services | Sending Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/sending-requests.md | Title: "Sending search requests to the Bing Entity Search API"-+ description: The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. |
cognitive-services | Entity Search Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/entity-search-endpoint.md | Title: The Bing Entity Search API endpoint-+ description: The Bing Entity Search API has one endpoint that returns entities from the Web based on a query. These search results are returned in JSON. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/overview.md | Title: What is the Bing Entity Search API?-+ description: Learn details about the Bing Entity Search API and how to extract and search for entities and places from search queries. The Bing Entity Search API sends a search query to Bing and gets results that in The Bing Entity Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service using either the REST API, or the SDK. -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. 2. Send a request to the API, with a valid search query. 3. Process the API response by parsing the returned JSON message. |
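The three-step flow quoted in the entry above (create a resource with access to the Bing Search APIs, send a request with your key, parse the returned JSON) is the common pattern across the Bing Search APIs. Below is a minimal Python sketch of it using the third-party `requests` package; the entities endpoint path, the `q` and `mkt` parameters, and the `entities.value` response field are assumptions drawn from the Bing Entity Search v7 reference rather than from this changelog, so verify them against your own resource.

```python
# Minimal sketch of the request/parse workflow described above (assumed endpoint and fields).
import requests

SUBSCRIPTION_KEY = "YOUR_BING_ENTITY_SEARCH_KEY"  # key from your Azure AI services resource
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities"  # assumed v7 entities endpoint


def search_entities(query: str, market: str = "en-US") -> dict:
    """Send a search query and return the parsed JSON response."""
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    params = {"q": query, "mkt": market}
    response = requests.get(ENDPOINT, headers=headers, params=params)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    results = search_entities("Mount Rainier")
    # The entities answer, when present, carries a list of matching entities.
    for entity in results.get("entities", {}).get("value", []):
        print(entity.get("name"), "-", entity.get("description", "")[:80])
```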
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing Entity Search client library'-+ description: The Entity Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/csharp.md | Title: "Quickstart: Send a search request to the REST API using C# - Bing Entity Search"-+ description: "Use this quickstart to send a request to the Bing Entity Search REST API using C#, and receive a JSON response." |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/java.md | Title: "Quickstart: Send a search request to the REST API using Java - Bing Entity Search"-+ description: Use this quickstart to send a request to the Bing Entity Search REST API using Java, and receive a JSON response. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/nodejs.md | Title: "Quickstart: Send a search request to the REST API using Node.js - Bing Entity Search"-+ description: Use this quickstart to send a request to the Bing Entity Search REST API using Node.js and receive a JSON response. |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/php.md | Title: "Quickstart: Send a search request to the REST API using PHP - Bing Entity Search"-+ description: Use this quickstart to send a request to the Bing Entity Search REST API using PHP, and receive a JSON response. |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/python.md | Title: "Quickstart: Send a search request to the REST API using Python - Bing Entity Search"-+ description: Use this quickstart to send a request to the Bing Entity Search REST API using Python, and receive a JSON response. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/ruby.md | Title: "Quickstart: Send a search request to the REST API using Ruby - Bing Entity Search"-+ description: Use this quickstart to send a request to the Bing Entity Search REST API using Ruby, and receive a JSON response. |
cognitive-services | Rank Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/rank-results.md | Title: Using ranking to display answers - Bing Entity Search-+ description: Learn how to use ranking to display the answers that the Bing Entity Search API returns. |
cognitive-services | Tutorial Bing Entities Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/tutorial-bing-entities-search-single-page-app.md | Title: "Tutorial: Bing Entity Search single-page web app"-+ description: This tutorial shows how to use the Bing Entity Search API in a single-page Web application. |
cognitive-services | Bing Image Search Resource Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-search-resource-faq.md | Title: Frequently asked questions (FAQ) - Bing Image Search API-+ description: Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API. -Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure Cognitive Services on Azure. +Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure AI services on Azure. ## Response headers in JavaScript Is your question about a missing feature or functionality? Consider requesting o ## See also - [Stack Overflow: Cognitive Services](https://stackoverflow.com/questions/tagged/bing-api) + [Stack Overflow: Azure AI services](https://stackoverflow.com/questions/tagged/bing-api) |
cognitive-services | Bing Image Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-upgrade-guide-v5-to-v7.md | Title: Upgrade from Bing Image Search API v5 to v7-+ description: This upgrade guide describes changes between version 5 and version 7 of the Bing Image Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7. |
cognitive-services | Bing Image Search Get Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-get-images.md | Title: Get images from the web - Bing Image Search API-+ description: Use the Bing Image Search API to search for and get relevant images from the web. Host: api.cognitive.microsoft.com ## Bing Image Search response format -The response message from Bing contains an [Images](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#images) answer that contains a list of images that Cognitive Services determined to be relevant to the query. Each [Image](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#image) object in the list includes the following information about the image: the URL, its size, its dimensions, its encoding format, a URL to a thumbnail of the image, and the thumbnail's dimensions. +The response message from Bing contains an [Images](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#images) answer that contains a list of images that Azure AI services determined to be relevant to the query. Each [Image](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#image) object in the list includes the following information about the image: the URL, its size, its dimensions, its encoding format, a URL to a thumbnail of the image, and the thumbnail's dimensions. > [!NOTE] > * Images must be displayed in the order provided in the response. |
cognitive-services | Bing Image Search Sending Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-sending-queries.md | Title: Customize and suggest image search queries - Bing Image Search API-+ description: Learn about customizing the search queries you send to the Bing Image Search API. Use this article to learn how to customize queries and suggest search terms to s ## Suggest search terms -If your app has a search box where search terms are entered, you can use the [Bing Autosuggest API](../../bing-autosuggest/get-suggested-search-terms.md) to improve the experience. The API can display suggested search terms in real time. The API returns suggested query strings based on partial search terms and Cognitive Services. +If your app has a search box where search terms are entered, you can use the [Bing Autosuggest API](../../bing-autosuggest/get-suggested-search-terms.md) to improve the experience. The API can display suggested search terms in real time. The API returns suggested query strings based on partial search terms and Azure AI services. ## Pivot the query |
cognitive-services | Gif Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/gif-images.md | Title: Search for GIF images using the Bing Image Search API-+ description: The Bing Image Search API enables you to also search across the entire Web for the most relevant .gif images. |
cognitive-services | Image Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/image-insights.md | Title: Get image insights - Bing Image Search API-+ description: Learn how to use the Bing Image Search API to get more information about an image. |
cognitive-services | Image Search Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/image-search-endpoint.md | Title: Endpoints for the Bing Image Search API-+ description: The Image Search API includes three endpoints. Endpoint 1 returns images from the web. Endpoint 2 returns ImageInsights. Endpoint 3 returns trending images. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/language-support.md | Title: Language support - Bing Image Search API-+ description: Find out which countries/regions and languages are supported by the Bing Image Search API. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/overview.md | Title: What is the Bing Image Search API?-+ description: The Bing Image Search API enables you to use Bing's cognitive image search capabilities in your application. By sending user search queries with the API, you can get and display relevant and high-quality images similar to Bing Images. While the Bing Image Search API provides image-only search results, you can comb The Bing Image Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service using either the [REST API](./quickstarts/csharp.md), or the [SDK](./quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp). -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. 2. Send a request to the API, with a valid [search query](./concepts/bing-image-search-sending-queries.md). 3. Process the API response by parsing the returned JSON message. |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing Image Search client library'-+ description: The Image Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/csharp.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and C#"-+ description: "Use this quickstart to send image search requests to the Bing Image Search REST API using C#, and receive JSON responses." |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/java.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and Java"-+ description: Use this quickstart to send image search requests to the Bing Image Search REST API using Java, and receive JSON responses. documentationcenter: '' -Use this quickstart to learn how to send search requests to the Bing Image Search API in Azure Cognitive Services. This Java application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Java, the API is a RESTful web service compatible with most programming languages. +Use this quickstart to learn how to send search requests to the Bing Image Search API in Azure AI services. This Java application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Java, the API is a RESTful web service compatible with most programming languages. ## Prerequisites |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/nodejs.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and Node.js"-+ description: Use this quickstart to send image search requests to the Bing Image Search REST API using JavaScript, and receive JSON responses. documentationcenter: '' Use this quickstart to learn how to send search requests to the Bing Image Searc * The [JavaScript Request Library](https://github.com/request/request). -For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). +For more information, see [Azure AI services pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/php.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and PHP"-+ description: Use this quickstart to send image search requests to the Bing Image Search REST API using PHP, and receive JSON responses. documentationcenter: '' Although this application is written in PHP, the API is a RESTful Web service co * [PHP 5.6.x or later](https://php.net/downloads.php) -For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). +For more information, see [Azure AI services pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/python.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and Python"-+ description: Use this quickstart to send image search requests to the Bing Image Search REST API using Python, and receive JSON responses. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/ruby.md | Title: "Quickstart: Search for images using the Bing Image Search REST API and Ruby"-+ description: Use this quickstart to send image search requests to the Bing Image Search REST API using Ruby, and receive JSON responses. documentationcenter: '' Although this application is written in Ruby, the API is a RESTful Web service c * [The latest version of Ruby](https://www.ruby-lang.org/en/downloads/). -For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). +For more information, see [Azure AI services pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Create and initialize the application |
cognitive-services | Trending Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/trending-images.md | Title: Get trending images with the Bing Image Search API-+ description: Search for today's trending images from the web with the Bing Image Search API. |
cognitive-services | Tutorial Bing Image Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md | Title: "Tutorial: Create a single-page web app - Bing Image Search API"-+ description: The Bing Image Search API enables you to search the web for high-quality, relevant images. Use this tutorial to build a single-page web application that can send search queries to the API, and display the results within the webpage. |
cognitive-services | Tutorial Image Post | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/tutorial-image-post.md | Title: "Tutorial: Extract image details with the REST API and C# - Bing Image Search"-+ description: Use this tutorial to create a C# application that extracts image details using the Bing Image Search API. |
cognitive-services | Bing News Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/bing-news-upgrade-guide-v5-to-v7.md | Title: Upgrade Bing News Search API v5 to v7-+ description: Identifies the parts of your Bing News Search application that you need to update to use version 7. |
cognitive-services | Search For News | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/concepts/search-for-news.md | Title: Search for news with the Bing News Search API-+ description: Learn how to send search queries for general news, trending topics, and headlines. |
cognitive-services | Send Search Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/concepts/send-search-queries.md | Title: "Sending queries to the Bing News Search API"-+ description: The Bing News Search API enables you to search the web for relevant news items. Use this article to learn more about sending search queries to the API. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/csharp.md | Title: "Quickstart: Perform a news search with C# - Bing News Search REST API"-+ description: "Use this quickstart to send a request to the Bing News Search REST API using C#, and receive a JSON response." |
cognitive-services | Endpoint News | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/endpoint-news.md | Title: Bing News Search endpoints-+ description: This article provides a summary of the News search API endpoints; news, top news, and trending news. |
cognitive-services | Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/go.md | Title: "Quickstart: Get news using Bing News Search REST API and Go"-+ description: This quickstart uses the Go language to call the Bing News Search API. The results include names and URLs of news sources identified by the query string. |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/java.md | Title: "Quickstart: Perform a web search with Java - Bing Web Search REST API"-+ description: Use this quickstart to send a request to the Bing News Search REST API using Java, and receive a JSON response. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/language-support.md | Title: Language support - Bing News Search API-+ description: A list of natural languages, countries and regions that are supported by the Bing News Search API. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/nodejs.md | Title: "Quickstart: Perform a news search with Node.js - Bing News Search REST API"-+ description: Use this quickstart to send a request to the Bing News Search REST API using Node.js, and receive a JSON response. |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/php.md | Title: "Quickstart: Perform a news search with PHP and the Bing News Search REST API"-+ description: Use this quickstart to send a request to the Bing News Search REST API using PHP, and receive a JSON response. Although this application is written in PHP, the API is a RESTful Web service co [!INCLUDE [cognitive-services-bing-news-search-signup-requirements](../../../includes/cognitive-services-bing-news-search-signup-requirements.md)] -For more information, see [Cognitive Services Pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). +For more information, see [Azure AI services pricing - Bing Search API](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/). ## Run the application |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/python.md | Title: "Quickstart: Perform a news search with Python and the Bing News Search REST API"-+ description: Use this quickstart to send a request to the Bing News Search REST API using Python, and receive a JSON response. |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing News Search client library'-+ description: The News Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/ruby.md | Title: "Quickstart: Perform a news search with Ruby and the Bing News Search REST API"-+ description: Use this quickstart to send a request to the Bing News Search REST API using Ruby, and receive a JSON response. |
cognitive-services | Search The Web | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/search-the-web.md | Title: What is the Bing News Search API?-+ description: Learn how to use the Bing News Search API to search the web for current headlines across categories, including headlines and trending topics. While the Bing News Search API primarily finds and returns relevant news article The Bing News Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service using either the REST API, or the SDK. -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. 2. Send a request to the API, with a valid search query. 3. Process the API response by parsing the returned JSON message. |
cognitive-services | Tutorial Bing News Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/tutorial-bing-news-search-single-page-app.md | Title: "Tutorial: Create a single-page web app using the Bing News Search API"-+ description: Use this tutorial to build a single-page web application that can send search queries to the Bing News API, and display the results within the webpage. |
cognitive-services | Bing Spell Check Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/bing-spell-check-upgrade-guide-v5-to-v7.md | Title: Upgrade Bing Spell Check API v5 to v7-+ description: Identifies the parts of your Bing Spell Check application that you need to update to use version 7. |
cognitive-services | Sending Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/concepts/sending-requests.md | Title: Sending requests to the Bing Spell Check API-+ description: Learn about the Bing Spell Check modes, settings, and other information relating to the API. |
cognitive-services | Using Spell Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/concepts/using-spell-check.md | Title: Using the Bing Spell Check API-+ description: Learn about the Bing Spell Check modes, settings, and other information related to the API. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/language-support.md | Title: Language support - Bing Spell Check API-+ description: A list of natural languages supported by the Bing Spell Check API. Please note that to work with any other language than `en-US`, the `mkt` should ## See also -- [Cognitive Services Documentation page](../../ai-services/index.yml)-- [Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)+- [Azure AI services documentation](../../ai-services/index.yml) +- [Azure AI services product information](https://azure.microsoft.com/services/cognitive-services/) |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/overview.md | Title: What is the Bing Spell Check API?-+ description: Learn about the Bing Spell Check API, which uses machine learning and statistical machine translation for contextual spell checking. The Bing Spell Check API enables you to perform contextual grammar and spell che The Bing Spell Check API is easy to call from any programming language that can make HTTP requests and parse JSON responses. The service is accessible using the REST API or the Bing Spell Check SDKs. -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can create a free account. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can create a free account. 2. Send a request to the Bing Spell Check API. 3. Parse the JSON response. |
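For the Spell Check flow quoted in the entry above, here is a comparable hedged Python sketch. The v7 `spellcheck` endpoint, the `text` and `mode` parameters, and the `flaggedTokens` response array are assumptions based on the Bing Spell Check v7 reference, not on this changelog; confirm them for your subscription and market.

```python
# Minimal sketch, assuming the v7 spellcheck endpoint and response shape (see lead-in).
import requests

SUBSCRIPTION_KEY = "YOUR_BING_SPELL_CHECK_KEY"  # key from your Azure AI services resource
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck"  # assumed endpoint


def check_spelling(text: str, mode: str = "proof", market: str = "en-US") -> list:
    """Return (token, suggestions) pairs flagged by the service."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    params = {"mkt": market, "mode": mode}
    # The text to check is sent as form data in the request body.
    response = requests.post(ENDPOINT, headers=headers, params=params, data={"text": text})
    response.raise_for_status()
    body = response.json()
    return [
        (token["token"], [s["suggestion"] for s in token.get("suggestions", [])])
        for token in body.get("flaggedTokens", [])
    ]


if __name__ == "__main__":
    print(check_spelling("Hollo, wrld!"))
```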
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/csharp.md | Title: "Quickstart: Check spelling with the REST API and C# - Bing Spell Check"-+ description: "Get started using the Bing Spell Check REST API and C# to check spelling and grammar." |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/java.md | Title: "Quickstart: Check spelling with the REST API and Java - Bing Spell Check"-+ description: Get started using the Bing Spell Check REST API and Java to check spelling and grammar. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/nodejs.md | Title: "Quickstart: Check spelling with the REST API and Node.js - Bing Spell Check"-+ description: Get started using the Bing Spell Check REST API and Node.js to check spelling and grammar. |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/php.md | Title: "Quickstart: Check spelling with the REST API and PHP - Bing Spell Check"-+ description: This quickstart shows how a simple PHP application sends a request to the Bing Spell Check API and returns a list of suggested corrections. |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/python.md | Title: "Quickstart: Check spelling with the REST API and Python - Bing Spell Check"-+ description: Get started using the Bing Spell Check REST API and Python to check spelling and grammar. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/ruby.md | Title: "Quickstart: Check spelling with the REST API and Ruby - Bing Spell Check"-+ description: Get started using the Bing Spell Check REST API and Ruby to check spelling and grammar. |
cognitive-services | Sdk Quickstart Spell Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/sdk-quickstart-spell-check.md | Title: "Quickstart: Check spelling with the Bing Spell Check SDK for C#"-+ description: Get started using the Bing Spell Check REST API to check spelling and grammar. |
cognitive-services | Spellcheck | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/tutorials/spellcheck.md | Title: "Tutorial: Getting Spell Check Results using Bing Spell Check API"-+ description: Use this tutorial to build a web page that sends queries to the Bing Spell Check API, and displays the results. |
cognitive-services | Bing Video Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/bing-video-upgrade-guide-v5-to-v7.md | Title: Upgrade Bing Video Search API v5 to v7-+ description: Identifies the parts of your Bing Video Search application that you need to update to use version 7. |
cognitive-services | Get Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/concepts/get-videos.md | Title: "Search for videos using the Bing Video Search API"-+ description: The Bing Video Search API finds and returns relevant videos from the web, and provides several features for intelligent and focused video retrieval. |
cognitive-services | Sending Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/concepts/sending-requests.md | Title: "Send search requests to the Bing Video Search API"-+ description: This article describes the parameters and attributes of requests sent to the Bing Video Search API, as well as the JSON response object it returns. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/overview.md | Title: What is the Bing Video Search API?-+ description: Learn how to search for videos across the web, using the Bing Video Search API. The Bing Video Search API makes it easy to add video searching capabilities to y The Bing Video Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service using either the [REST API](./quickstarts/csharp.md), or the [SDK](./quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp). -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free. 2. Send a request to the API, with a valid search query. 3. Process the API response by parsing the returned JSON message. |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing Video Search client library'-+ description: The Video Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/csharp.md | Title: "Quickstart: Search for videos using the REST API and C# - Bing Video Search"-+ description: "Use this quickstart to send video search requests to the Bing Video Search REST API using C#." |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/java.md | Title: "Quickstart: Search for videos using the REST API and Java - Bing Video Search"-+ description: Use this quickstart to send video search requests to the Bing Video Search REST API using Java. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/nodejs.md | Title: "Quickstart: Search for videos using the REST API and Node.js - Bing Video Search"-+ description: Use this quickstart to send video search requests to the Bing Video Search REST API using JavaScript. |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/php.md | Title: "Quickstart: Search for videos using the REST API and PHP - Bing Video Search"-+ description: Use this quickstart to send video search requests to the Bing Video Search REST API using PHP |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/python.md | Title: "Quickstart: Search for videos using the REST API and Python - Bing Video Search"-+ description: Use this quickstart to send video search requests to the Bing Video Search REST API using Python. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/ruby.md | Title: "Quickstart: Search for videos using the REST API and Ruby - Bing Video Search"-+ description: Use this quickstart to send video search requests to the Bing Video Search REST API using Ruby. |
cognitive-services | Trending Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/trending-videos.md | Title: Search the web for trending videos using the Bing Video Search API-+ description: Learn how to use the Bing Video Search API to search the web for trending videos. |
cognitive-services | Tutorial Bing Video Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/tutorial-bing-video-search-single-page-app.md | Title: "Tutorial: Build a single-page Bing Video Search app"-+ description: This tutorial explains how to use the Bing Video Search API in a single-page Web application. |
cognitive-services | Video Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/video-insights.md | Title: Get video insights using the Bing Video Search API-+ description: Learn how to use the Bing Video Search API to get more information about videos, such as related videos. |
cognitive-services | Autosuggest Bing Search Terms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/autosuggest-bing-search-terms.md | Title: Autosuggest search terms - Bing Web Search API-+ description: Pair the Bing Web Search API with the Bing Autosuggest API to provide users with an enhanced search experience. |
cognitive-services | Bing Api Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-api-comparison.md | Title: What are the Bing Search APIs?-+ description: Use this article to learn about the Bing Search APIs, and how you can enable cognitive internet searches in your apps and services. |
cognitive-services | Bing Web Stats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-web-stats.md | Title: Add analytics to the Bing Web Search API-+ description: Bing Statistics provides analytics to the Bing Image Search API. Analytics include call volume, top query strings, geographic distribution, and more. |
cognitive-services | Bing Web Upgrade Guide V5 To V7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md | Title: Upgrade from API v5 to v7 - Bing Web Search API-+ description: Determine which parts of your application require updates to use the Bing Web Search v7 APIs. |
cognitive-services | Csharp Ranking Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/csharp-ranking-tutorial.md | Title: Using rank to display search results-+ description: Shows how to use the Bing RankingResponse answer to display search results in rank order. |
cognitive-services | Filter Answers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/filter-answers.md | Title: How to filter search results - Bing Web Search API-+ description: You can filter the types of answers that Bing includes in the response (for example images, videos, and news) by using the 'responseFilter' query parameter. |
cognitive-services | Hit Highlighting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/hit-highlighting.md | Title: How to use decoration markers to highlight text - Bing Web Search API-+ description: Learn how to use text decorations and hit highlighting in your search results using the Bing Web Search API. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/language-support.md | Title: Language support - Bing Web Search API-+ description: A list of natural languages, countries and regions that are supported by the Bing Web Search API. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/overview.md | Title: What is the Bing Web Search API?-+ description: The Bing Web Search API is a RESTful service that provides instant answers to web search queries. Configure results to include web pages, images, videos, news, and more. Results are provided as JSON and based on search relevance and your Bing Web Search subscriptions. -This API is optimal for applications that need access to all content that is relevant to a user's search query. If you're building an application that requires only a specific type of result, consider using the [Bing Image Search API](../bing-image-search/overview.md), [Bing Video Search API](../bing-video-search/overview.md), or [Bing News Search API](../bing-news-search/search-the-web.md). See [Cognitive Services APIs](../../ai-services/index.yml) for a complete list of Bing Search APIs. +This API is optimal for applications that need access to all content that is relevant to a user's search query. If you're building an application that requires only a specific type of result, consider using the [Bing Image Search API](../bing-image-search/overview.md), [Bing Video Search API](../bing-video-search/overview.md), or [Bing News Search API](../bing-news-search/search-the-web.md). See [Azure AI services APIs](../../ai-services/index.yml) for a complete list of Bing Search APIs. Want to see how it works? Try our [Bing Web Search API demo](https://azure.microsoft.com/services/cognitive-services/bing-web-search-api/). |
cognitive-services | Paging Search Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/paging-search-results.md | Title: How to page through search results - Bing Search APIs-+ description: Learn how to page through search results from the Bing Search APIs. |
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/client-libraries.md | Title: "Quickstart: Use a Bing Web Search client library"-+ description: The Bing Web Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/csharp.md | Title: "Quickstart: Perform a web search with C# - Bing Web Search REST API"-+ description: "Use this quickstart to send requests to the Bing Web Search REST API using C#, and receive a JSON response." |
cognitive-services | Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/go.md | Title: "Quickstart: Perform a web search with Go - Bing Web Search REST API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using Go, and receive a JSON response |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/java.md | Title: "Quickstart: Use Java to call the Bing Web Search REST API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using Java, and receive a JSON response |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/nodejs.md | Title: "Quickstart: Perform a web search with Node.js - Bing Web Search REST API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using Node.js, and receive a JSON response function bingWebSearch(query) { ## Get the query -Let's look at the program's arguments to find the query. The first argument is the path to the node, the second is our filename, and the third is your query. If the query is absent, a default query of "Microsoft Cognitive Services" is used. +Let's look at the program's arguments to find the query. The first argument is the path to the node, the second is our filename, and the third is your query. If the query is absent, a default query of "Microsoft Azure AI services" is used. ```javascript const query = process.argv[2] || 'Microsoft Cognitive Services' |
cognitive-services | Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/php.md | Title: "Quickstart: Perform a search with PHP - Bing Web Search API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using PHP, and receive a JSON response |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/python.md | Title: "Quickstart: Perform a search with Python - Bing Web Search API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using Python, and receive a JSON response |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/ruby.md | Title: "Quickstart: Perform a web search with Ruby - Bing Web Search API"-+ description: Use this quickstart to send requests to the Bing Web Search REST API using Ruby, and receive a JSON response |
cognitive-services | Rank Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/rank-results.md | Title: How to use rankings to display search results - Bing Web Search API-+ description: Learn how to use ranking to display search results from the Bing Web Search API. |
cognitive-services | Resize And Crop Thumbnails | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/resize-and-crop-thumbnails.md | Title: Resize and crop image thumbnails - Bing Web Search API-+ description: Some answers from the Bing Search APIs include URLs to thumbnail images served by Bing, which you can resize and crop, and may contain query parameters. |
cognitive-services | Sdk Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/sdk-samples.md | Title: Bing Web Search SDK samples-+ description: Use the Bing Web Search SDK to add search capabilities to your Python, Node.js, C#, or Java application. |
cognitive-services | Search Responses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/search-responses.md | Title: Bing Web Search API response structure and answer types -+ description: When you send Bing Web Search a search request, it returns a `SearchResponse` object in the response body. |
cognitive-services | Throttling Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/throttling-requests.md | Title: Throttling requests - Bing Web Search API-+ description: The service and your subscription type determine the number of queries per second (QPS) that you can make. |
cognitive-services | Tutorial Bing Web Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/tutorial-bing-web-search-single-page-app.md | Title: "Tutorial: Create a single-page web app - Bing Web Search API"-+ description: This single-page app demonstrates how the Bing Web Search API can be used to retrieve, parse, and display relevant search results in a single-page app. This sample app can: > * Manage subscription keys > * Handle errors -To use this app, an [Azure Cognitive Services account](../cognitive-services-apis-create-account.md) with Bing Search APIs is required. +To use this app, an [Azure AI services account](../cognitive-services-apis-create-account.md) with Bing Search APIs is required. ## Prerequisites |
cognitive-services | Use Display Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/use-display-requirements.md | Title: Use and display requirements for the Bing Search APIs-+ description: The requirements for displaying search results from the Bing Search APIs in your applications. |
cognitive-services | Web Search Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md | Title: Web search endpoint-+ description: To get web search results, send a `GET` request to the following endpoint. The headers and URL parameters define further specifications. |
cognitive-services | Local Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-categories.md | Title: Search categories for the Bing Local Business Search API-+ description: Use this article to learn how to specify search categories for the Bing Local Business search API endpoint. |
cognitive-services | Local Search Query Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-search-query-response.md | Title: Sending and using API queries and responses - Bing Local Business Search-+ description: Use this article to learn how to send and use search queries with the Bing Local Business Search API. |
cognitive-services | Local Search Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-search-reference.md | Title: Bing Local Business Search API v7 Reference-+ description: This article provides technical details about the response objects, and the query parameters and headers that affect the search results. The following are the headers that a request and response may include. |<a name="acceptlanguage"></a>Accept-Language|Optional request header.<br /><br /> A comma-delimited list of languages to use for user interface strings. The list is in decreasing order of preference. For more information, including expected format, see [RFC2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).<br /><br /> This header and the [setLang](#setlang) query parameter are mutually exclusive—do not specify both.<br /><br /> If you set this header, you must also specify the cc query parameter. To determine the market to return results for, Bing uses the first supported language it finds from the list and combines it with the `cc` parameter value. If the list does not include a supported language, Bing finds the closest language and market that supports the request or it uses an aggregated or default market for the results. To determine the market that Bing used, see the BingAPIs-Market header.<br /><br /> Use this header and the `cc` query parameter only if you specify multiple languages. Otherwise, use the [mkt](#mkt) and [setLang](#setlang) query parameters.<br /><br /> A user interface string is a string that's used as a label in a user interface. There are few user interface strings in the JSON response objects. Any links to Bing.com properties in the response objects apply the specified language.| |<a name="market"></a>BingAPIs-Market|Response header.<br /><br /> The market used by the request. The form is \<languageCode\>-\<countryCode\>. For example, en-US.| |<a name="traceid"></a>BingAPIs-TraceId|Response header.<br /><br /> The ID of the log entry that contains the details of the request. When an error occurs, capture this ID. If you are not able to determine and resolve the issue, include this ID along with the other information that you provide the Support team.| -|<a name="subscriptionkey"></a>Ocp-Apim-Subscription-Key|Required request header.<br /><br /> The subscription key that you received when you signed up for this service in [Cognitive Services](https://www.microsoft.com/cognitive-services/).| +|<a name="subscriptionkey"></a>Ocp-Apim-Subscription-Key|Required request header.<br /><br /> The subscription key that you received when you signed up for this service in [Azure AI services](https://www.microsoft.com/cognitive-services/).| |<a name="pragma"></a>Pragma|Optional request header<br /><br /> By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). |<a name="useragent"></a>User-Agent|Optional request header.<br /><br /> The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are encouraged to always specify this header.<br /><br /> The user-agent should be the same string that any commonly used browser sends. 
For information about user agents, see [RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).<br /><br /> The following are examples of user-agent strings.<br /><ul><li>Windows Phone—Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53</li></ul>| |<a name="clientid"></a>X-MSEdge-ClientID|Optional request and response header.<br /><br /> Bing uses this header to provide users with consistent behavior across Bing API calls. Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID's search history, providing a richer experience for the user.<br /><br /> Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID. The relevance improvements help with better quality of results delivered by Bing APIs and in turn enable higher click-through rates for the API consumer.<br /><br /> **IMPORTANT:** Although optional, you should consider this header required. Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs.<br /><br /> The following are the basic usage rules that apply to this header.<br /><ul><li>Each user that uses your application on the device must have a unique, Bing generated client ID.<br /><br/>If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device.<br /><br/></li><li>Use the client ID for each Bing API request that your app makes for this user on the device.<br /><br/></li><li>**ATTENTION:** You must ensure that this Client ID is not linkable to any authenticatable user account information.</li><br/><li>Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID.<br /><br/>The next time the user uses your app on that device, get the client ID that you persisted.</li></ul><br /> **NOTE:** Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device.<br /><br /> **NOTE:** If you include the X-MSEdge-ClientID, you must not include cookies in the request.| |
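The `X-MSEdge-ClientID` guidance quoted above reduces to one rule: send back the ID that Bing last returned. The Python sketch below illustrates that round trip; the file-based storage, the endpoint path, and the query parameters are assumptions chosen for the example, not details taken from the reference.

```python
# Illustrative sketch of persisting X-MSEdge-ClientID across requests (assumptions in lead-in).
import pathlib
import requests

SUBSCRIPTION_KEY = "YOUR_BING_SEARCH_KEY"
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search"  # assumed endpoint
CLIENT_ID_FILE = pathlib.Path("bing_client_id.txt")  # stand-in for a cookie or device storage


def call_bing(query: str) -> dict:
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    # Send the previously returned client ID on every call after the first one.
    if CLIENT_ID_FILE.exists():
        headers["X-MSEdge-ClientID"] = CLIENT_ID_FILE.read_text().strip()
    response = requests.get(ENDPOINT, headers=headers, params={"q": query, "mkt": "en-US"})
    response.raise_for_status()
    # Capture the client ID if Bing returned one, so later calls stay on the same flight.
    client_id = response.headers.get("X-MSEdge-ClientID")
    if client_id:
        CLIENT_ID_FILE.write_text(client_id)
    return response.json()
```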
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/overview.md | Title: What is the Bing Local Business Search API?-+ description: The Bing Local Business Search API is a RESTful service that enables your applications to find information about local places and businesses based on search queries. The Bing Local Business Search API is a RESTful service that enables your applic ## Workflow Call the Bing Local Business Search API from any programming language that can make HTTP requests and parse JSON responses. This service is accessible using the REST API. -1. Create a [Cognitive Services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/cognitive-services/). +1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/cognitive-services/). 2. URL encode your search terms for the `q=""` query parameter. For example, `q=nearby+restaurant` or `q=nearby%20restaurant`. Set pagination as well, if needed. 3. Send a [request to the Bing Local Business Search API](quickstarts/local-quickstart.md) 4. Parse the JSON response |
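A compact sketch of the workflow steps above, under stated assumptions: `requests` URL-encodes the `q` parameter (step 2), sends the request (step 3), and the JSON body is then parsed (step 4). The endpoint URL, key placeholder, and the `places` answer shape are assumptions for illustration, not taken from the article.

```python
import requests

ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search"  # assumed endpoint
headers = {"Ocp-Apim-Subscription-Key": "YOUR-SUBSCRIPTION-KEY"}
params = {"q": "nearby restaurant", "mkt": "en-US"}  # encoded to q=nearby%20restaurant on the wire

data = requests.get(ENDPOINT, headers=headers, params=params).json()
for place in data.get("places", {}).get("value", []):  # assumed response shape
    print(place.get("name"), place.get("telephone"))
```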
cognitive-services | Local Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-quickstart.md | Title: "Quickstart - Send a query to the API in C# using Bing Local Business Search"-+ description: "Use this quickstart to begin sending requests in C# to the Bing Local Business Search API, which is an Azure Cognitive Service." |
cognitive-services | Local Search Java Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-java-quickstart.md | Title: Quickstart - Send a query to the API using Java - Bing Local Business Search-+ description: Use this quickstart to begin sending requests in Java to the Bing Local Business Search API, which is an Azure Cognitive Service. |
cognitive-services | Local Search Node Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-node-quickstart.md | Title: Quickstart - Send a query to the API using Node.js - Bing Local Business Search-+ description: Use this quickstart to begin sending requests to the Bing Local Business Search API, which is an Azure Cognitive Service. |
cognitive-services | Local Search Python Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-python-quickstart.md | Title: Quickstart - Send a query to the API in Python - Bing Local Business Search-+ description: Use this quickstart to start using the Bing Local Business Search API in Python. |
cognitive-services | Specify Geographic Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/specify-geographic-search.md | Title: Use geographic boundaries to filter results from the Bing Local Business Search API-+ description: Use this article to learn how to filter search results from the Bing Local Business Search API. |
cognitive-services | Bing Insights Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/bing-insights-usage.md | Title: Examples of Bing insights - Bing Visual Search-+ description: This article contains examples of how Bing Visual Search might use and display image insights on Bing.com. |
cognitive-services | Sending Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/concepts/sending-queries.md | Title: "Sending search queries to the Bing Visual Search API"-+ description: This article describes the parameters and attributes of requests sent to the Bing Visual Search API, as well as the response object. The following are the headers that your request should specify. The `Content-Typ | <a name="contenttype"></a>Content-Type | | | <a name="market"></a>BingAPIs-Market | Response header.<br /><br /> The market used by the request. The form is \<languageCode\>-\<countryCode\>. For example, en-US. | | <a name="traceid"></a>BingAPIs-TraceId | Response header.<br /><br /> The ID of the log entry that contains the details of the request. When an error occurs, capture this ID. If you are not able to determine and resolve the issue, include this ID along with the other information that you provide the Support team. |-| <a name="subscriptionkey"></a>Ocp-Apim-Subscription-Key | Required request header.<br /><br /> The subscription key that you received when you signed up for this service in [Cognitive Services](https://www.microsoft.com/cognitive-services/). | +| <a name="subscriptionkey"></a>Ocp-Apim-Subscription-Key | Required request header.<br /><br /> The subscription key that you received when you signed up for this service in [Azure AI services](https://www.microsoft.com/cognitive-services/). | | <a name="pragma"></a>Pragma | | | <a name="useragent"></a>User-Agent | Optional request header.<br /><br /> The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are encouraged to always specify this header.<br /><br /> The user-agent should be the same string that any commonly used browser sends. For information about user-agents, see [RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).<br /><br /> The following are examples of user-agent strings.<br /><ul><li>Windows Phone—Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53</li></ul> | | <a name="clientid"></a>X-MSEdge-ClientID | Optional request and response header.<br /><br /> Bing uses this header to provide users with consistent behavior across Bing API calls. Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID's search history, providing a richer experience for the user.<br /><br /> Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID. The relevance improvements help with better quality of results delivered by Bing APIs and in turn enables higher click-through rates for the API consumer.<br /><br /> **IMPORTANT:** Although optional, you should consider this header required. 
Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs.<br /><br /> The following are the basic usage rules that apply to this header.<br /><ul><li>Each user that uses your application on the device must have a unique, Bing generated client ID.<br /><br/>If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device.<br /><br/></li><li>**ATTENTION:** You must ensure that this Client ID is not linkable to any authenticated user account information.</li><li>Use the client ID for each Bing API request that your app makes for this user on the device.<br /><br/></li><li>Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID.<br /><br/>The next time the user uses your app on that device, get the client ID that you persisted.</li></ul><br /> **NOTE:** Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device.<br /><br /> **NOTE:** If you include the X-MSEdge-ClientID, you must not include cookies in the request. | |
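To make the header and `Content-Type` guidance above concrete, here is a minimal Python sketch of a Visual Search request that uploads an image as multipart form data; `requests` generates the multipart boundary, so `Content-Type` is not set by hand. The endpoint URL, the `image` form-field name, and `sample.jpg` are assumptions for illustration, not taken from the article.

```python
import requests

SUBSCRIPTION_KEY = "YOUR-SUBSCRIPTION-KEY"  # assumed placeholder for your resource key
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch"  # assumed endpoint

headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
with open("sample.jpg", "rb") as image_file:
    # requests builds the multipart/form-data body and boundary, so Content-Type is not set manually.
    response = requests.post(ENDPOINT, headers=headers, files={"image": ("sample.jpg", image_file)})

response.raise_for_status()
print("Market used:", response.headers.get("BingAPIs-Market"))
print("Trace ID:", response.headers.get("BingAPIs-TraceId"))
```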
cognitive-services | Default Insights Tag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/default-insights-tag.md | Title: Default insights tag - Bing Visual Search-+ description: Provides details about the default insights that Bing Visual Search returns about an image. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/language-support.md | Title: Language support - Bing Visual Search API-+ description: A list of natural languages, countries and regions that are supported by the Bing Visual Search API. The Bing Visual Search API supports more than three dozen countries/regions, many with more than one language. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/overview.md | Title: What is the Bing Visual Search API?-+ description: Bing Visual Search provides details or insights about an image such as similar images or shopping sources. Bing Visual Search results also include bounding boxes for regions of interest i The Bing Visual Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use either the REST API or the SDK for the service. -1. Create a [Cognitive Services account](../cognitive-services-apis-create-account.md) to access the Bing Search APIs. If you don't have an Azure subscription, you can [create an account for free](https://azure.microsoft.com/free/cognitive-services/). +1. Create an [Azure AI services account](../cognitive-services-apis-create-account.md) to access the Bing Search APIs. If you don't have an Azure subscription, you can [create an account for free](https://azure.microsoft.com/free/cognitive-services/). 2. Send a request to the API with a valid search query. 3. Process the API response by parsing the returned JSON message. |
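Step 3 of this workflow (parsing the returned JSON) might look like the following sketch; the `tags` > `actions` > `data.value` layout is the typical Visual Search response shape, and the hand-built `sample` payload is purely illustrative.

```python
def similar_image_urls(insights: dict) -> list:
    """Collect contentUrl values from VisualSearch actions in a parsed response body."""
    urls = []
    for tag in insights.get("tags", []):
        for action in tag.get("actions", []):
            if action.get("actionType") == "VisualSearch":
                for image in action.get("data", {}).get("value", []):
                    urls.append(image.get("contentUrl"))
    return urls

# Tiny hand-built payload in the same assumed shape, standing in for a real API response:
sample = {"tags": [{"actions": [{"actionType": "VisualSearch",
                                 "data": {"value": [{"contentUrl": "https://example.com/a.jpg"}]}}]}]}
print(similar_image_urls(sample))
```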
cognitive-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/client-libraries.md | Title: 'Quickstart: Use the Bing Visual Search client library'-+ description: The Visual Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results. |
cognitive-services | Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/csharp.md | Title: "Quickstart: Get image insights using the REST API and C# - Bing Visual Search"-+ description: "Learn how to upload an image using the Bing Visual Search API and C#, and then get insights about the image." |
cognitive-services | Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/go.md | Title: "Quickstart: Get image insights using the REST API and Go - Bing Visual Search"-+ description: Learn how to upload an image using the Bing Visual Search API and Go, and then get insights about the image. |
cognitive-services | Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/java.md | Title: "Quickstart: Get image insights using the REST API and Java - Bing Visual Search"-+ description: Learn how to upload an image to the Bing Visual Search API and get insights about it. |
cognitive-services | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/nodejs.md | Title: "Quickstart: Get image insights using the REST API and Node.js - Bing Visual Search"-+ description: Learn how to upload an image using the Bing Visual Search API and Node.js, and then get insights about the image. |
cognitive-services | Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/python.md | Title: "Quickstart: Get image insights using the REST API and Python - Bing Visual Search"-+ description: Learn how to upload an image using the Bing Visual Search API and Python, and then get insights about the image. |
cognitive-services | Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/ruby.md | Title: "Quickstart: Get image insights using the REST API and Ruby - Bing Visual Search"-+ description: Learn how to upload an image using the Bing Visual Search API and Ruby, and then get insights about the image. |
cognitive-services | Tutorial Bing Visual Search Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-bing-visual-search-single-page-app.md | Title: " Build a single-page Web app - Bing Visual Search"-+ description: Learn how to integrate the Bing Visual Search API into a single-page Web application. |
cognitive-services | Tutorial Visual Search Crop Area Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-crop-area-results.md | Title: "Tutorial: Crop an image with the Bing Visual Search SDK" description: Use the Bing Visual Search SDK to get insights from specific areas on an image. -+ |
cognitive-services | Tutorial Visual Search Image Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-image-upload.md | Title: "Tutorial: How to upload image using the Bing Visual Search API"-+ description: Learn how to upload an image to Bing, get insights about it, display the response. |
cognitive-services | Tutorial Visual Search Insights Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-insights-token.md | Title: "Find similar images from previous searches using image insights tokens and the Bing Visual Search API"-+ description: Use the Bing Visual Search client library to get URLs of images from previous searches. |
cognitive-services | Use Insights Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/use-insights-token.md | Title: Using insights token - Bing Visual Search-+ description: Shows how to use an image's insight token with Bing Visual Search API to get insights about an image. if __name__ == '__main__': [Create a Visual Search single-page web app](tutorial-bing-visual-search-single-page-app.md) [What is the Bing Visual Search API?](overview.md) -[Try Cognitive Services](https://aka.ms/bingvisualsearchtryforfree) +[Try Azure AI services](https://aka.ms/bingvisualsearchtryforfree) [Images - Visual Search](/rest/api/cognitiveservices/bingvisualsearch/images/visualsearch) |
communication-services | Azure Communication Services Azure Cognitive Services Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md | Title: Connect Azure Communication Services to Azure Cognitive Services + Title: Connect Azure Communication Services to Azure AI services -description: Provides a how-to guide for connecting ACS to Azure Cognitive Services. +description: Provides a how-to guide for connecting ACS to Azure AI services. -# Connect Azure Communication Services with Azure Cognitive Services +# Connect Azure Communication Services with Azure AI services >[!IMPORTANT] >Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.-All this is possible with one click, where enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Cognitive Services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Azure Active Directory authentication. +All this is possible with one click, where enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Azure AI services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Azure Active Directory authentication. -BYO Cognitive Services can be easily integrated into any application regardless of the programming language. When creating an Azure Resource in the Azure portal, enable the BYO option and provide the URL to the Cognitive Services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution. +BYO Azure AI services can be easily integrated into any application regardless of the programming language. When creating an Azure Resource in the Azure portal, enable the BYO option and provide the URL to the Azure AI services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution. > [!NOTE]-> This integration is only supported in limited regions for Azure Cognitive Services. For more information about which regions are supported, see the limitations section at the bottom of this document. It is also recommended that when you create a new Azure Cognitive Service resource, you create a Multi-service Cognitive Service resource. +> This integration is only supported in limited regions for Azure AI services. For more information about which regions are supported, see the limitations section at the bottom of this document. It is also recommended that when you create a new Azure Cognitive Service resource, you create a Multi-service Cognitive Service resource. 
## Common use cases ### Build applications that can play and recognize speech -With the ability to connect your Cognitive Services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Cognitive Services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Cognitive services that are bespoke to your domain and region through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience. +With the ability to connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Azure AI services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience. ## Run time flow [![Run time flow](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox) ## Azure portal experience-You can also configure and bind your Communication Services and Cognitive Services through the Azure portal. +You can also configure and bind your Communication Services and Azure AI services through the Azure portal. ### Add a Managed Identity to the Azure Communication Services Resource You can also configure and bind your Communication Services and Cognitive Servic [![Enable managed identity](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox) -### Option 1: Add role from Azure Cognitive Services in the Azure portal +<a name='option-1-add-role-from-azure-cognitive-services-in-the-azure-portal'></a> ++### Option 1: Add role from Azure AI services in the Azure portal 1. Navigate to your Azure Cognitive Service resource. 2. Select the "Access control (IAM)" tab. 3. Click the "+ Add" button. You can also configure and bind your Communication Services and Cognitive Servic 5. Choose the "Cognitive Services User" role to assign, then click "Next". -[![Cognitive Services user](./media/cognitive-service-user.png)](./media/cognitive-service-user.png#lightbox) +[![Cognitive Services User](./media/cognitive-service-user.png)](./media/cognitive-service-user.png#lightbox) 6. For the field "Assign access to" choose the "User, group or service principal". 7. Press "+ Select members" and a side tab opens. You can also configure and bind your Communication Services and Cognitive Servic Your Communication Service has now been linked to your Azure Cognitive Service resource. 
-## Azure Cognitive Services regions supported +<a name='azure-cognitive-services-regions-supported'></a> ++## Azure AI services regions supported -This integration between Azure Communication Services and Azure Cognitive Services is only supported in the following regions at this point in time: +This integration between Azure Communication Services and Azure AI services is only supported in the following regions at this point in time: - westus - westus2 - westus3 |
communication-services | Play Ai Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-ai-action.md | Title: Play audio in call -description: Conceptual information about playing audio in a call using Call Automation and Azure Cognitive Services +description: Conceptual information about playing audio in a call using Call Automation and Azure AI services +- Regular text that can be converted into speech output through the integration with Azure AI services. -You can leverage the newly announced integration between [Azure Communication Services and Azure Cognitive Services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). +You can leverage the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). > [!NOTE] > ACS currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../../articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md). |
communication-services | Recognize Ai Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-ai-action.md | Title: Recognize Action -description: Conceptual information about gathering user voice input using Call Automation and Azure Cognitive Services +description: Conceptual information about gathering user voice input using Call Automation and Azure AI services With the release of ACS Call Automation Recognize action, developers can now enh **Voice recognition with speech-to-text** -[Azure Communication Services integration with Azure Cognitive Services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time to transcribe spoken word into text. Out of the box, Microsoft utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pre-trained with dialects and phonetics representing a variety of common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). +[Azure Communication Services integration with Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time to transcribe spoken word into text. Out of the box, Microsoft utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pre-trained with dialects and phonetics representing a variety of common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). **DTMF** |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | For customers that use Virtual appointments, refer to our Teams Interoperability - The maximum number of participants allowed in a chat thread is 250. - The maximum message size allowed is approximately 28 KB. - For chat threads with more than 20 participants, read receipts and typing indicator features are not supported.+- For Teams Interop scenarios, it is the number of ACS users, not Teams users, that must be below 20 for the read receipts and typing indicator features to be supported. ## Chat architecture One way to achieve this is by having your trusted service act as a participant o This way, the message history contains both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../ai-services/translator/quickstart-text-rest-api.md) to understand how to use AI APIs to translate text to different languages. ## Next steps |
communication-services | Exception Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/exception-policy.md | |
communication-services | Matching Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md | This document describes the registration of workers, the submission of jobs and ## Worker Registration -Before a worker can receive offers to service a job, it must be registered first by setting `availableForOffers` to true. Next, we need to specify which queues the worker listens on and which channels it can handle. Once registered, you receive a [RouterWorkerRegistered](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered) event from Event Grid. +Before a worker can receive offers to service a job, it must be registered first by setting `availableForOffers` to true. Next, we need to specify which queues the worker listens on and which channels it can handle. Once registered, you receive a [RouterWorkerRegistered](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered) event from Event Grid and the worker's status is changed to `active`. -In the following example, we register a worker to +In the following example, we register a worker to: - Listen on `queue-1` and `queue-2` - Be able to handle both the voice and chat channels. In this case, the worker could either take a single `voice` job at one time or two `chat` jobs at the same time. This setting is configured by specifying the total capacity of the worker and assigning a cost per job for each channel. client.update_worker(worker_id = "worker-1", router_worker = RouterWorker(availa ::: zone pivot="programming-language-java" ```java-client.updateWorker(new UpdateWorkerOptions("worker-1").setAvailableForOffers(true)); +client.updateWorker(new UpdateWorkerOptions("worker-1").setAvailableForOffers(false)); ``` ::: zone-end > [!NOTE]-> If a worker is registered and idle for more than 7 days, it'll be automatically deregistered. +> If a worker is registered and idle for more than 7 days, it'll be automatically deregistered. Once deregistered, the worker's status is `draining` if one or more jobs are still assigned, or `inactive` if no jobs are assigned. <!-- LINKS --> [subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md |
communication-services | Router Rule Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md | |
communication-services | Worker Capacity Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/worker-capacity-concepts.md | |
communication-services | Teams Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md | Title: Communication as Teams user + Title: Communication as a Microsoft 365 user description: This article discusses how to integrate communication as Teams user with Azure Communication Services and Graph API. -# Communication as Teams user +# Communication as a Microsoft 365 user -You can use Azure Communication Services and Graph API to integrate communication as Teams users into your products. Teams users can communicate with other people in and outside their organization. The benefits for enterprises are: +You can use Azure Communication Services and Graph API to integrate communication as Microsoft 365 users into your products. Microsoft 365 users can communicate with other people in and outside their organization. The benefits for enterprises are: - No requirement to download Teams desktop, mobile or web clients for Teams users - Teams users don't lose context by switching between applications for day-to-day work and Teams client for communication - Teams is a single source for chat messages and call history within the organization Teams users can join the Teams meeting experience, manage calls, and manage chat Find more details in the following articles: - [Teams interoperability](./teams-interop.md) - [Issue a Teams access token](../quickstarts/manage-teams-identity.md)-- [Start a call to Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)+- [Start a call to Teams user as a Microsoft 365 user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md) |
communication-services | User Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/user-experience.md | + + Title: Build excellent user experience with Azure Communication Services +description: Build excellent user experience with Azure Communication Services +++++ Last updated : 07/13/2023++++++# User experience +Developers strive to bring the best user experience to their users. Azure Communication Services is integrating the best practices into the UI library, but if you are building your own user interface, here are some tools to help you achieve the best user experience. +++## Hide buttons that are not enabled +Developers building their audio and video experience with the Azure Communication Services Calling SDK can find that some actions available in the Calling SDK are unavailable for a given call type, role, meeting, or tenant. This leads to undesired situations where users see a button to, for example, share their screen, but the action fails because the user isn't allowed to share the screen. The Capability API is the tool that helps you render only the actions that apply to the user in the current context. You can learn more about [Capability APIs here](../../how-tos/calling-sdk/capabilities.md). ++## Next steps +- Learn more about [Capability APIs](../../how-tos/calling-sdk/capabilities.md). |
communication-services | Accept Decline Offer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/accept-decline-offer.md | This guide lays out the steps you need to take to observe a Job Router offer. It ## Accept job offers -After you create a job, observe the [worker offer-issued event](subscribe-events.md#microsoftcommunicationrouterworkerofferissued), which contains the worker ID and the job offer ID. The worker can accept job offers by using the SDK. Once the offer is accepted, the job will be assigned to the worker. +After you create a job, observe the [worker offer-issued event](subscribe-events.md#microsoftcommunicationrouterworkerofferissued), which contains the worker ID and the job offer ID. The worker can accept job offers by using the SDK. Once the offer is accepted, the job is assigned to the worker, and the job's status is updated to `assigned`. ::: zone pivot="programming-language-csharp" ```csharp // Event handler logic omitted-await client.AcceptJobOfferAsync(offerIssuedEvent.Data.WorkerId, offerIssuedEvent.Data.OfferId); +var accept = await client.AcceptJobOfferAsync(offerIssuedEvent.Data.WorkerId, offerIssuedEvent.Data.OfferId); ``` ::: zone-end await client.AcceptJobOfferAsync(offerIssuedEvent.Data.WorkerId, offerIssuedEven ```typescript // Event handler logic omitted-await client.acceptJobOffer(offerIssuedEvent.data.workerId, offerIssuedEvent.data.offerId); +const accept = await client.acceptJobOffer(offerIssuedEvent.data.workerId, offerIssuedEvent.data.offerId); ``` ::: zone-end await client.acceptJobOffer(offerIssuedEvent.data.workerId, offerIssuedEvent.dat ```python # Event handler logic omitted-client.accept_job_offer(offerIssuedEvent.data.worker_id, offerIssuedEvent.data.offer_id) +accept = client.accept_job_offer(offerIssuedEvent.data.worker_id, offerIssuedEvent.data.offer_id) ``` ::: zone-end client.accept_job_offer(offerIssuedEvent.data.worker_id, offerIssuedEvent.data.o ```java // Event handler logic omitted-client.acceptJobOffer(offerIssuedEvent.getData().getWorkerId(), offerIssuedEvent.getData().getOfferId()); +AcceptJobOfferResult accept = client.acceptJobOffer(offerIssuedEvent.getData().getWorkerId(), offerIssuedEvent.getData().getOfferId()); ``` ::: zone-end ## Decline job offers -The worker can decline job offers by using the SDK. Once the offer is declined, the job will be offered to the next available worker. The same job will not be offered to the worker that declined the job unless the worker is deregistered and registered again. +The worker can decline job offers by using the SDK. Once the offer is declined, the job is offered to the next available worker. The job is not offered to the same worker that declined the job until the worker is deregistered and registered again. ::: zone pivot="programming-language-csharp" client.declineJobOffer( ### Retry offer after some time -In some scenarios, a worker may want to automatically retry an offer after some time. For example, a worker may want to retry an offer after 5 minutes. To do this, the worker can use the SDK to decline the offer and specify the `retryOfferAfterUtc` property. +In some scenarios, a worker may want to automatically retry an offer after some time. For example, a worker may want to retry an offer after 5 minutes. To achieve this flow, the worker can use the SDK to decline the offer and specify the `retryOfferAfter` property. 
::: zone pivot="programming-language-csharp" client.declineJobOffer( ::: zone-end +## Complete the job ++Once the worker has completed the work associated with the job (for example, completed the call), we complete the job, which updates the status to `completed`. +++```csharp +await routerClient.CompleteJobAsync(new CompleteJobOptions(accept.Value.JobId, accept.Value.AssignmentId)); +``` ++++```typescript +await routerClient.completeJob(accept.jobId, accept.assignmentId); +``` ++++```python +router_client.complete_job(job_id = job.id, assignment_id = accept.assignment_id) +``` ++++```java +routerClient.completeJob(new CompleteJobOptions(accept.getJobId(), accept.getAssignmentId())); +``` +++## Close the job ++Once the worker is ready to take on new jobs, the worker should close the job, which updates the status to `closed`. Optionally, the worker can provide a disposition code to indicate the outcome of the job. +++```csharp +await routerClient.CloseJobAsync(new CloseJobOptions(accept.Value.JobId, accept.Value.AssignmentId) { + DispositionCode = "Resolved" +}); +``` ++++```typescript +await routerClient.closeJob(accept.jobId, accept.assignmentId, { dispositionCode: "Resolved" }); +``` ++++```python +router_client.close_job(job_id = job.id, assignment_id = accept.assignment_id, disposition_code = "Resolved") +``` ++++```java +routerClient.closeJob(new CloseJobOptions(accept.getJobId(), accept.getAssignmentId()) + .setDispositionCode("Resolved")); +``` ++ ## Next steps - Review how to [manage a Job Router queue](manage-queue.md). |
communication-services | Escalate Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/escalate-job.md | |
communication-services | Estimated Wait Time | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/estimated-wait-time.md | |
communication-services | Job Classification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md | Learn to use a classification policy in Job Router to dynamically resolve the qu ## Create a classification policy -The following example will leverage [PowerFx Expressions](https://powerapps.microsoft.com/blog/what-is-microsoft-power-fx/) to select both the queue and priority. The expression will attempt to match the Job label called `Region` equal to `NA` resulting in the Job being put in the `XBOX_NA_QUEUE`. Otherwise, the job will be sent to the fallback queue `XBOX_DEFAULT_QUEUE` as defined by `fallbackQueueId`. Additionally, the priority will be `10` if a label called `Hardware_VIP` was matched, otherwise it will be `1`. +The following example leverages [PowerFx Expressions](https://powerapps.microsoft.com/blog/what-is-microsoft-power-fx/) to select both the queue and priority. The expression attempts to match the Job label called `Region` equal to `NA` resulting in the Job being put in the `XBOX_NA_QUEUE`. Otherwise, the job is sent to the fallback queue `XBOX_DEFAULT_QUEUE` as defined by `fallbackQueueId`. Additionally, the priority is `10` if a label called `Hardware_VIP` was matched, otherwise it is `1`. ::: zone pivot="programming-language-csharp" ClassificationPolicy classificationPolicy = administrationClient.createClassific ## Submit the job -The following example will cause the classification policy to evaluate the Job labels. The outcome will place the Job in the queue called `XBOX_NA_QUEUE` and set the priority to `1`. +The following example causes the classification policy to evaluate the Job labels. The outcome places the Job in the queue called `XBOX_NA_QUEUE` and sets the priority to `1`. Before the classification policy is evaluated, the job's state is `pendingClassification`. Once the classification policy is evaluated, the job's state is updated to `queued`. ::: zone pivot="programming-language-csharp" client.createJob(new CreateJobWithClassificationPolicyOptions("job1", "voice", " ## Attaching Worker Selectors -You can use the classification policy to attach additional worker selectors to a job. +You can use the classification policy to attach more worker selectors to a job. ### Static Attachments -In this example, the Classification Policy is configured with a static attachment, which will always attach the specified label selector to a job. +In this example, the Classification Policy is configured with a static attachment, which always attaches the specified label selector to a job. ::: zone pivot="programming-language-csharp" administrationClient.createClassificationPolicy(new CreateClassificationPolicyOp ### Conditional Attachments -In this example, the Classification Policy is configured with a conditional attachment. So it will evaluate a condition against the job labels to determine if the said label selectors should be attached to the job. +In this example, the Classification Policy is configured with a conditional attachment. So it evaluates a condition against the job labels to determine if the said label selectors should be attached to the job. ::: zone pivot="programming-language-csharp" administrationClient.createClassificationPolicy(new CreateClassificationPolicyOp ### Weighted Allocation Attachments -In this example, the Classification Policy is configured with a weighted allocation attachment. 
This will divide up jobs according to the weightings specified and attach different selectors accordingly. Here, 30% of jobs should go to workers with the label `Vendor` set to `A` and 70% should go to workers with the label `Vendor` set to `B`. +In this example, the Classification Policy is configured with a weighted allocation attachment. This policy divides up jobs according to the weightings specified and attaches different selectors accordingly. Here, 30% of jobs should go to workers with the label `Vendor` set to `A` and 70% should go to workers with the label `Vendor` set to `B`. ::: zone pivot="programming-language-csharp" administrationClient.createClassificationPolicy(new CreateClassificationPolicyOp ## Reclassify a job after submission -Once the Job Router has received and classified a Job using a policy, you have the option of reclassifying it using the SDK. The following example illustrates one way to increase the priority of the Job to `10`, simply by specifying the **Job ID**, calling the `UpdateJobAsync` method, and updating the classificationPolicyId and including the `Hardware_VIP` label. +Once the Job Router has received and classified a Job using a policy, you have the option of reclassifying it using the SDK. The following example illustrates one way of increasing the priority of the Job to `10`, simply by specifying the **Job ID**, calling the `UpdateJobAsync` method, and updating the classificationPolicyId and including the `Hardware_VIP` label. ::: zone pivot="programming-language-csharp" client.updateJob(new UpdateJobOptions("job1") ::: zone-end > [!NOTE]-> If the job labels, queueId, channelId or worker selectors are updated, any existing offers on the job are revoked and you'll receive a [RouterWorkerOfferRevoked](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked) event for each offer from EventGrid. The job will be re-queued and you'll receive a [RouterJobQueued](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobqueued) event. Job offers may also be revoked when a worker's total capacity is reduced, or the channel configurations are updated. +> If the job labels, queueId, channelId or worker selectors are updated, any existing offers on the job are revoked and you receive a [RouterWorkerOfferRevoked](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked) event for each offer from EventGrid. The job is re-queued and you receive a [RouterJobQueued](../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobqueued) event. Job offers may also be revoked when a worker's total capacity is reduced, or the channel configurations are updated. |
communication-services | Manage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/manage-queue.md | |
communication-services | Preferred Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/preferred-worker.md | |
communication-services | Scheduled Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/scheduled-jobs.md | zone_pivot_groups: acs-js-csharp-java-python [!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include-document.md)] -In the context of a call center, customers may want to receive a scheduled callback at a later time. As such, you will need to create a scheduled job in Job Router. +In the context of a call center, customers may want to receive a scheduled callback at a later time. As such, you need to create a scheduled job in Job Router. ## Prerequisites In the context of a call center, customers may want to receive a scheduled callb ## Create a job using the ScheduleAndSuspendMode -In the following example, a job is created that will be scheduled 3 minutes from now by setting the `MatchingMode` to `ScheduleAndSuspendMode` with a `scheduleAt` parameter. This assumes that you've already [created a queue](manage-queue.md) with the queueId `Callback` and that there is an active [worker registered](../../concepts/router/matching-concepts.md) to the queue with available capacity on the `Voice` channel. +In the following example, a job is created that is scheduled 3 minutes from now by setting the `MatchingMode` to `ScheduleAndSuspendMode` with a `scheduleAt` parameter. This example assumes that you've already [created a queue](manage-queue.md) with the queueId `Callback` and that there's an active [worker registered](../../concepts/router/matching-concepts.md) to the queue with available capacity on the `Voice` channel. ::: zone pivot="programming-language-csharp" client.createJob(new CreateJobOptions("job1", "Voice", "Callback") ::: zone-end +> [!NOTE] +> The job's status after being scheduled is initially `PendingSchedule`, and once Job Router successfully schedules the job, the status is updated to `Scheduled`. + ## Wait for the scheduled time to be reached, then queue the job -When the scheduled time has been reached, Job Router will emit a [RouterJobWaitingForActivation event](subscribe-events.md#microsoftcommunicationrouterjobwaitingforactivation). If this event has been subscribed, the event can be parsed into a variable called `eventGridEvent`. At this time, some required actions may be performed, before enabling the job to be matched to a worker. For example, in the context of the contact center, such an action could be making an outbound call and waiting for the customer to accept the callback. Once the required actions are complete, the job can be queued by calling the `UpdateJobAsync` method with the `MatchingMode` set to `QueueAndMatchMode` and priority set to `100` to quickly find an eligible worker. +When the scheduled time has been reached, the job's status is updated to `WaitingForActivation` and Job Router emits a [RouterJobWaitingForActivation event](subscribe-events.md#microsoftcommunicationrouterjobwaitingforactivation) to Event Grid. If this event has been subscribed, some required actions may be performed, before enabling the job to be matched to a worker. For example, in the context of the contact center, such an action could be making an outbound call and waiting for the customer to accept the callback. Once the required actions are complete, the job can be queued by calling the `UpdateJobAsync` method with the `MatchingMode` set to `QueueAndMatchMode` and priority set to `100` to quickly find an eligible worker, which updates the job's status to `queued`. 
::: zone pivot="programming-language-csharp" if (eventGridEvent.EventType == "Microsoft.Communication.RouterJobWaitingForActi ## Next steps -- Learn how to [accept the Job Router offer](accept-decline-offer.md) that will be issued once a matching worker has been found for the job.+- Learn how to [accept the Job Router offer](accept-decline-offer.md) that is issued once a matching worker has been found for the job. |
communication-services | Get Started Router | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/router/get-started-router.md | |
communication-services | Get Started Data Channel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-data-channel.md | Last updated 05/04/2023 -+ # Quickstart: Add Data Channel messaging to your calling app |
communication-services | Add Voip Push Notifications Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-voip-push-notifications-event-grid.md | + + Title: Using Event Grid Notifications to send VOIP push payload to ANH ++description: Using Event Grid notifications from Azure Communication Services Native Calling to deliver incoming VOIP push payloads to devices via Azure Notification Hub (ANH). +++ Last updated : 07/25/2023++++# Deliver VOIP Push Notification to Devices without ACS Calling SDK ++This tutorial explains how to deliver VOIP push notifications to native applications without using the Azure Communication Services [register push notifications API](../how-tos/calling-sdk/push-notifications.md). ++## Current Limitations +The current limitations of using the ACS Native Calling SDK are: + * Device token information is kept for only 24 hours after the register push notification API is called. After 24 hours, the device endpoint information is deleted, and incoming calls aren't delivered to those devices unless they call the register push notification API again. + * Push notifications can't be delivered using Baidu or any other notification type that Azure Notification Hub supports but the ACS SDK doesn't yet support. ++## Setup for listening to events from Event Grid notifications +To listen to the `Microsoft.Communication.IncomingCall` event from Event Grid notifications of the Azure Communication Services Calling resource, you need: +1. Azure Functions with APIs to: + 1. Save device endpoint information. + 2. Delete device endpoint information. + 3. Get device endpoint information for a given `CommunicationIdentifier`. +2. An Azure Function API with an EventGridTrigger that listens to the `Microsoft.Communication.IncomingCall` event from the Azure Communication Services resource. +3. A database, such as MongoDB, to save the device endpoint information. +4. An Azure Notification Hub to deliver the VOIP notifications. ++## Steps to deliver the Push Notifications +Here are the steps to deliver the push notification: +1. Instead of calling the `CallAgent.registerPushNotifications` API with the device token when the application starts, send the device token to the Azure function app. +2. When there's an incoming call for an ACS user, the Azure Communication Services Calling resource triggers the `EventGridTrigger` Azure Function API with the incoming call payload. +3. Get all the device token information from the database. +4. Convert the payload into the VOIP push notification payload format expected by the `PushNotificationInfo.fromDictionary` API in the iOS SDK. +5. Send the push payload using the REST API provided by Azure Notification Hub. +6. The VOIP push is delivered to the device, and the `CallAgent.handlePush` API should be called. ++## Sample +A code sample is provided [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/add-calling-push-notifications-event-grid). |
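The numbered flow above could be sketched as a Python Azure Function with an Event Grid trigger, roughly as follows. Everything in this sketch is illustrative: `get_device_tokens` is a hypothetical database helper, the payload conversion is simplified, and the Notification Hubs direct-send URL, headers, and SAS token format are assumptions that should be verified against the Notification Hubs REST API documentation before use.

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse

import azure.functions as func
import requests

HUB_URI = "https://<namespace>.servicebus.windows.net/<hub>"   # assumed hub address
SAS_KEY_NAME = "DefaultFullSharedAccessSignature"
SAS_KEY = "<key>"


def sas_token(uri: str) -> str:
    # Shared access signature in the usual Service Bus / Notification Hubs format (assumed).
    expiry = str(int(time.time()) + 300)
    to_sign = urllib.parse.quote_plus(uri) + "\n" + expiry
    signature = base64.b64encode(
        hmac.new(SAS_KEY.encode(), to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return ("SharedAccessSignature sr=" + urllib.parse.quote_plus(uri)
            + "&sig=" + urllib.parse.quote_plus(signature)
            + "&se=" + expiry + "&skn=" + SAS_KEY_NAME)


def get_device_tokens(callee_raw_id: str) -> list:
    # Hypothetical lookup against the database that your "save device endpoint" API populates.
    return []


def main(event: func.EventGridEvent):
    call_data = event.get_json()            # Microsoft.Communication.IncomingCall payload
    callee = call_data["to"]["rawId"]       # the ACS user being called
    voip_payload = json.dumps(call_data)    # placeholder for the PushNotificationInfo-shaped payload

    for token in get_device_tokens(callee):
        # Notification Hubs "direct send" REST call (endpoint and headers are assumptions).
        requests.post(
            HUB_URI + "/messages/?direct&api-version=2015-04",
            headers={
                "Authorization": sas_token(HUB_URI),
                "Content-Type": "application/json",
                "ServiceBusNotification-Format": "apple",
                "ServiceBusNotification-DeviceHandle": token,
            },
            data=voip_payload,
        )
```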
communication-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md | Listen to Azure Communication Services PMs Ashwinder Bhatti and Anuj Bhatia talk [Watch the video](https://youtu.be/EYTjH1xrmtI) -[Learn more about Azure Cognitive Services](https://azure.microsoft.com/products/cognitive-services/) +[Learn more about Azure AI services](https://azure.microsoft.com/products/cognitive-services/) [Learn more about Azure Event Grid](../event-grid/overview.md) In May, we launched a host of new features, including: <br> -Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, as they're released, visit the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog) +Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, as they're released, visit the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog) |
communications-gateway | Emergency Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling.md | Microsoft Teams always sends location information on SIP INVITEs for emergency c - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location. - Static locations that you assign to numbers. - The Operator Connect API allows you to associate numbers with locations that enterprise administrators have already configured in the Microsoft Teams Admin Center as part of uploading numbers.- - Azure Communications Gateway's API Bridge Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded. + - Azure Communications Gateway's Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded. - Static locations that your enterprise customers assign. When you upload numbers, you can choose whether enterprise administrators can modify the location information associated with each number. > [!NOTE] |
communications-gateway | Manage Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise.md | Title: Manage an enterprise in Azure Communications Gateway's Number Management Portal -description: Learn how to manage enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communication Gateway's API Bridge Number Management Portal. +description: Learn how to manage enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communication Gateway's Number Management Portal. |
communications-gateway | Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/onboarding.md | Title: Onboarding to Microsoft Teams Phone with Azure Communications Gateway -description: Understand the Azure Communications Gateway Basic Integration Included Benefit for onboarding to Operator Connect and your other options for onboarding +description: Understand the Included Benefits and your other options for onboarding Previously updated : 01/18/2023 Last updated : 07/27/2023 -# Onboarding to Microsoft Teams with Azure Communications Gateway +# Onboarding with Included Benefits for Azure Communications Gateway -To launch Operator Connect and/or Teams Phone Mobile, you'll need an onboarding partner. Launching requires changes to the Operator Connect or Teams Phone Mobile environments and your onboarding partner manages the integration process and coordinates with Microsoft Teams on your behalf. They can also help you design and set up your network for success. +To launch Operator Connect and/or Teams Phone Mobile, you need an onboarding partner. Launching requires changes to the Operator Connect or Teams Phone Mobile environments and your onboarding partner manages the integration process and coordinates with Microsoft Teams on your behalf. They can also help you design and set up your network for success. -If you're launching Operator Connect, Azure Communications Gateway includes an off-the-shelf onboarding service called the Basic Integration Included Benefit. It's suitable for simple Operator Connect use cases. +We provide a customer success program and onboarding service called Included Benefits for operators deploying Azure Communications Gateway. We work with your team to enable rapid and effective solution design and deployment. The program includes tailored guidance from Azure for Operators engineers, using proven practices and architectural guides. -If you're launching Teams Phone Mobile, you're not eligible for the Basic Integration Included Benefit. See [Alternatives to the Basic Integration Included Benefit](#alternatives-to-the-basic-integration-included-benefit). +## Eligibility for Included Benefits and alternatives -## Onboarding with the Basic Integration Included Benefit for Operator Connect +Included Benefits is available to operator customers who: -The Basic Integration Included Benefit (BIIB) helps you to onboard customers to your Microsoft Teams Operator Connect offering as quickly as possible. You'll need to meet the [eligibility requirements](#eligibility-for-the-basic-integration-included-benefit). +- Have an active paid Azure subscription. +- Have a defined project using Azure Communications Gateway with intent to deploy. A defined project has an executive sponsor, committed customer/partner resources, established success metrics, and clear timelines for start and end of the project. +- Are located in a country/region supported by Azure Communications Gateway. Engagements are in English (although we may offer engagements in your local language, depending on the availability of our teams). -If you're eligible, we'll assign the following people as your onboarding team. +There's no cost to you for the Included Benefits program. -- A remote **Project Manager** as a single point of contact. The Project Manager is responsible for communicating the schedule and keeping you up to date with your onboarding status.-- Microsoft **Delivery Consultants** and other technical personnel, led by a **Technical Delivery Manager**. 
These people guide and support you through the onboarding process for Microsoft Teams Operator Connect. The process includes providing and certifying the Operator Connect SBC functionality and launching your Operator Connect service in the Teams Admin Center.+If your requirements exceed the scope of the program or you fail to meet your responsibilities, at any point we might ask you to: -### Eligibility for the Basic Integration Included Benefit +- Contact your Microsoft sales representative to arrange extra services. +- Find your own partner able to meet your needs (for example a System Integrator). -To be eligible for the BIIB, you must first deploy an Azure Communications Gateway resource. In addition: +## Process overview for Included Benefits -- You must be launching Microsoft Teams Operator Connect for fixed-line calls (not Teams Phone Mobile).-- Your network must be capable of meeting the [reliability requirements for Azure Communications Gateway](reliability-communications-gateway.md).-- You must not have more than two Azure service regions (the regions containing the voice and API infrastructure for traffic).-- You must not require any interworking options that aren't listed in the [interoperability description](interoperability.md).-- You must not require any API customization as part of the API Bridge feature (if you choose to deploy the API Bridge).+Azure for Operators partners with you to drive your business outcomes and success. Our goal is to empower you to deliver a new breed of solutions that meet the fast-changing needs of business today and in the future. To achieve this goal, Included Benefits takes a solution-centric approach, providing you with design principles and tools for your solutions: we work with you from design to deployment in a production environment. When you use Included Benefits to help accelerate and deploy solutions on Azure, there are three phases involved in the process: -If you don't meet these requirements, see [Alternatives to the Basic Integration Included Benefit](#alternatives-to-the-basic-integration-included-benefit). +### Phase 1: Discovery -If we (Microsoft) determine at our sole discretion that your integration needs are unusually complex, we might: +Once you've identified your Azure Communications Gateway project, you can ask your account team to nominate your project for Included Benefits. The Included Benefits team works with you to identify key stakeholders, understand the vision/goals, and assess the architectural needs for problems you're trying to solve. -- Decline to provide the BIIB.-- Stop providing the BIIB, even if we've already started providing it.+In this phase, we establish eligibility, scope and conditions of success for the project. -This limitation applies even if you're otherwise eligible. +### Phase 2: Solution enablement -We might also stop providing the BIIB if you don't meet [your obligations with the Basic Integration Included Benefit](#your-obligations-with-the-basic-integration-included-benefit), including making timely responses to questions and fulfilling dependencies. +If your project is approved, the Included Benefits team carries out architectural design reviews and delivers solution enablement guidance by sharing proven practices and design principles for the solution. 
-### Phases of the Basic Integration Included Benefit +A typical project might include the following engagements: -When you've deployed your Azure Communications Gateway resource, your onboarding team will help you to ensure that Azure Communications Gateway and your network are properly configured for Operator Connect. Your onboarding team will then help you through the Operator Connect onboarding process, so that your service is launched in the Teams Admin Center. +- Technical talks and workshops: subject matter deep dives and best practices to accelerate your deployment. These meetings often provide an overview of available documentation and admin consoles. +- Review of your architecture and design with Azure Communications Gateway. +- Checkpoint meetings: weekly or biweekly touchpoints to ensure that your deployment is progressing to completion. -The BIIB has three phases. During these phases, you'll be responsible for some steps. See [Your obligations with the Basic Integration Included Benefit](#your-obligations-with-the-basic-integration-included-benefit). +### Phase 3: Deployment -#### Phase 1: gathering information +The Included Benefits team supports your in-house resources (your teams and people) or a partner to successfully deploy Azure Communications Gateway into production. -We'll share the Teams Operator Connect specification documents (for example, for network connectivity) if you don't already have access to them. We'll also provide an Operator Connect onboarding form and a proposed test plan. When you've given us the information listed in the onboarding form, your onboarding team will work with you to create a project timeline describing your path to launching in the Teams Admin Center. +A typical project might include the following engagements: -#### Phase 2: preparing Azure Communications Gateway and your networks +- Integration testing: support and advice for troubleshooting of issues identified with the solution. +- Help gathering artifacts for Microsoft Teams Operator Connect Certification (if applicable) -We'll use the information you provided with the onboarding form to set up Azure Communications Gateway. We'll also provide guidance on preparing your own environment for Azure Communications Gateway. +## Scope of Included Benefits -#### Phase 3: preparing for live traffic +Included Benefits is aimed at standard deployments of Azure Communications Gateway. It covers: -Your onboarding team will work through the steps described in [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md) with you. As part of these steps, we'll: +- Initial deployment of Azure Communications Gateway and integration into operator's production network +- If you're launching Microsoft Teams Operator Connect, fulfilling the role of an Operator Connect Accelerator by guiding and supporting you through the onboarding process for Microsoft Teams Operator Connect. - - A technical overview of the Azure Communications Gateway platform. - - How your engineering and operations staff can interact with Azure Communications Gateway and Operator Connect. - - How your teams can get support for Azure Communications Gateway and Operator Connect. +## Responsibilities with Included Benefits -### Your obligations with the Basic Integration Included Benefit +We provide: -You're responsible for: +- Guidance on: + - Enablement of Azure Communications Gateway (including remote support). + - Configuration tasks and integration testing. + - Azure platform configuration. 
+- Architectural guidance and design principles for eligible solutions. +- Resources committed to making your project successful. -- Arranging Microsoft Azure Peering Service (MAPS) connectivity. If you haven't finished rolling out MAPS yet, you must have started the roll-out and have a known delivery date.-- Signing the Operator Connect agreement.-- Providing someone as a single point-of-contact to assist us in collecting information and coordinating your resources. This person must have the authority to review and approve deliverables, and otherwise ensure that these responsibilities are carried out.-- Completing the onboarding form after we've supplied it.-- Providing test numbers and working with your onboarding team to run the test plan, including testing from your network to find call flow integration issues.-- Providing timely responses to questions, issues and dependencies to ensure the project finishes on time.-- Configuring your Operator Connect and Azure environments as described in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md), [Deploy Azure Communications Gateway](deploy.md) and [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md).-- Ensuring that your network is compliant with the Microsoft Teams _Network Connectivity Specification_ and _Operational Excellence Specification_, and any other specifications provided by Microsoft Teams.-- Ensuring that your network engineers watch the training that your onboarding team provides.+Your responsibilities include: -## Alternatives to the Basic Integration Included Benefit --If you're not eligible for the Basic Integration Included Benefit (because you're deploying Teams Phone Mobile or you don't meet the [eligibility requirements](#eligibility-for-the-basic-integration-included-benefit)), you must arrange onboarding separately. You can: --- Contact your Microsoft sales representative to arrange onboarding through Microsoft.-- Find your own onboarding partner.+- Providing someone as a single point-of-contact to assist us in collecting information and coordinating your resources (your teams and people). This person must have the authority to review and approve deliverables, and otherwise ensure that these responsibilities are carried out. +- Providing timely responses to questions, issues, and dependencies to ensure the project finishes on time. +- Creating architectural documentation and technical documentation specific to your organization. +- Hands-on administration, support, and deployment on Azure. +- Producing any reports, presentations, and meeting minutes that are specific to your organization. +- Overall program management and project management for your resources. +- Managing your deployment, third-party suppliers (for example, any system integrators or provisioning partners), and your suppliers' equipment, as necessary. +- Providing test numbers, devices and testing from your network to find integration issues. +- Demos, POCs or extra testing specific to your organization. ## Next steps |
communications-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md | Azure Communications Gateway's voice features include: ## API features -Azure Communications Gateway includes optional API integration features. These features can help you to: +Azure Communications Gateway includes optional API integration features. These features can help you to speed up your rollout and monetization of Teams Calling support. -- Adapt your existing systems to meet the requirements of the Operator Connect and Teams Phone Mobile programs with minimal disruption.-- Provide a consistent look and feel across your Operator Connect and Teams Phone Mobile offerings and the rest of your portfolio.-- Speed up your rollout and monetization of Teams Calling support.--### CallDuration upload --Azure Communications Gateway can use the Operator Connect APIs to upload information about the duration of individual calls into the Microsoft Teams environment. This allows Microsoft Teams clients to display the call duration recorded by your network, instead of the call duration recorded by Microsoft Teams. Providing this information to Microsoft Teams is a requirement of the Operator Connect program that Azure Communications Gateway performs on your behalf. --### API Bridge Number Management Portal +### Number Management Portal Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project. The Number Management Portal is available as part of the optional API Bridge feature. > [!TIP]-> The API Bridge Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals. +> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals. -### API mediation +### CallDuration upload -Azure Communications Gateway's API Bridge feature includes a flexible custom interface to the Operator Connect APIs. Microsoft Professional Services can create REST or SOAP APIs that adapt the Teams Operator Connect API to your networks' requirements for APIs. These custom APIs can reduce the size of an IT integration project by reducing the changes required in your existing infrastructure. +Azure Communications Gateway can use the Operator Connect APIs to upload information about the duration of individual calls into the Microsoft Teams environment. This allows Microsoft Teams clients to display the call duration recorded by your network, instead of the call duration recorded by Microsoft Teams. Providing this information to Microsoft Teams is a requirement of the Operator Connect program that Azure Communications Gateway performs on your behalf. -The API mediation function is designed to map between CRM and BSS systems in your network and the Teams Operator Connect API. Your CRM and BSS systems must be able to handle the information required by Teams Operator Connect. 
You must work with Microsoft to determine whether you can use the API mediation feature and to scope the project. ## Next steps - [Learn how Azure Communications Gateway fits into your network](interoperability.md).-- [Learn about onboarding to Microsoft Teams and Azure Communications Gateway's Basic Integration Included Benefit](onboarding.md).+- [Learn about onboarding to Microsoft Teams and Azure Communications Gateway's Included Benefits](onboarding.md). - [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md). |
communications-gateway | Plan And Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md | For example, if you have 28,000 users assigned to the deployment each month you' > [!TIP] > If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed. -If you choose to deploy the API Bridge (for API mediation or the API Bridge Number Management Portal), you'll also be charged for your API Bridge usage. Fees for API Bridge work in the same way as the SBC User meters: a service availability meter and a per-user meter. The number of users charged for the API Bridge is always the same as the number of users charged on the SBC User meters. +If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the SBC User meters: a service availability meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the SBC User meters. > [!NOTE] > A user is any telephone number that meets all the following criteria. |
communications-gateway | Prepare For Live Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md | Before you can launch your Operator Connect or Teams Phone Mobile service, you a In this article, you learn about the steps you and your onboarding team must take. > [!TIP]-> In many cases, your onboarding team is from Microsoft, provided through the [Basic Integration Included Benefit](onboarding.md) or through a separate arrangement. +> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement. ## Prerequisites In this article, you learn about the steps you and your onboarding team must tak ## Methods -In some parts of this article, the steps you must take depend on whether your deployment includes the API Bridge. This article provides instructions for both types of deployment. Choose the appropriate instructions. +In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions. ## 1. Connect Azure Communications Gateway to your networks Your onboarding team must register the test enterprise tenant that you chose in 1. Select your company in the list of operators, fill in the form and select **Add as my operator**. 1. In your test tenant, create some test users (if you don't already have suitable users). These users must be licensed for Teams Phone System and in Teams Only mode. 1. Configure emergency locations in your test tenant.-1. Upload numbers in the API Bridge Number Management Portal (if you deployed the API Bridge) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team. +1. Upload numbers in the Number Management Portal (if you chose to deploy it as part of Azure Communications Gateway) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team. - # [API Bridge Number Management Portal](#tab/api-bridge) + # [Number Management Portal](#tab/number-management-portal) - 1. Open the API Bridge Number Management Portal from your list of Azure resources. - 1. Select **Go to Consents**. + 1. Sign in to the [Azure portal](https://azure.microsoft.com/). + 1. In the search bar at the top of the page, search for your Communications Gateway resource. + 1. Select your Communications Gateway resource. + 1. On the overview page, select **Consents** in the sidebar. 1. Select your test tenant. 1. From the menu, select **Update Relationship Status**. Set the status to **Agreement signed**. 1. From the menu, select **Manage Numbers**. 1. Select **Upload numbers**. 1. Fill in the fields as required, and then select **Review + upload** and **Upload**. - # [Operator Portal](#tab/no-api-bridge) + # [Operator Portal](#tab/no-number-management-portal) 1. Open the Operator Portal. 1. Select **Customer Consents**. Your staff can use a selection of key metrics to monitor Azure Communications Ga Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning. -# [API Bridge](#tab/api-bridge) +# [Number Management Portal](#tab/number-management-portal) -If you have the API Bridge, your onboarding team can obtain proof automatically. You don't need to do anything. 
+If you have the Number Management Portal, your onboarding team can obtain proof automatically. You don't need to do anything. -# [Without the API Bridge](#tab/no-api-bridge) +# [Without the Number Management Portal](#tab/no-number-management-portal) -If you don't have the API Bridge, you must provide your onboarding team with proof that you have made successful API calls for: +If you don't have the Number Management Portal, you must provide your onboarding team with proof that you have made successful API calls for: - Partner consent - TN Upload to Account |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | The following sections describe the information you need to collect and the deci You must be a Telecommunications Service Provider who has signed an Operator Connect agreement with Microsoft. For more information, see [Operator Connect](https://cloudpartners.transform.microsoft.com/practices/microsoft-365-for-operators/connect). -You need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Basic Integration Included Benefit](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself. +You need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself. You must own globally routable numbers that you can use for testing, as follows. |
communications-gateway | Provision User Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md | This article will guide you through how to configure the permissions required fo - Deploy Azure Communications Gateway through the portal - Raise customer support requests (support tickets) - Monitor Azure Communications Gateway-- Use the API Bridge Number Management Portal for provisioning+- Use the Number Management Portal for provisioning ## Prerequisites Your staff might need different user roles, depending on the tasks they need to | Deploying Azure Communications Gateway |**Contributor** access to your subscription| | Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level| |Monitoring logs and metrics | **Reader** access to your subscription|-|Using the API Bridge Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** permissions to the Azure portal for your subscription| +|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** permissions to the Azure portal for your subscription| ## 2. Configure user roles You need to use the Azure portal to configure user roles. - Know who needs access. - Know the appropriate user role or roles to assign them. - Are signed in with a user that is assigned a role that has role assignments write permission, such as **Owner** or **User Access Administrator** for the subscription.-1. If you're managing access to the API Bridge Number Management Portal, ensure that you're signed in with a user that can change permissions for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md). +1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user that can change permissions for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md). ### 2.2 Assign a user role 1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [1. Understand the user roles required for Azure Communications Gateway](#1-understand-the-user-roles-required-for-azure-communications-gateway).-1. If you're managing access to the API Bridge Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for each user in the Project Synergy application. +1. 
If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for each user in the Project Synergy application. ## Next steps |
communications-gateway | Reliability Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md | The SIP peers in Azure Communications Gateway provide OPTIONS polling to allow y ### Disaster recovery: cross-region failover for management regions -Voice traffic and the API Bridge are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the API Bridge Number Management Portal might need to sign in again. +Voice traffic and provisioning through the Number Management Portal are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the Number Management Portal might need to sign in again. Monitoring services might be temporarily unavailable until service has been restored. If the management region experiences extended downtime, we'll migrate the impacted resources to another available region. |
communications-gateway | Request Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md | Perform initial troubleshooting to help determine if you should raise an issue w Raise an issue with Azure Communications Gateway if you experience an issue with: - SIP and RTP exchanged by Azure Communications Gateway and your network.+- The Number Management Portal. - Your Azure bill relating to Azure Communications Gateway.-- The API Bridge, including the API Bridge Number Management Portal. You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level. You must have an **Owner**, **Contributor**, or **Support Request Contributor** 1. A new **Service** option will appear giving you the option to select either **My services** or **All services**. Select **My services**. 1. In **Service type** select **Azure Communications Gateway** from the drop-down menu. 1. A new **Problem type** option will appear. Select the problem type that most accurately describes your issue from the drop-down menu.- * Select **API Bridge Issue** if your API Bridge Number Management Portal is returning errors when you try to gain access or carry out actions. + * Select **API Bridge Issue** if your Number Management Portal is returning errors when you try to gain access or carry out actions. * Select **Configuration and Setup** if you experience issues during initial provisioning and onboarding, or if you want to change configuration for an existing deployment. * Select **Monitoring** for issues with metrics and logs. * Select **Voice Call Issue** if calls aren't connecting, have poor quality, or show unexpected behavior. |
confidential-computing | Quick Create Confidential Vm Portal Amd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md | To create a confidential VM in the Azure portal using an Azure Marketplace image 1. For the key type, select **RSA-HSM** 1. Select your key size+ + n. Under Confidential Key Options select **Exportable** and set the Confidential operation policy as **CVM confidential operation policy**. - 1. Select **Create** to finish creating the key. + o. Select **Create** to finish creating the key. - 1. Select **Review + create** to create new disk encryption set. Wait for the resource creation to complete successfully. + p. Select **Review + create** to create new disk encryption set. Wait for the resource creation to complete successfully. - 1. Go to the disk encryption set resource in the Azure portal. + q. Go to the disk encryption set resource in the Azure portal. - 1. Select the pink banner to grant permissions to Azure Key Vault. -- > [!IMPORTANT] - > You must perform this step to successfully create the confidential VM. + r. Select the pink banner to grant permissions to Azure Key Vault. + + > [!IMPORTANT] + > You must perform this step to successfully create the confidential VM. 1. As needed, make changes to settings under the tabs **Networking**, **Management**, **Guest Config**, and **Tags**. |
connectors | Connectors Native Http Swagger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http-swagger.md | This built-in trigger sends an HTTP request to a URL for a Swagger file that des 1. In the **SWAGGER ENDPOINT URL** box, enter the URL for the Swagger file that you want, and select **Next**. - Make sure to use or create your own endpoint. As an example only, these steps use the following [Cognitive Services Face API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) Swagger URL located in the West US region and might not work in your specific trigger: + Make sure to use or create your own endpoint. As an example only, these steps use the following [Azure AI Face API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) Swagger URL located in the West US region and might not work in your specific trigger: `https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/export?DocumentFormat=Swagger&ApiName=Face%20API%20-%20V1.0` This built-in action sends an HTTP request to the URL for the Swagger file that 1. In the **SWAGGER ENDPOINT URL** box, enter the URL for the Swagger file that you want, and select **Next**. - Make sure to use or create your own endpoint. As an example only, these steps use the following [Cognitive Services Face API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) Swagger URL located in the West US region and might not work in your specific action: + Make sure to use or create your own endpoint. As an example only, these steps use the following [Azure AI Face API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) Swagger URL located in the West US region and might not work in your specific action: `https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/export?DocumentFormat=Swagger&ApiName=Face%20API%20-%20V1.0` |
container-apps | Azure Resource Manager Api Spec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md | -> [!NOTE] -> Azure Container Apps resources have migrated from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details. +## API versions ++The latest management API versions for Azure Container Apps are: ++- [`2022-10-01`](/rest/api/containerapps/stable/container-apps) (stable) +- [`2023-04-01-preview`](/rest/api/containerapps/preview/container-apps) (preview) ++### Updating API versions ++To use a specific API version in ARM or Bicep, update the version referenced in your templates. To use the latest API version in the Azure CLI, update the Azure Container Apps extension by running the following command: ++```bash +az extension add -n containerapp --upgrade +``` ++To programmatically manage Azure Container Apps with the latest API version, use the latest versions of the management SDK: ++- [.NET](/dotnet/api/azure.resourcemanager.appcontainers) +- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/appcontainers/armappcontainers) +- [Java](/java/api/overview/azure/resourcemanager-appcontainers-readme) +- [Node.js](/javascript/api/overview/azure/arm-appcontainers-readme) +- [Python](/python/api/azure-mgmt-appcontainers/azure.mgmt.appcontainers) ## Container Apps environment |
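As a companion to the CLI upgrade command shown in the row above, here's a minimal sketch of using the Python management SDK (`azure-mgmt-appcontainers`) to manage Container Apps programmatically; the subscription ID and resource group are placeholders, and the exact operation names should be checked against the SDK version you install.

```python
# Minimal sketch: listing Container Apps with the Python management SDK.
# Requires: pip install azure-identity azure-mgmt-appcontainers
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

subscription_id = "<subscription-id>"     # placeholder
resource_group = "<resource-group-name>"  # placeholder

client = ContainerAppsAPIClient(
    credential=DefaultAzureCredential(),
    subscription_id=subscription_id,
)

# Each SDK release pins a specific management API version, so upgrading the package
# is how you pick up a newer API version when managing apps programmatically.
for app in client.container_apps.list_by_resource_group(resource_group):
    print(app.name, app.location)
```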
container-apps | Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md | You can define multiple containers in a single container app to implement the [s To run multiple containers in a container app, add more than one container in the `containers` array of the container app template. -### <a name="init-containers"></a>Init containers (preview) +### <a name="init-containers"></a>Init containers You can define one or more [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in a container app. Init containers run before the primary app container and can be used to perform initialization tasks such as downloading data or preparing the environment. |
container-apps | Dapr Component Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-connection.md | Start by navigating to the Dapr component creation feature. 1. In the Azure portal, navigate to your Container Apps environment. 1. In the left-side menu, under **Settings**, select **Dapr components**. -1. From the top menu, select **Add** > **Azure component (preview)** to open the **Add Dapr Component** configuration pane. +1. From the top menu, select **Add** > **Azure component** to open the **Add Dapr Component** configuration pane. :::image type="content" source="media/dapr-component-connection/select-azure-component.png" alt-text="Screenshot of selecting Azure Component from the drop down menu."::: |
container-apps | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md | Now that you've learned about Dapr and some of the challenges it solves: [dapr-cncf]: https://www.cncf.io/projects/dapr/ [dapr-args]: https://docs.dapr.io/reference/arguments-annotations-overview/ [dapr-component]: https://docs.dapr.io/concepts/components-concept/-[dapr-component-spec]: https://docs.dapr.io/operations/components/component-schema/ +[dapr-component-spec]: https://docs.dapr.io/reference/resource-specs/ [dapr-release]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions |
container-apps | Manage Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md | Here, a connection string to a queue storage account is declared. The value for -### <a name="reference-secret-from-key-vault"></a>Reference secret from Key Vault (preview) +### <a name="reference-secret-from-key-vault"></a>Reference secret from Key Vault When you define a secret, you create a reference to a secret stored in Azure Key Vault. Container Apps automatically retrieves the secret value from Key Vault and makes it available as a secret in your container app. |
container-apps | Sticky Sessions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sticky-sessions.md | -# Session Affinity in Azure Container Apps (preview) +# Session Affinity in Azure Container Apps Session affinity, also known as sticky sessions, is a feature that allows you to route all requests from a client to the same replica. This feature is useful for stateful applications that require a consistent connection to the same replica. If your app doesn't require session affinity, we recommend that you don't enable > [!NOTE] > Session affinity is only supported when your app is in [single revision mode](revisions.md#single-revision-mode) and the ingress type is HTTP. > -> This feature is in public preview. ## Configure session affinity |
container-registry | Authenticate Kubernetes Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-kubernetes-options.md | Last updated 10/11/2022 You can use an Azure container registry as a source of container images for Kubernetes, including clusters you manage, managed clusters hosted in [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) or other clouds, and "local" Kubernetes configurations such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/). -To pull images to your Kuberentes cluster from an Azure container registry, an authentication and authorization mechanism needs to be established. Depending on your cluster environment, choose one of the following methods: +To pull images to your Kubernetes cluster from an Azure container registry, an authentication and authorization mechanism needs to be established. Depending on your cluster environment, choose one of the following methods: ## Scenarios |
container-registry | Container Registry Get Started Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md | +tags: azure-resource-manager, bicep If you don't have an Azure subscription, create a [free](https://azure.microsoft ## Review the Bicep file -Use Visual studio code or your favorite editor to create a file with the following content and name it **main.bicep**: +Use Visual Studio Code or your favorite editor to create a file with the following content and name it **main.bicep**: ```bicep @minLength(5) |
container-registry | Container Registry Get Started Geo Replication Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md | +tags: azure-resource-manager More Azure Container Registry template samples can be found in the [quickstart t * **Region**: select a location for the resource group. Example: **Central US**. * **Acr Name**: accept the generated name for the registry, or enter a name. It must be globally unique. * **Acr Admin User Enabled**: accept the default value.- * **Location**: accept the generated location for the registry's home replica, or enter a location such as **Central US**. + * **Location**: accept the generated location for the registry's home replica, or enter a location such as **Central US**. * **Acr Sku**: accept the default value. * **Acr Replica Location**: enter a location for the registry replica, using the region's short name. It must be different from the home registry location. Example: **westeurope**. |
cosmos-db | Hierarchical Partition Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md | |
cosmos-db | How To Restore In Account Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-restore-in-account-continuous-backup.md | |
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md | Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service of - **Active-active database**: Unlike MongoDB Atlas, Cosmos DB for MongoDB supports active-active across multiple regions. Databases can span multiple regions, with no single point of failure for **writes and reads for the same data**. MongoDB Atlas global clusters only support active-passive deployments for writes for the same data. - **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This means that you can scale your database to the exact size you need, without paying for unused resources. -- **Real time analytics (HTAP) at any scale**: Run analytics workloads against your transactional MongoDB data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Easily create Power BI dashboards, integrate with Azure Machine Learning and Azure Cognitive Services, and bring all of your data from your MongoDB workloads into a single data warehousing solution. Learn more about the [Azure Synapse Link](../synapse-link.md).+- **Real time analytics (HTAP) at any scale**: Run analytics workloads against your transactional MongoDB data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Easily create Power BI dashboards, integrate with Azure Machine Learning and Azure AI services, and bring all of your data from your MongoDB workloads into a single data warehousing solution. Learn more about the [Azure Synapse Link](../synapse-link.md). - **Serverless deployments**: Cosmos DB for MongoDB offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it. |
cosmos-db | Sdk Java Spring Data V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md | The [Spring Framework](https://spring.io/projects/spring-framework) is a program You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). -## Version Support Policy +## Version support policy -### Spring Boot Version Support +### Spring Boot version support This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-boot-version-support) for more information. -### Spring Data Version Support +### Spring Data version support This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-data-version-support) for more information. -### Which Version of Azure Spring Data Azure Cosmos DB Should I Use +### Which version of Azure Spring Data Azure Cosmos DB should I use Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with Spring Boot / Spring Cloud version. |
cosmos-db | Sdk Java Spring Data V5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v5.md | + + Title: 'Spring Data Azure Cosmos DB v5 for API for NoSQL release notes and resources' +description: Learn about the Spring Data Azure Cosmos DB v5 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK. ++++ms.devlang: java + Last updated : 07/24/2023+++++# Spring Data Azure Cosmos DB v5 for API for NoSQL: Release notes and resources +++The Spring Data Azure Cosmos DB version 5 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact. ++The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application. ++You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). ++## Version support policy ++### Spring Boot version support ++This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/feature/spring-boot-3/sdk/spring/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/feature/spring-boot-3/sdk/spring/azure-spring-data-cosmos#spring-boot-version-support) for more information. ++### Spring Data version support ++This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/feature/spring-boot-3/sdk/spring/azure-spring-data-cosmos#spring-data-version-support) for more information. ++### Which version of Azure Spring Data Azure Cosmos DB should I use ++Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-java/tree/feature/spring-boot-3/sdk/spring/azure-spring-data-cosmos#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with Spring Boot / Spring Cloud version. ++> [!IMPORTANT] +> These release notes are for version 5 of Spring Data Azure Cosmos DB. +> +> Azure Spring Data Azure Cosmos DB SDK has dependency on the Spring Data framework, and supports only the API for NoSQL. 
+> +> See these articles for information about Spring Data on other Azure Cosmos DB APIs: +> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db) +> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db) +> ++## Get started fast ++ Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector. ++ Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below: ++ ```xml + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-spring-data-cosmos</artifactId> + <version>latest-version</version> + </dependency> + ``` ++## Helpful content ++| Content | Link | +||| +| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v5](https://github.com/Azure/azure-sdk-for-jav) | +| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v5 documentation](https://github.com/Azure/azure-sdk-for-jav) | +| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) | +| **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) | +| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/feature/spring-boot-3/sdk/spring/azure-spring-data-cosmos) | +| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) | +| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)| +| **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4.md)| +| **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4.md) | +| **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop) ++## Release history +Release history is maintained in the azure-sdk-for-java repo, for detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav). ++## Recommended version ++It's strongly recommended to use version 5.3.0 and above. ++## Additional notes ++* Spring Data Azure Cosmos DB v5 supports only Java JDK 17 and above. ++## FAQ +++## Next steps ++Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/). ++Learn more about the [Spring Framework](https://spring.io/projects/spring-framework). ++Learn more about [Spring Boot](https://spring.io/projects/spring-boot). ++Learn more about [Spring Data](https://spring.io/projects/spring-data). |
cosmos-db | Product Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md | Updates that don't directly affect the internals of a cluster are rolled out g Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### June 2023+* General availability: Terraform support is now available for all cluster management operations. See the following pages for details: + * [Cluster management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_cluster) + * [Worker node configuration](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_node_configuration) + * [Coordinator / single node configuration](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_coordinator_configuration) + * [Postgres role management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_role) + * [Public access: Firewall rule management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_firewall_rule) + * [Private access: Private endpoint management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_endpoint) + * [Private access: Private Link service management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_link_service) * General availability: 99.99% monthly availability [Service Level Agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services). ### June 2023 * General availability: Customer-defined database name is now available in [all regions](./resources-regions.md) at [cluster provisioning](./quickstart-create-portal.md) time.- * If the database name is not specified, the default `citus` name is used. + * If the database name isn't specified, the default `citus` name is used. * General availability: [Managed PgBouncer settings](./reference-parameters.md#managed-pgbouncer-parameters) are now configurable on all clusters. * Learn more about [connection pooling](./concepts-connection-pool.md). * General availability: Preferred availability zone (AZ) selection is now enabled in [all Azure Cosmos DB for PostgreSQL regions](./resources-regions.md) that support AZs. |
cosmos-db | Social Media Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md | Now that I got you hooked, you'll probably think you need some PhD in math sci To achieve any of these Machine Learning scenarios, you can use [Azure Data Lake](https://azure.microsoft.com/services/data-lake-store/) to ingest the information from different sources. You can also use [U-SQL](https://azure.microsoft.com/documentation/videos/data-lake-u-sql-query-execution/) to process the information and generate an output that can be processed by Azure Machine Learning. -Another available option is to use [Azure Cognitive Services](https://www.microsoft.com/cognitive-services) to analyze your users content; not only can you understand them better (through analyzing what they write with [Text Analytics API](https://www.microsoft.com/cognitive-services/en-us/text-analytics-api)), but you could also detect unwanted or mature content and act accordingly with [Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api). Cognitive Services includes many out-of-the-box solutions that don't require any kind of Machine Learning knowledge to use. +Another available option is to use [Azure AI services](https://www.microsoft.com/cognitive-services) to analyze your users content; not only can you understand them better (through analyzing what they write with [Text Analytics API](https://www.microsoft.com/cognitive-services/en-us/text-analytics-api)), but you could also detect unwanted or mature content and act accordingly with [Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api). Azure AI services includes many out-of-the-box solutions that don't require any kind of Machine Learning knowledge to use. ## A planet-scale social experience |
cost-management-billing | Ea Portal Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md | Title: Azure EA portal administration description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 07/05/2023 Last updated : 07/28/2023 To add another account, select **Add Another Account**, or select **Add** at the To confirm account ownership: 1. Sign in to the Azure Enterprise portal.-1. View the status. -- The status should change from **Pending** to **Start/End date**. The Start/End date is the date the user first signed in and the agreement end date. +1. View the status. + The status changes from **Pending** to **Active**. When Active, dates shown under the **Start/End Date** column are the start and end dates of the agreement. 1. When the **Warning** message pops up, the account owner needs to select **Continue** to activate the account the first time they sign in to the Azure Enterprise portal. ## Add an account from another Azure AD tenant |
cost-management-billing | Link Partner Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md | Title: Link a partner ID to your account that's used to manage customers description: Track engagements with Azure customers by linking a partner ID to the user account that you use to manage the customer's resources.--++ Previously updated : 07/06/2023 Last updated : 07/27/2023 |
cost-management-billing | Understand Vm Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-vm-reservation-charges.md | Title: Understand Azure Reserved VM Instances discount description: Learn how Azure Reserved VM Instance discount is applied to running virtual machines. -+ Previously updated : 10/03/2022 Last updated : 07/27/2023 A reservation discount applies to the base VMs that you purchase from the Azure For SQL Database reserved capacity, see [Understand Azure Reserved Instances discount](../reservations/understand-reservation-charges.md). +>[!NOTE] +> Azure doesn't offer reservations for Spot VMs. + The following table illustrates the costs for your virtual machine after you purchase a Reserved VM Instance. In all cases, you're charged for storage and networking at the normal rates. | Virtual Machine Type | Charges with Reserved VM Instance | |
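To make the billing split described above concrete, here is a minimal Python sketch (with entirely hypothetical rates, and ignoring the reservation's own purchase price) showing that the discount covers only the base VM compute meter while storage and networking stay at normal pay-as-you-go rates:

```python
# Hypothetical rates for illustration only; real rates come from the Azure pricing page.
HOURS_PER_MONTH = 730

def monthly_vm_cost(base_compute_hourly_rate: float, reserved: bool,
                    storage_cost: float, networking_cost: float) -> float:
    """The reservation covers the base VM compute meter; storage/networking are billed as usual."""
    compute = 0.0 if reserved else base_compute_hourly_rate * HOURS_PER_MONTH
    return compute + storage_cost + networking_cost

# Example: a VM with a $0.10/hour base compute rate, $20 of disks, and $5 of networking per month.
print(monthly_vm_cost(0.10, reserved=False, storage_cost=20, networking_cost=5))  # pay-as-you-go
print(monthly_vm_cost(0.10, reserved=True, storage_cost=20, networking_cost=5))   # compute covered by the reservation
```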
cost-management-billing | View Purchase Refunds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-purchase-refunds.md | ms.reviewer: nitinarora Previously updated : 12/06/2022 Last updated : 07/28/2023 Enterprise Agreement and Microsoft Customer Agreement billing readers can view a An Enterprise enrollment or Microsoft Customer Agreement billing administrator can view reservation transactions in Cost Management and Billing. +To view the corresponding refunds for reservation transactions, select a **Timespan** that includes the purchase refund dates. You might have to select **Custom** under the **Timespan** list option. + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **Cost Management + Billing**. 1. Select **Reservation transactions**. |
cost-management-billing | Analyze Unexpected Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md | Whether you know if you have any existing cost anomalies or not, Cost analysis i ### View anomalies in Cost analysis -Anomaly detection is available in Cost analysis (preview) when you select a subscription scope. You view your anomaly status as part of **Insights**. And as with [other insights](https://azure.microsoft.com/blog/azure-cost-management-and-billing-updates-february-2021/#insights), the experience is simple. +Anomaly detection is available in Cost analysis (preview) when you select a subscription scope. You can view your anomaly status as part of **[Insights](https://azure.microsoft.com/blog/azure-cost-management-and-billing-updates-february-2021/#insights)**. In the Azure portal, navigate to Cost Management from Azure Home. Select a subscription scope and then in the left menu, select **Cost analysis**. In the view list, select any view under **Preview views**. In the following example, the **Resources** preview view is selected. If you have a cost anomaly, you see an insight. |
data-factory | Connector Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md | To use service principal authentication, follow these steps to get a service pri 2. Grant the service principal the correct permissions in Azure Data Explorer. See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) for detailed information about roles and permissions and about managing permissions. In general, you must: - **As source**, grant at least the **Database viewer** role to your database- - **As sink**, grant at least the **Database ingestor** role to your database + - **As sink**, grant at least the **Database user** role to your database >[!NOTE] >When you use the UI to author, by default your login user account is used to list Azure Data Explorer clusters, databases, and tables. You can choose to list the objects using the service principal by clicking the dropdown next to the refresh button, or manually enter the name if you don't have permission for these operations. |
data-factory | Solution Template Pii Detection And Masking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md | Last updated 06/13/2023 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes a solution template that you can use to detect and mask PII data in your data flow with Azure Cognitive Services. +This article describes a solution template that you can use to detect and mask PII data in your data flow with Azure AI services. ## About this solution template -This template retrieves a dataset from Azure Data Lake Storage Gen2 source. Then, a request body is created with a derived column and an external call transformation calls Azure Cognitive Services and masks PII before loading to the destination sink. +This template retrieves a dataset from Azure Data Lake Storage Gen2 source. Then, a request body is created with a derived column and an external call transformation calls Azure AI services and masks PII before loading to the destination sink. The template contains one activity: - **Data flow** to detect and mask PII data This template defines 3 parameters: ## Prerequisites -* Azure Cognitive Services Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics)) +* Azure AI services resource endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics)) ## How to use this solution template This template defines 3 parameters: :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-12.png" alt-text="Screenshot of the template set up page with a fly-out open to create a new linked service connection to a data source."::: -3. Use the drop down to create a **New** connection to your Cognitive Services resource or choose an existing connection. You will need an endpoint URL and resource key to create this connection. +3. Use the drop down to create a **New** connection to your Azure AI services resource or choose an existing connection. You will need an endpoint URL and resource key to create this connection. - :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-2.png" alt-text="Screenshot of the template set up page to create a new connection or select an existing connection to Cognitive Services from a drop down menu."::: + :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-2.png" alt-text="Screenshot of the template set up page to create a new connection or select an existing connection to Azure AI services from a drop down menu."::: Clicking **New** will require you to create a new linked service connection. Make sure to enter your resource's endpoint URL and the resource key under the Auth header **Ocp-Apim-Subscription-Key**. - :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-13.png" alt-text="Screenshot of the template set up page with a fly-out open to create a new linked service connection to Cognitive Services."::: + :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-13.png" alt-text="Screenshot of the template set up page with a fly-out open to create a new linked service connection to Azure AI services."::: 4. 
Select **Use this template** to create the pipeline. This template defines 3 parameters: - [What's New in Azure Data Factory](whats-new.md) - [Introduction to Azure Data Factory](introduction.md)----- |
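As an illustration of what the template's external call transformation does, here is a minimal Python sketch that calls a Text Analytics PII-recognition REST endpoint directly, passing the resource key in the `Ocp-Apim-Subscription-Key` header mentioned above. The endpoint path and API version (`v3.1`) are assumptions and may differ for your Azure AI services resource; in the template itself the call is made from the data flow, not from client code.

```python
import requests

# Assumptions: replace with your own Azure AI services (Language) resource values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"   # hypothetical resource name
key = "<your-resource-key>"

# Path and api version are an assumption (Text Analytics v3.1 PII recognition).
url = f"{endpoint}/text/analytics/v3.1/entities/recognition/pii"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"documents": [{"id": "1", "language": "en",
                       "text": "Call Sara at 555-0100 or mail sara@contoso.com"}]}

response = requests.post(url, headers=headers, json=body, timeout=30)
response.raise_for_status()
for doc in response.json().get("documents", []):
    # redactedText contains the input with the detected PII masked out.
    print(doc.get("redactedText"))
```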
databox-online | Azure Stack Edge Gpu Remote Support Diagnostics Repair | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-remote-support-diagnostics-repair.md | -> Remote support is in public preview and applies to Azure Stack Edge version 2110 or later. +> Remote support applies to Azure Stack Edge version 2110 or later. On your Azure Stack Edge device, you can enable remote support to allow Microsoft Engineers to diagnose and remediate issues by accessing your device remotely. When you enable this feature, you provide consent for the level of access and the duration of access. |
databox | Data Box Troubleshoot Time Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-time-sync.md | |
ddos-protection | Ddos Pricing Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-pricing-guide.md | + + Title: Compare pricing between Azure DDoS Protection tiers +description: Learn about Azure DDoS Protection pricing and compare pricing between Azure DDoS Protection tiers. +++++ Last updated : 07/19/2023+++++# Compare pricing between Azure DDoS Protection tiers ++Azure DDoS Protection has two tiers: Network Protection and IP Protection. The Network Protection tier is available for resources deployed in virtual networks that are enabled for DDoS Protection. The IP Protection tier is available for public IP addresses that are enabled for DDoS Protection. We recommend a cost analysis to understand the pricing differences between the tiers. In this article, we show you how to evaluate cost for your environment. ++++## Cost assessment ++Network Protection cost begins once the DDoS protection plan is created. IP Protection cost begins once the Public IP address is configured with IP Protection, and its associated virtual network isn't protected by a DDoS protection plan. +For more information, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). ++When IP Protection is enabled for a public IP resource and a DDoS protection plan is created and enabled on its virtual network, customers are billed for the lower *per Public IP resource* rate. In this case, we'll automatically start billing for Network Protection. +## Example scenarios ++For this section, we use the following pricing information: ++| Network Protection | IP Protection | +||| +| $29.5 per resource. | $199 per resource. | ++> [!NOTE] +> Prices shown in this article are examples and are for illustration purposes only. For pricing information according to your region, see the [Pricing page](https://azure.microsoft.com/pricing/details/ddos-protection/) ++### Scenario: A virtual network with 10 Public IP addresses ++In this example, we compare the cost of Network Protection and IP Protection for a virtual network with 10 Public IP addresses. ++#### Network Protection ++Let's assume you have only one subscription in your tenant. If you create a Network Protection plan, the plan includes protection for 100 IP addresses. That subscription is billed for $2944 USD per month (29.5 USD x 100 resources). To learn more about different scenarios within DDoS Network Protection, see [Pricing examples](https://azure.microsoft.com/pricing/details/ddos-protection/#pricing). ++#### IP Protection ++Let's take this same scenario and assume you have 10 Public IP addresses. If you enable IP Protection for each Public IP address, you're billed for $1990 USD per month (199 USD x 10 resources). ++Under this scenario, it's more cost-effective to enable IP Protection for each Public IP address. For environments with more than 15 Public IP addresses, it's more cost-effective to create a Network Protection plan. To calculate your unique pricing scenarios, see the [pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=ddos-protection). ++> [!NOTE] +> Network Protection includes value-added benefits such as DDoS Rapid Protection, WAF Discount, and Cost Protection. For more information, see [Azure DDoS Protection SKU Comparison](ddos-protection-sku-comparison.md). ++## Next steps ++- Learn more about [reference architectures](ddos-protection-reference-architectures.md). |
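To reproduce the comparison above for any number of public IP addresses, here is a minimal Python sketch that uses the example rates from this article ($29.5 x 100 resources for a Network Protection plan versus $199 per protected IP for IP Protection); the figures are illustrative only, so substitute current prices from the pricing page:

```python
# Example rates from the scenario above (illustrative only).
NETWORK_PROTECTION_MONTHLY = 29.5 * 100      # flat plan charge covering 100 resources
IP_PROTECTION_PER_IP_MONTHLY = 199.0

def cheaper_tier(public_ip_count: int) -> str:
    """Return the lower-cost tier for a given number of protected public IP addresses."""
    ip_protection_total = IP_PROTECTION_PER_IP_MONTHLY * public_ip_count
    if ip_protection_total < NETWORK_PROTECTION_MONTHLY:
        return f"IP Protection (${ip_protection_total:,.2f}/month)"
    return f"Network Protection (${NETWORK_PROTECTION_MONTHLY:,.2f}/month)"

for count in (5, 10, 15, 20):
    print(count, "public IPs ->", cheaper_tier(count))
```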
ddos-protection | Ddos Protection Sku Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md | Title: 'About Azure DDoS Protection SKU Comparison' -description: Learn about the available SKUs for Azure DDoS Protection. + Title: 'About Azure DDoS Protection tier Comparison' +description: Learn about the available tiers for Azure DDoS Protection. -# About Azure DDoS Protection SKU Comparison +# About Azure DDoS Protection tier Comparison The sections in this article discuss the resources and settings of Azure DDoS Protection. Azure DDoS Network Protection, combined with application design best practices, DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added -## SKUs +## Tiers -Azure DDoS Protection supports two SKU Types, DDoS IP Protection and DDoS Network Protection. The SKU is configured in the Azure portal during the workflow when you configure Azure DDoS Protection. +Azure DDoS Protection supports two tier Types, DDoS IP Protection and DDoS Network Protection. The tier is configured in the Azure portal during the workflow when you configure Azure DDoS Protection. -The following table shows features and corresponding SKUs. +The following table shows features and corresponding tiers. | Feature | DDoS IP Protection | DDoS Network Protection | |||| The following table shows features and corresponding SKUs. | Integration with Firewall Manager | Yes | Yes | | Microsoft Sentinel data connector and workbook | Yes | Yes | | Protection of resources across subscriptions in a tenant | Yes | Yes |-| Public IP Standard SKU protection | Yes | Yes | -| Public IP Basic SKU protection | No | Yes | +| Public IP Standard tier protection | Yes | Yes | +| Public IP Basic tier protection | No | Yes | | DDoS rapid response support | Not available | Yes | | Cost protection | Not available | Yes | | WAF discount | Not available | Yes | DDoS Network Protection and DDoS IP Protection have the following limitations: DDoS IP Protection is similar to Network Protection, but has the following additional limitation: -- Public IP Basic SKU protection isn't supported. +- Public IP Basic tier protection isn't supported. >[!Note] >Scenarios in which a single VM is running behind a public IP is supported, but not recommended. For more information, see [Fundamental best practices](./fundamental-best-practices.md#design-for-scalability). |
defender-for-cloud | Defender For Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md | With a simple agentless setup at scale, you can [enable Defender for Storage](.. |Aspect|Details| |-|:-|-|Release state:|General availability (GA)| -|Feature availability:|- Activity monitoring (security alerts) - General availability (GA)<br>- Malware Scanning – Preview<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview| -|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\*<br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* In the future, Malware Scanning will be priced at $0.15/GB of data ingested. Billing for Malware Scanning is not enabled during public preview and advanced notice will be given before billing starts.| +|Release state:|General Availability (GA)| +|Feature availability:|- Activity monitoring (security alerts) - General Availability (GA)<br>- Malware Scanning – Preview, **General Availability (GA) on September 1, 2023** <br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview| +|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Malware Scanning is offered for free during the public preview but will **start being billed on September 1, 2023, at $0.15/GB (USD) of data ingested.** Customers are encouraged to use the “Monthly capping” feature to define the cap on GB scanned per month per storage account and control costs using this feature.| | Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring | |Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. 
Read more about the required permissions.| |Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the [classic plan](/azure/defender-for-cloud/defender-for-storage-classic))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts| Defender for Storage continuously analyzes data and control plane logs from prot ### Malware Scanning (powered by Microsoft Defender Antivirus) +> [!NOTE] +> Malware Scanning is offered for free during public preview. **Billing will begin when generally available (GA) on September 1, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the “Monthly capping” feature to define the cap on GB scanned per storage account per month and control costs. + Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, applying Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale. This is a configurable feature in the new Defender for Storage plan that is priced per GB scanned. Learn more about [Malware Scanning](defender-for-storage-malware-scan.md). |
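To estimate what the plan costs for a single storage account under the rates quoted above ($10 per storage account per month, $0.1492 per additional 1 million transactions beyond 73 million, and, once billing starts, $0.15 per GB scanned by Malware Scanning), here is a minimal Python sketch; the rates are copied from this article, apply to commercial clouds, and should be checked against the pricing page before relying on them:

```python
def defender_for_storage_monthly_estimate(monthly_transactions: int, gb_scanned: float) -> float:
    """Rough per-storage-account estimate using the example rates quoted above."""
    base = 10.0                                              # $10 per storage account per month
    overage_millions = max(0, monthly_transactions - 73_000_000) / 1_000_000
    transaction_overage = overage_millions * 0.1492          # $0.1492 per extra 1M transactions
    malware_scanning = gb_scanned * 0.15                     # $0.15 per GB scanned (post-GA pricing)
    return base + transaction_overage + malware_scanning

# Example: 90M transactions and 500 GB of scanned uploads in a month.
print(f"${defender_for_storage_monthly_estimate(90_000_000, 500):.2f}")  # about $87.54
```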
defender-for-cloud | Defender For Storage Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md | +> [!NOTE] +> Malware Scanning is offered for free during public preview. **Billing will begin when generally available (GA) on September 1, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the “Monthly capping” feature to define the cap on GB scanned per storage account per month and control costs. + Malware Scanning in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale. |
defender-for-cloud | Defender For Storage Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md | After you [enable Microsoft Defender for Storage](../storage/common/azure-defend There are three main components to test: - Malware Scanning (if enabled)- - Sensitive data threat detection (if enabled)- - Activity monitoring +> [!TIP] +> **A hands-on lab to try out Malware Scanning in Defender for Storage** +> +> We recommend you try the [Ninja training instructions](https://aka.ms/DfStorage/NinjaTrainingLab) for detailed step-by-step instructions on how to test Malware Scanning end-to-end with setting up responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities. + ## Testing Malware Scanning Follow these steps to test Malware Scanning after enabling the feature: Learn more about: - [Threat response](defender-for-storage-threats-alerts.md) - [Customizing data sensitivity settings](defender-for-storage-data-sensitivity.md) - [Threat detection and alerts](defender-for-storage-threats-alerts.md)+++ |
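If you'd rather script the malware-scan test than follow the portal steps, one common approach (an assumption here, not something this article prescribes) is to upload the industry-standard EICAR test string to a container in a protected storage account and then watch for the resulting security alert. A minimal Python sketch using the `azure-storage-blob` package:

```python
from azure.storage.blob import BlobClient

# Assumptions: the connection string, container, and blob name are placeholders for your environment.
conn_str = "<storage-account-connection-string>"
blob = BlobClient.from_connection_string(conn_str, container_name="malware-scan-test",
                                         blob_name="eicar.txt")

# The EICAR test string is a harmless, industry-standard antivirus test signature.
eicar = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
blob.upload_blob(eicar, overwrite=True)
print("Uploaded EICAR test blob; check Defender for Cloud security alerts for the scan result.")
```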
defender-for-cloud | Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md | The recommendations listed below are being moved to the **Implement security bes - Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest - Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)-- Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)+- Azure AI services accounts should enable data encryption with a customer-managed key (CMK) - Container registries should be encrypted with a customer-managed key (CMK) - SQL managed instances should use customer-managed keys to encrypt data at rest - SQL servers should use customer-managed keys to encrypt data at rest To increase the coverage of this benchmark, the following 35 preview recommendat | Security control | New recommendations | |--|--|-| Enable encryption at rest | - Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest<br>- Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)<br>- Bring your own key data protection should be enabled for MySQL servers<br>- Bring your own key data protection should be enabled for PostgreSQL servers<br>- Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)<br>- Container registries should be encrypted with a customer-managed key (CMK)<br>- SQL managed instances should use customer-managed keys to encrypt data at rest<br>- SQL servers should use customer-managed keys to encrypt data at rest<br>- Storage accounts should use customer-managed key (CMK) for encryption | +| Enable encryption at rest | - Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest<br>- Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)<br>- Bring your own key data protection should be enabled for MySQL servers<br>- Bring your own key data protection should be enabled for PostgreSQL servers<br>- Azure AI services accounts should enable data encryption with a customer-managed key (CMK)<br>- Container registries should be encrypted with a customer-managed key (CMK)<br>- SQL managed instances should use customer-managed keys to encrypt data at rest<br>- SQL servers should use customer-managed keys to encrypt data at rest<br>- Storage accounts should use customer-managed key (CMK) for encryption | | Implement security best practices | - Subscriptions should have a contact email address for security issues<br> - Auto provisioning of the Log Analytics agent should be enabled on your subscription<br> - Email notification for high severity alerts should be enabled<br> - Email notification to subscription owner for high severity alerts should be enabled<br> - Key vaults should have purge protection enabled<br> - Key vaults should have soft delete enabled | | Manage access and permissions | - Function apps should have 'Client Certificates (Incoming client certificates)' enabled | | Protect applications against DDoS attacks | - Web Application Firewall (WAF) should be enabled for Application Gateway<br> - Web Application Firewall (WAF) should be enabled for Azure Front Door Service service | |
dev-box | Concept Dev Box Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md | The virtual network specified in a network connection also determines the region ## Dev box pool -A dev box pool is a collection of dev boxes that you manage together and to which you apply similar settings. You can create multiple dev box pools to support the needs of hybrid teams that work in different regions or on different workloads. +A dev box pool is a collection of dev boxes that you manage together and to which you apply similar settings. You can create multiple dev box pools to support the needs of hybrid teams that work in different regions or on different workloads. |
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | The image version must meet the following requirements: - Windows 10 Enterprise version 20H2 or later. - Windows 11 Enterprise 21H2 or later. - Generalized VM image.- - You must create the image using these three sysprep options: `/mode:vm flag: Sysprep /generalize /oobe /mode:vm`. </br> + - You must create the image using these three sysprep options: `/generalize /oobe /mode:vm`. </br> For more information, see: [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true). - To speed up the Dev Box creation time, you can disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. </br> For more information, see: [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true). |
digital-twins | Concepts Data Ingress Egress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md | Once the data has been historized, you can query this data in Azure Data Explore You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. This can be useful in many scenarios. Here are two examples: * Combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).-* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](../ai-services/anomaly-detector/overview.md), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time. +* Integrate with the Azure AI and Azure AI services [Multivariate Anomaly Detector](../ai-services/anomaly-detector/overview.md), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time. ### Security and delivery details Learn more about endpoints and routing events to external * [Endpoints and event routes](concepts-route-events.md) See how to set up Azure Digital Twins to ingest data from IoT Hub:-* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md) +* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md) |
energy-data-services | Tutorial Wellbore Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-wellbore-ddms.md | The first step is to get the following information from your [Azure Data Manager | Parameter | Value | Example | | | |-- |-| CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | -| CLIENT_SECRET | Client secrets | _fl****************** | -| TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | -| SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | -| base_uri | URI | `<instance>.energy.azure.com` | -| data-partition-id | Data Partition(s) | `<instance>-<data-partition-name>` | +| client_id | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | +| client_secret | Client secrets | _fl****************** | +| tenant_id | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | +| base_url | URL | `https://<instance>.energy.azure.com` | +| data-partition-id | Data Partition(s) | `<data-partition-name>` | You'll use this information later in the tutorial. Next, set up Postman: To import the files: - 1. Create two JSON files on your computer by copying the data that's in the collection and environment files. + 1. Select **Import** in Postman. - 1. In Postman, select **Import** > **Files** > **Choose Files**, and then select the two JSON files on your computer. + :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-button.png" alt-text="Screenshot that shows the import button in Postman." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png"::: - 1. In **Import Entities** in Postman, select **Import**. + 1. Paste the URL of each file into the search box. - :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png"::: + :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-search.png" alt-text="Screenshot that shows importing collection and environment files in Postman via URL." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png"::: 1. In the Postman environment, update **CURRENT VALUE** with the information from your Azure Data Manager for Energy instance details The Postman collection for Wellbore DDMS contains requests you can use to intera :::image type="content" source="media/tutorial-wellbore-ddms/postman-test-failure.png" alt-text="Screenshot that shows failure for a Postman call." lightbox="media/tutorial-wellbore-ddms/postman-test-failure.png"::: -## Generate a token to use in APIs --To generate a token: --1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy instance. -- ```bash - curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ - --header 'Content-Type: application/x-www-form-urlencoded' \ - --data-urlencode 'grant_type=client_credentials' \ - --data-urlencode 'client_id={{CLIENT_ID}}' \ - --data-urlencode 'client_secret={{CLIENT_SECRET}}' \ - --data-urlencode 'scope={{SCOPE}}' - ``` -- :::image type="content" source="media/tutorial-wellbore-ddms/postman-generate-token.png" alt-text="Screenshot of the Wellbore DDMs generate token cURL code." lightbox="media/tutorial-wellbore-ddms/postman-generate-token.png"::: --1. 
Use the token output to update `access_token` in your Wellbore DDMS environment. Then, you can use the bearer token as an authorization type in other API calls. - ## Use Wellbore DDMS APIs to work with well data records Successfully completing the Postman requests that are described in the following Wellbore DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy instance. For more information, see [Manage legal tags](how-to-manage-legal-tags.md). Create a well record in your Azure Data Manager for Energy instance. -API: **Well** > **Create Well**. +API: **Well** > **Create Well** Method: POST Method: POST Get the well record data for your Azure Data Manager for Energy instance. -API: **Well** > **Well** +API: **Well** > **Well by ID** Method: GET ### Get well versions Get the versions of each ingested well record in your Azure Data Manager for Energy instance. -API: **Well** > **Well versions** +API: **Well** > **Well Versions** Method: GET Method: GET Get the details of a specific version for a specific well record in your Azure Data Manager for Energy instance. -API: **Well** > **Well Specific version** +API: **Well** > **Well Specific Version** Method: GET |
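Since the cURL snippet for generating a bearer token was removed from the tutorial in this change, the following minimal Python sketch shows the equivalent client-credentials token request against the Azure AD token endpoint and how the token and `data-partition-id` header would be reused on later Wellbore DDMS calls. The values come from the instance details table above; the well endpoint at the end is a hypothetical placeholder, so use the request paths from the imported Postman collection.

```python
import requests

# Values from the instance details table above.
tenant_id = "<tenant_id>"
client_id = "<client_id>"
client_secret = "<client_secret>"
scope = "<scope>"                       # as listed in the table for your instance
base_url = "https://<instance>.energy.azure.com"
data_partition_id = "<data-partition-name>"

# Standard OAuth 2.0 client-credentials flow (the same request the removed cURL snippet made).
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    },
    timeout=30,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Reuse the token and data-partition-id header for subsequent Wellbore DDMS calls.
session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {access_token}",
    "data-partition-id": data_partition_id,
})
# Example (hypothetical path; use the request paths from the imported Postman collection):
# response = session.get(f"{base_url}/api/os-wellbore-ddms/ddms/v3/wells/<well-id>")
```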
event-grid | Communication Services Email Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-email-events.md | This section contains an example of what that data would look like for each even "eventType": "Microsoft.Communication.EmailDeliveryReportReceived", "dataVersion": "1.0", "metadataVersion": "1",- "eventTime": "2020-09-18T00:22:20+00:00" + "eventTime": "2020-09-18T00:22:20.822Z" }] ``` This section contains an example of what that data would look like for each even "eventType": "Microsoft.Communication.EmailEngagementTrackingReportReceived", "dataVersion": "1.0", "metadataVersion": "1",- "eventTime": "2022-09-06T22:34:52.1303612+00:00" + "eventTime": "2022-09-06T22:34:52.688Z" }] ``` |
event-grid | Event Grid Dotnet Get Started Pull Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-grid-dotnet-get-started-pull-delivery.md | + + Title: Quickstart - Use Event Grid pull delivery from .NET app +description: This quickstart shows you how to send messages to and receive messages from Azure Event Grid namespace topics using the .NET programming language. ++++ Last updated : 07/26/2023++++# Quickstart: Send and receive messages from an Azure Event Grid namespace topic (.NET) - (Preview) ++In this quickstart, you'll do the following steps: ++1. Create an Event Grid namespace, using the Azure portal. +2. Create an Event Grid namespace topic, using the Azure portal. +3. Create an event subscription, using the Azure portal. +4. Write a .NET console application to send a set of messages to the topic +5. Write a .NET console application to receive those messages from the topic. +++>[!Important] +> Namespaces, namespace topics, and event subscriptions associated to namespace topics are initially available in the following regions: +> +>- East US +>- Central US +>- South Central US +>- West US 2 +>- East Asia +>- Southeast Asia +>- North Europe +>- West Europe +>- UAE North ++> [!NOTE] +> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to an Event Grid Namespace Topic and then receiving them. For an overview of the .NET client library, see [Azure Event Grid client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.EventGrid_4.17.0-beta.1/sdk/eventgrid/Azure.Messaging.EventGridV2/src/Generated/EventGridClient.cs). For more samples, see [Event Grid .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/feature/eventgrid/namespaces/sdk/eventgrid/Azure.Messaging.EventGrid/samples). ++## Prerequisites ++If you're new to the service, see [Event Grid overview](overview.md) before you do this quickstart. ++- **Azure subscription**. To use Azure services, including Azure Event Grid, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet). +- **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. To use the latest syntax, we recommend that you install .NET 6.0, or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects. ++++++## Launch Visual Studio ++You can authorize access to the Event Grid namespace using the following steps: ++Launch Visual Studio. If you see the **Get started** window, select the **Continue without code** link in the right pane. +++## Send messages to the topic ++This section shows you how to create a .NET console application to send messages to an Event Grid topic. ++### Create a console application ++1. In Visual Studio, select **File** -> **New** -> **Project** menu. +2. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**. + 1. Select **C#** for the programming language. + 1. Select **Console** for the type of the application. + 1. Select **Console App** from the results list. + 1. Then, select **Next**. 
++ :::image type="content" source="./media/event-grid-dotnet-get-started-events/new-send-project.png" alt-text="Screenshot showing the Create a new project dialog box with C# and Console selected."::: +3. Enter **EventSender** for the project name, **EventGridQuickStart** for the solution name, and then select **Next**. ++ :::image type="content" source="./media/event-grid-dotnet-get-started-events/event-sender.png" alt-text="Screenshot showing the solution and project names in the Configure your new project dialog box."::: +4. On the **Additional information** page, select **Create** to create the solution and the project. ++### Add the NuGet packages to the project ++1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. +2. Run the following command to install the **Azure.Messaging.EventGrid** NuGet package: ++ ```powershell + Install-Package Azure.Messaging.EventGrid -Version 4.17.0-beta.1 + ``` +++## Add code to send event to the namespace topic ++1. Replace the contents of `Program.cs` with the following code. The important steps are outlined below, with additional information in the code comments. ++ > [!IMPORTANT] + > Update placeholder values (`<ENDPOINT>` , `<TOPIC-NAME>` and `<TOPIC-KEY>`) in the code snippet with names of your topic endpoint , topic name and topic key. +++ ```csharp + using Azure.Messaging.EventGrid.Namespaces; ++ // TODO: Replace the <ENDPOINT> , <TOPIC-KEY> and <TOPIC-NAME> placeholder + + var topicEndpoint = "https://namespace01.eastus-1.eventgrid.azure.net"; // Replace with the url of your event grid namespace. + var topicKey = "Enter the Topic Access Key"; + var topicName = "Enter the Topic Name"; + var subscription = "Enter the event grid subscription name"; ++ // Construct the client using an Endpoint for a namespace as well as the access key + var client = new EventGridClient(new Uri(topicEndpoint), new AzureKeyCredential(topicKey)); + + // Publish a single CloudEvent using a custom TestModel for the event data. + var @ev = new CloudEvent("employee_source", "type", new TestModel { Name = "Bob", Age = 18 }); + await client.PublishCloudEventAsync(topicName, ev); + + // Publish a batch of CloudEvents. + + public class TestModel + { + public string Name { get; set; } + public int Age { get; set; } + } + + await client.PublishCloudEventsAsync( + topicName, + new[] { + new CloudEvent("employee_source", "type", new TestModel { Name = "Tom", Age = 55 }), + new CloudEvent("employee_source", "type", new TestModel { Name = "Alice", Age = 25 })}); + + Console.WriteLine("An event has been published to the topic. Press any key to end the application."); + Console.ReadKey(); + ``` ++ ++2. Build the project, and ensure that there are no errors. +3. Run the program and wait for the confirmation message. ++ ```bash + An event has been published to the topic. Press any key to end the application. + ``` ++ > [!IMPORTANT] + > In most cases, it will take a minute or two for the role assignment to propagate in Azure. In rare cases, it may take up to **eight minutes**. If you receive authentication errors when you first run your code, wait a few moments and try again. ++4. In the Azure portal, follow these steps: + 1. Navigate to your Event Grid namespace. + 1. On the **Overview** page, select the queue in the middle pane. ++ :::image type="content" source="./media/event-grid-dotnet-get-started-events/event-grid-namespace-metrics.png" alt-text="Screenshot showing the Event Grid Namespace page in the Azure portal." 
lightbox="./media/event-grid-dotnet-get-started-events/event-grid-namespace-metrics.png"::: ++ ++## Pull messages from the topic ++In this section, you create a .NET console application that receives messages from the topic. ++### Create a project to receive the published CloudEvents ++1. In the Solution Explorer window, right-click the **EventGridQuickStart** solution, point to **Add**, and select **New Project**. +1. Select **Console application**, and select **Next**. +1. Enter **EventReceiver** for the **Project name**, and select **Create**. +1. In the **Solution Explorer** window, right-click **EventReceiver**, and select **Set as a Startup Project**. ++### Add the NuGet packages to the project ++1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. +1. Run the following command to install the **Azure.Messaging.EventGrid** NuGet package: ++ ```powershell + Install-Package Azure.Messaging.EventGrid -Version 4.17.0-beta.1 + ``` ++ :::image type="content" source="./media/event-grid-dotnet-get-started-events/install-event-grid-package.png" alt-text="Screenshot showing EventReceiver project selected in the Package Manager Console."::: ++++++### Add the code to receive events from the topic ++In this section, you add code to retrieve messages from the queue. ++1. Within the `Program` class, add the following code: + + ```csharp + using System.Threading.Tasks; + using Azure; + using Azure.Messaging; + using Azure.Messaging.EventGrid.Namespaces; + + + var topicEndpoint = "https://namespace01.eastus-1.eventgrid.azure.net"; // Replace with the url of your event grid namespace. + var topicKey = "Enter the Topic Access Key"; + var topicName = "Enter the Topic Name"; + var subscription = "Enter the event grid subscription name"; ++ // Construct the client using an Endpoint for a namespace as well as the access key + var client = new EventGridClient(new Uri(topicEndpoint), new AzureKeyCredential(topicKey)); + + // Receive the published CloudEvents + ReceiveResult result = await client.ReceiveCloudEventsAsync(topicName, subscription); + + Console.WriteLine("Received Response"); + ``` ++1. Append the following methods to the end of the `Program` class. ++ ```csharp + // handle received messages. Define these variables on the top. 
++ var toRelease = new List<string>(); + var toAcknowledge = new List<string>(); + var toReject = new List<string>(); ++ // Iterate through the results and collect the lock tokens for events we want to release/acknowledge/result ++ foreach (ReceiveDetails detail in result.Value) + { + CloudEvent @event = detail.Event; + BrokerProperties brokerProperties = detail.BrokerProperties; + Console.WriteLine(@event.Data.ToString()); ++ // The lock token is used to acknowledge, reject or release the event + Console.WriteLine(brokerProperties.LockToken); + + // If the event is from the "employee_source" and the name is "Bob", we are not able to acknowledge it yet, so we release it + if (@event.Source == "employee_source" && @event.Data.ToObjectFromJson<TestModel>().Name == "Bob") + { + toRelease.Add(brokerProperties.LockToken); + } + // acknowledge other employee_source events + else if (@event.Source == "employee_source") + { + toAcknowledge.Add(brokerProperties.LockToken); + } + // reject all other events + else + { + toReject.Add(brokerProperties.LockToken); + } + } ++ // Release/acknowledge/reject the events ++ if (toRelease.Count > 0) + { + ReleaseResult releaseResult = await client.ReleaseCloudEventsAsync(<TOPIC-NAME>, <EVENT-SUBSCRIPTION>, toRelease); ++ // Inspect the Release result + Console.WriteLine($"Failed count for Release: {releaseResult.FailedLockTokens.Count}"); + foreach (FailedLockToken failedLockToken in releaseResult.FailedLockTokens) + { + Console.WriteLine($"Lock Token: {failedLockToken.LockToken}"); + Console.WriteLine($"Error Code: {failedLockToken.ErrorCode}"); + Console.WriteLine($"Error Description: {failedLockToken.ErrorDescription}"); + } ++ Console.WriteLine($"Success count for Release: {releaseResult.SucceededLockTokens.Count}"); + foreach (string lockToken in releaseResult.SucceededLockTokens) + { + Console.WriteLine($"Lock Token: {lockToken}"); + } + } ++ if (toAcknowledge.Count > 0) + { + AcknowledgeResult acknowledgeResult = await client.AcknowledgeCloudEventsAsync(<TOPIC-NAME>, <EVENT-SUBSCRIPTION>, toAcknowledge); ++ // Inspect the Acknowledge result + Console.WriteLine($"Failed count for Acknowledge: {acknowledgeResult.FailedLockTokens.Count}"); + foreach (FailedLockToken failedLockToken in acknowledgeResult.FailedLockTokens) + { + Console.WriteLine($"Lock Token: {failedLockToken.LockToken}"); + Console.WriteLine($"Error Code: {failedLockToken.ErrorCode}"); + Console.WriteLine($"Error Description: {failedLockToken.ErrorDescription}"); + } ++ Console.WriteLine($"Success count for Acknowledge: {acknowledgeResult.SucceededLockTokens.Count}"); + foreach (string lockToken in acknowledgeResult.SucceededLockTokens) + { + Console.WriteLine($"Lock Token: {lockToken}"); + } + } ++ if (toReject.Count > 0) + { + RejectResult rejectResult = await client.RejectCloudEventsAsync(<TOPIC-NAME>, <EVENT-SUBSCRIPTION>, toReject); ++ // Inspect the Reject result + Console.WriteLine($"Failed count for Reject: {rejectResult.FailedLockTokens.Count}"); + foreach (FailedLockToken failedLockToken in rejectResult.FailedLockTokens) + { + Console.WriteLine($"Lock Token: {failedLockToken.LockToken}"); + Console.WriteLine($"Error Code: {failedLockToken.ErrorCode}"); + Console.WriteLine($"Error Description: {failedLockToken.ErrorDescription}"); + } ++ Console.WriteLine($"Success count for Reject: {rejectResult.SucceededLockTokens.Count}"); + foreach (string lockToken in rejectResult.SucceededLockTokens) + { + Console.WriteLine($"Lock Token: {lockToken}"); + } + } ++ ``` ++## Clean up resources 
++Navigate to your Event Grid namespace in the Azure portal, and select **Delete** on the Azure portal to delete the Event Grid namespace and the topic in it. |
event-grid | Publish Events Using Namespace Topics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md | Title: Publish and consume events or messages using namespace topics (Preview) -description: Describes the steps to publish and consume events or messages using namespace topics. + Title: Publish and consume events using namespace topics (Preview) +description: This article provides step-by-step instructions to publish events to Azure Event Grid in the CloudEvents JSON format and consume those events by using the pull delivery model. -# Publish to namespace topics and consume events (Preview) +# Publish to namespace topics and consume events in Azure Event Grid (Preview) -This article describes the steps to publish and consume events using the [CloudEvents](https://github.com/cloudevents/spec) with [JSON format](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) using namespace topics and event subscriptions. +The article provides step-by-step instructions to publish events to Azure Event Grid in the [CloudEvents JSON format](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) and consume those events by using the pull delivery model. To be specific, you'll use Azure CLI and Curl to publish events to a namespace topic in Event Grid and pull those events from an event subscription to the namespace topic. For more information about the pull delivery model, see [Pull delivery overview](pull-delivery-overview.md). [!INCLUDE [pull-preview-note](./includes/pull-preview-note.md)] -Follow the steps in this article if you need to send application events to Event Grid so that they're received by consumer clients. Consumers connect to Event Grid to read the events ([pull delivery](pull-delivery-overview.md)). - >[!NOTE] > - Namespaces, namespace topics, and event subscriptions associated to namespace topics are initially available in the following regions: East US, Central US, South Central US, West US 2, East Asia, Southeast Asia, North Europe, West Europe, UAE North-> - The Azure [CLI Event Grid extension](/cli/azure/eventgrid) does not yet support namespaces and any of the resources it contains. We will use [Azure CLI resource](/cli/azure/resource) to create Event Grid resources. +> - The Azure [CLI Event Grid extension](/cli/azure/eventgrid) doesn't yet support namespaces and any of the resources it contains. We will use [Azure CLI resource](/cli/azure/resource) to create Event Grid resources. > - Azure Event Grid namespaces currently supports Shared Access Signatures (SAS) token and access keys authentication. [!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] Follow the steps in this article if you need to send application events to Event - This article requires version 2.0.70 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Create a resource group--The resource group is a logical collection into which Azure resources are deployed and managed. --Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. We use this resource group to contain all resources created in this article. +Create an Azure resource group with the [az group create](/cli/azure/group#az-group-create) command. You'll use this resource group to contain all resources created in this article. 
The general steps to use Cloud Shell to run commands are:+ - Select **Open Cloud Shell** to see an Azure Cloud Shell window on the right pane. - Copy the command and paste into the Azure Cloud Shell window. - Press ENTER to run the command. -First, set the name of the resource group on an environmental variable. +1. Declare a variable to hold the name of an Azure resource group. Specify a name for the resource group by replacing `<your-resource-group-name>` with a value you like. -```azurecli-interactive -resource_group=<your-resource-group-name> -``` + ```azurecli-interactive + resource_group="<your-resource-group-name>" + ``` +2. Create a resource group. Change the location as you see fit. -Create a resource group. Change the location as you see fit. -```azurecli-interactive -az group create --name $resource_group --location eastus -``` + ```azurecli-interactive + az group create --name $resource_group --location eastus + ``` [!INCLUDE [register-provider-cli.md](./includes/register-provider-cli.md)] ## Create a namespace An Event Grid namespace provides a user-defined endpoint to which you post your events. The following example creates a namespace in your resource group using Bash in Azure Cloud Shell. The namespace name must be unique because it's part of a DNS entry. A namespace name should meet the following rules:-- It should be between 3-50 characters-- It should be regionally unique-- Only allowed characters a-z, A-Z, 0-9 and -++- It should be between 3-50 characters. +- It should be regionally unique. +- Only allowed characters are a-z, A-Z, 0-9 and - - It shouldn't start with reserved key word prefixes like `Microsoft`, `System` or `EventGrid`. -Set the name you want to provide to your namespace on an environmental variable. +1. Declare a variable to hold the name for your Event Grid namespace. Specify a name for the namespace by replacing `<your-namespace-name>` with a value you like. -```azurecli-interactive -namespace=<your-namespace-name> -``` + ```azurecli-interactive + namespace="<your-namespace-name>" + ``` +2. Create a namespace. You may want to change the location where it's deployed. -Create a namespace. You may want to change the location where it's deployed. --```azurecli-interactive -az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --location centraluseuap --properties "{}" -``` + ```azurecli-interactive + az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --location eastus --properties "{}" + ``` ## Create a namespace topic -The topic created is used to hold all events published to the namespace endpoint. +Create a topic that's used to hold all events published to the namespace endpoint. -Set your topic name to a variable. -```azurecli-interactive -topic=<your-topic-name> -``` +1. Declare a variable to hold the name for your namespace topic. Specify a name for the namespace topic by replacing `<your-topic-name>` with a value you like. -Create your namespace topic: + ```azurecli-interactive + topic="<your-topic-name>" + ``` +2. 
Create your namespace topic: -```azurecli-interactive -az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type topics --name $topic --parent namespaces/$namespace --properties "{}" -``` + ```azurecli-interactive + az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type topics --name $topic --parent namespaces/$namespace --properties "{}" + ``` ## Create an event subscription Create an event subscription setting its delivery mode to *queue*, which supports [pull delivery](pull-delivery-overview.md#pull-delivery-1). For more information on all configuration options,see the latest Event Grid control plane [REST API](/rest/api/eventgrid). -Set the name of your event subscription on a variable: -```azurecli-interactive -event_subscription=<your-event-subscription-name> -``` +1. Declare a variable to hold the name for an event subscription to your namespace topic. Specify a name for the event subscription by replacing `<your-event-subscription-name>` with a value you like. ++ ```azurecli-interactive + event_subscription="<your-event-subscription-name>" + ``` +2. Create an event subscription to the namespace topic: -```azurecli-interactive -az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type eventsubscriptions --name $event_subscription --parent namespaces/$namespace/topics/$topic --properties "{ \"deliveryConfiguration\":{\"deliveryMode\":\"Queue\",\"queue\":{\"receiveLockDurationInSeconds\":300}} }" -``` + ```azurecli-interactive + az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type eventsubscriptions --name $event_subscription --parent namespaces/$namespace/topics/$topic --properties "{ \"deliveryConfiguration\":{\"deliveryMode\":\"Queue\",\"queue\":{\"receiveLockDurationInSeconds\":300}} }" + ``` ## Send events to your topic-Follow the steps in the coming sections for a simple way to test sending events to your topic. +Now, send a sample event to the namespace topic by following steps in this section. ### List namespace access keys -Get the access keys associated to the namespace created. You use one of them to authenticate when publishing events. In order to list your keys, you need the full namespace resource ID first. +1. Get the access keys associated with the namespace you created. You'll use one of them to authenticate when publishing events. To list your keys, you need the full namespace resource ID first. Get it by running the following command: -```azurecli-interactive -namespace_resource_id=$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "id" --output tsv) -``` + ```azurecli-interactive + namespace_resource_id=$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "id" --output tsv) + ``` +2. Get the first key from the namespace: -Get the first key from the namespace: + ```azurecli-interactive + key=$(az resource invoke-action --action listKeys --ids $namespace_resource_id --query "key1" --output tsv) + ``` -```azurecli-interactive -key=$(az resource invoke-action --action listKeys --ids $namespace_resource_id --query "key1" --output tsv) -``` ### Publish an event -Retrieve the namespace hostname. You use it to compose the namespace HTTP endpoint to which events are sent. 
Note that the following operations were first available with API version `2023-06-01-preview`. --```azurecli-interactive -publish_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview -``` --Create a sample CloudEvents-compliant event: +1. Retrieve the namespace hostname. You'll use it to compose the namespace HTTP endpoint to which events are sent. Note that the following operations were first available with API version `2023-06-01-preview`. -```azurecli-interactive -event=' { "specversion": "1.0", "id": "'"$RANDOM"'", "type": "com.yourcompany.order.ordercreatedV2", "source" : "/mycontext", "subject": "orders/O-234595", "time": "'`date +%Y-%m-%dT%H:%M:%SZ`'", "datacontenttype" : "application/json", "data":{ "orderId": "O-234595", "url": "https://yourcompany.com/orders/o-234595"}} ' -``` + ```azurecli-interactive + publish_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview + ``` +2. Create a sample [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) compliant event: -The `data` element is the payload of your event. Any well-formed JSON can go in this field. For more information on properties (also known as context attributes) that can go in an event, see the [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specifications. + ```azurecli-interactive + event=' { "specversion": "1.0", "id": "'"$RANDOM"'", "type": "com.yourcompany.order.ordercreatedV2", "source" : "/mycontext", "subject": "orders/O-234595", "time": "'`date +%Y-%m-%dT%H:%M:%SZ`'", "datacontenttype" : "application/json", "data":{ "orderId": "O-234595", "url": "https://yourcompany.com/orders/o-234595"}} ' + ``` -CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the topic. + The `data` element is the payload of your event. Any well-formed JSON can go in this field. For more information on properties (also known as context attributes) that can go in an event, see the [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specifications. +3. Use CURL to send the event to the topic. CURL is a utility that sends HTTP requests. -```azurecli-interactive -curl -X POST -H "Content-Type: application/cloudevents+json" -H "Authorization:SharedAccessKey $key" -d "$event" $publish_operation_uri -``` + ```azurecli-interactive + curl -X POST -H "Content-Type: application/cloudevents+json" -H "Authorization:SharedAccessKey $key" -d "$event" $publish_operation_uri + ``` ### Receive the event -You receive events from Event Grid using an endpoint that refers to an event subscription. Compose that endpoint with the following command: +You receive events from Event Grid using an endpoint that refers to an event subscription. -```azurecli-interactive -receive_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:receive?api-version=2023-06-01-preview -``` +1. 
Compose that endpoint by running the following command: -Submit a request to consume the event: + ```azurecli-interactive + receive_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:receive?api-version=2023-06-01-preview + ``` +2. Submit a request to consume the event: -```azurecli-interactive -curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" -d "$event" $receive_operation_uri -``` -### Acknowledge an event --After you receive an event, you pass that event to your application for processing. Once you have successfully processed your event, you no longer need that event to be in your event subscription. To instruct Event Grid to delete the event, you **acknowledge** it using its lock token that you got on the receive operation's response. In the previous step, you should have received a response that includes a `brokerProperties` object with a `lockToken` property. Copy the lock token value and set it on an environment variable: --```azurecli-interactive -lockToken=<paste-the-lock-token-here> -``` --Now, build the acknowledge operation payload, which specifies the lock token for the event you want to be acknowledged. + ```azurecli-interactive + curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" -d "$event" $receive_operation_uri + ``` -```azurecli-interactive -acknowledge_request_payload=' { "lockTokens": ["'$lockToken'"]} ' -``` +### Acknowledge an event -Proceed with building the string with the acknowledge operation URI: +After you receive an event, you pass that event to your application for processing. Once you have successfully processed your event, you no longer need that event to be in your event subscription. To instruct Event Grid to delete the event, you **acknowledge** it using its lock token that you got on the receive operation's response. -```azurecli-interactive -acknowledge_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:acknowledge?api-version=2023-06-01-preview -``` +1. In the previous step, you should have received a response that includes a `brokerProperties` object with a `lockToken` property. Copy the lock token value and set it on an environment variable: -Finally, submit a request to acknowledge the event received: + ```azurecli-interactive + lockToken="<paste-the-lock-token-here>" + ``` +2. Now, build the acknowledge operation payload, which specifies the lock token for the event you want to be acknowledged. -```azurecli-interactive -curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" -d "$acknowledge_request_payload" $acknowledge_operation_uri -``` + ```azurecli-interactive + acknowledge_request_payload=' { "lockTokens": ["'$lockToken'"]} ' + ``` +3. 
Proceed with building the string with the acknowledge operation URI: -If the acknowledge operation is executed before the lock token expires (300 seconds as set when we created the event subscription), you should see a response like the following example: + ```azurecli-interactive + acknowledge_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:acknowledge?api-version=2023-06-01-preview + ``` +4. Finally, submit a request to acknowledge the event received: -```json -{"succeededLockTokens":["CiYKJDQ4NjY5MDEyLTk1OTAtNDdENS1BODdCLUYyMDczNTYxNjcyMxISChDZae43pMpE8J8ovYMSQBZS"],"failedLockTokens":[]} -``` + ```azurecli-interactive + curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" -d "$acknowledge_request_payload" $acknowledge_operation_uri + ``` + + If the acknowledge operation is executed before the lock token expires (300 seconds as set when we created the event subscription), you should see a response like the following example: + + ```json + {"succeededLockTokens":["CiYKJDQ4NjY5MDEyLTk1OTAtNDdENS1BODdCLUYyMDczNTYxNjcyMxISChDZae43pMpE8J8ovYMSQBZS"],"failedLockTokens":[]} + ``` + +## Next steps +To learn more about pull delivery model, see [Pull delivery overview](pull-delivery-overview.md). |
event-hubs | Event Hubs Auto Inflate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-auto-inflate.md | Title: Automatically scale up throughput units in Azure Event Hubs description: Enable Auto-inflate on a namespace to automatically scale up throughput units (standard tier). Previously updated : 06/13/2022 Last updated : 07/28/2023 # Automatically scale up Azure Event Hubs throughput units (standard tier) -Azure Event Hubs is a highly scalable data streaming platform. As such, Event Hubs usage often increases after starting to use the service. Such usage requires increasing the predetermined [throughput units (TUs)](event-hubs-scalability.md#throughput-units) to scale Event Hubs and handle larger transfer rates. The **Auto-inflate** feature of Event Hubs automatically scales up by increasing the number of TUs, to meet usage needs. Increasing TUs prevents throttling scenarios, in which: -* Data ingress rates exceed set TUs -* Data egress request rates exceed set TUs +When you create a standard tier Event Hubs namespace, you specify the number of [throughput units (TUs)](event-hubs-scalability.md#throughput-units). These TUs may not be enough when the usage goes up later. When that happens, you could manually increase the number of TUs assigned to the namespace. However, it's better to have Event Hubs automatically increase (inflate) TUs based on the workload. -The Event Hubs service increases the throughput when load increases beyond the minimum threshold, without any requests failing with ServerBusy errors. +The **Auto-inflate** feature of Event Hubs automatically scales up by increasing the number of TUs, to meet usage needs. Increasing TUs prevents throttling scenarios where data ingress or data egress rates exceed the rates allowed by the TUs assigned to the namespace. The Event Hubs service increases the throughput when load increases beyond the minimum threshold, without any requests failing with ServerBusy errors. > [!NOTE] > The auto-inflate feature is currently supported only in the standard tier. ## How Auto-inflate works in standard tier+ Event Hubs traffic is controlled by TUs (standard tier). For the limits such as ingress and egress rates per TU, see [Event Hubs quotas and limits](event-hubs-quotas.md). Auto-inflate enables you to start small with the minimum required TUs you choose. The feature then scales automatically to the maximum limit of TUs you need, depending on the increase in your traffic. Auto-inflate provides the following benefits: - An efficient scaling mechanism to start small and scale up as you grow. Event Hubs traffic is controlled by TUs (standard tier). For the limits such as - More control over scaling, because you control when and how much to scale. > [!NOTE]-> Auto-inflate does not *automatically* scale down the number of TUs when ingress or egress rates drop below the limits. +> Auto-inflate doesn't **automatically scale down** the number of TUs when ingress or egress rates drop below the limits. ## Enable Auto-inflate on a namespace-You can enable or disable Auto-inflate on a standard tier Event Hubs namespace by using either [Azure portal](https://portal.azure.com) or an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-namespace-and-enable-inflate). -> [!NOTE] -> Basic tier Event Hubs namespaces do not support Auto-inflate. 
+You can enable or disable Auto-inflate on a standard tier Event Hubs namespace by using either [Azure portal](https://portal.azure.com) or an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-namespace-and-enable-inflate). ## Use Azure portal+ In the Azure portal, you can enable the feature when creating a standard Event Hubs namespace or after the namespace is created. You can also set TUs for the namespace and specify maximum limit of TUs You can enable the Auto-inflate feature **when creating an Event Hubs namespace**. The following image shows you how to enable the auto-inflate feature for a standard tier namespace and configure TUs to start with and the maximum number of TUs. With this option enabled, you can start small with your TUs and scale up as your usage needs increase. The upper limit for inflation doesn't immediately affect pricing, which depends on the number of TUs used per hour. You can enable the Auto-inflate feature during an Azure Resource Manager templat `isAutoInflateEnabled` property to **true** and set `maximumThroughputUnits` to 10. For example: ```json-"resources": [ +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "namespaceName": { + "defaultValue": "fabrikamehubns", + "type": "String" + } + }, + "variables": {}, + "resources": [ {- "apiVersion": "2017-04-01", + "type": "Microsoft.EventHub/namespaces", + "apiVersion": "2022-10-01-preview", "name": "[parameters('namespaceName')]",- "type": "Microsoft.EventHub/Namespaces", - "location": "[variables('location')]", + "location": "East US", "sku": { "name": "Standard",- "tier": "Standard" + "tier": "Standard", + "capacity": 1 }, "properties": {+ "minimumTlsVersion": "1.2", + "publicNetworkAccess": "Enabled", + "disableLocalAuth": false, + "zoneRedundant": true, "isAutoInflateEnabled": true,- "maximumThroughputUnits": 10 - }, - "resources": [ - { - "apiVersion": "2017-04-01", - "name": "[parameters('eventHubName')]", - "type": "EventHubs", - "dependsOn": [ - "[concat('Microsoft.EventHub/namespaces/', parameters('namespaceName'))]" - ], - "properties": {}, - "resources": [ - { - "apiVersion": "2017-04-01", - "name": "[parameters('consumerGroupName')]", - "type": "ConsumerGroups", - "dependsOn": [ - "[parameters('eventHubName')]" - ], - "properties": {} - } - ] - } - ] + "maximumThroughputUnits": 10, + "kafkaEnabled": true + } }- ] + ] +} ``` For the complete template, see the [Create Event Hubs namespace and enable inflate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-namespace-and-enable-inflate) template on GitHub. For the complete template, see the [Create Event Hubs namespace and enable infla ## Next steps -You can learn more about Event Hubs by visiting the following links: --* [Event Hubs overview](./event-hubs-about.md) +To learn more about Event Hubs, see [Event Hubs overview](./event-hubs-about.md) |
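In addition to the portal and Resource Manager template options covered in that entry, the same settings can be applied from the command line. The following is a minimal sketch, not taken from the article, assuming an existing standard tier namespace; the resource group and namespace names are placeholders, and the `--enable-auto-inflate` and `--maximum-throughput-units` parameters of `az eventhubs namespace update` are used.

```azurecli
# Enable Auto-inflate on an existing standard tier namespace and cap scaling at 10 TUs.
# <resource-group> and <namespace-name> are placeholder values.
az eventhubs namespace update \
    --resource-group <resource-group> \
    --name <namespace-name> \
    --enable-auto-inflate true \
    --maximum-throughput-units 10
```

Setting the maximum only defines the ceiling for inflation; as the entry notes, pricing still depends on the number of TUs actually used per hour.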
event-hubs | Event Hubs Get Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-get-connection-string.md | Title: Get connection string - Azure Event Hubs | Microsoft Docs + Title: Get connection string - Azure Event Hubs description: This article provides instructions for getting a connection string that clients can use to connect to Azure Event Hubs. Previously updated : 06/21/2022 Last updated : 07/28/2023 # Get an Event Hubs connection string To communicate with an event hub in a namespace, you need a connection string fo The connection string for a namespace has the following components embedded within it, -* Fully qualified domain name of the Event Hubs namespace you created (it includes the Event Hubs namespace name followed by servicebus.windows.net) +* Fully qualified domain name of the Event Hubs namespace you created (it includes the Event Hubs namespace name followed by `servicebus.windows.net`) * Name of the shared access key * Value of the shared access key This section gives you steps for getting a connection string to a specific event 1. On the **Event Hubs Namespace** page, select the event hub in the bottom pane. 1. On the **Event Hubs instance** page, select **Shared access policies** on the left menu. -1. There's no default policy created for an event hub. Create a policy with **Manage**, **Send, or **Listen** access. +1. There's no default policy created for an event hub. Create a policy with **Manage**, **Send**, or **Listen** access. 1. Select the policy from the list. 1. Select the **copy** button next to the **Connection string-primary key** field. |
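For reference, a namespace-level connection string assembled from the components listed in that entry takes the familiar form `Endpoint=sb://<namespace-name>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>` (an event hub-scoped string additionally carries `EntityPath=<event-hub-name>`). As an alternative to copying it from the portal, here is a hedged CLI sketch with placeholder names that reads the primary connection string of the default namespace-level `RootManageSharedAccessKey` policy.

```azurecli
# Print the primary connection string for the default namespace-level shared access policy.
# <resource-group> and <namespace-name> are placeholders for your own resources.
az eventhubs namespace authorization-rule keys list \
    --resource-group <resource-group> \
    --namespace-name <namespace-name> \
    --name RootManageSharedAccessKey \
    --query primaryConnectionString \
    --output tsv
```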
event-hubs | Event Hubs Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quotas.md | Title: Quotas and limits - Azure Event Hubs | Microsoft Docs description: This article provides limits and quotas for Azure Event Hubs. For example, number of namespaces per subscription, number of event hubs per namespace. Previously updated : 06/17/2022 Last updated : 07/28/2023 # Azure Event Hubs quotas and limits |
event-hubs | Schema Registry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-overview.md | Title: Use Azure Schema Registry from Apache Kafka and other apps description: This article provides an overview of Schema Registry support by Azure Event Hubs and how it can be used from your Apache Kafka and other apps. Previously updated : 05/04/2022 Last updated : 07/28/2023 In many event streaming and messaging scenarios, the event or message payload co An event producer uses a schema to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and deserialize it using the same schema. So, both producers and consumers can validate the integrity of the data with the same schema. ## What is Azure Schema Registry? **Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). With schema-driven serialization frameworks like Apache Avro, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. It's because each message doesn't need to have the metadata (type information and field names) as it's the case with tagged formats such as JSON. - > [!NOTE] +> [!NOTE] > The feature isn't available in the **basic** tier. Having schemas stored alongside the events and inside the eventing infrastructure ensures that the metadata that's required for serialization or deserialization is always in reach and schemas can't be misplaced. |
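Schema groups, the grouping construct mentioned in that entry, can also be created outside the portal. The sketch below is an assumption on my part rather than a step from the article: it presumes the `az eventhubs namespace schema-registry` command group is available in your Azure CLI version, and that the placeholder namespace is standard or premium tier (the feature isn't available in the basic tier).

```azurecli
# Create a schema group in an existing Event Hubs namespace (placeholder names throughout).
# --schema-type and --schema-compatibility govern how schemas inside the group may evolve.
az eventhubs namespace schema-registry create \
    --resource-group <resource-group> \
    --namespace-name <namespace-name> \
    --schema-group-name contososchemagroup \
    --schema-type Avro \
    --schema-compatibility Forward
```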
firewall | Logs And Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md | The following metrics are available for Azure Firewall: If your firewall is running into SNAT port exhaustion, you should add at least five public IP addresses. This increases the number of SNAT ports available. For more information, see [Azure Firewall features](features.md#multiple-public-ip-addresses). -- **AZFW Latency Probe (Preview)** - Estimates Azure Firewall average latency.+- **AZFW Latency Probe** - Estimates Azure Firewall average latency. Unit: ms |
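The SNAT remediation described in that entry (adding public IP addresses to the firewall) can be scripted. A minimal sketch, assuming a VNet-deployed firewall, the `azure-firewall` CLI extension, and an already-created standard public IP; every name below is a placeholder, not something from the article.

```azurecli
# Attach an additional public IP address to an existing Azure Firewall to gain more SNAT ports.
# Repeat with further public IPs (the entry recommends at least five in total) as needed.
az network firewall ip-config create \
    --resource-group <resource-group> \
    --firewall-name <firewall-name> \
    --name ipconfig-02 \
    --public-ip-address <existing-public-ip-name>
```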
global-secure-access | Concept Private Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-private-access.md | Title: Learn about Microsoft Entra Private Access -description: Learn Microsoft Entra Private Access works. +description: Learn about how Microsoft Entra Private Access secures access to your private corporate resources through the creation of Quick Access and Global Secure Access apps. Previously updated : 06/20/2023 Last updated : 07/27/2023 |
global-secure-access | Concept Remote Network Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-remote-network-connectivity.md | Title: Global Secure Access (preview) remote network connectivity -description: Learn about remote network connectivity in Global Secure Access (preview). +description: Learn how remote network connectivity in Global Secure Access (preview) allows users to connect to your corporate network from a remote location, such as a branch office. Previously updated : 06/01/2023 Last updated : 07/27/2023 |
global-secure-access | Concept Traffic Forwarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-traffic-forwarding.md | Title: Global Secure Access (preview) traffic forwarding profiles -description: Learn about the traffic forwarding profiles for Global Secure Access (preview). +description: Learn about how traffic forwarding profiles for Global Secure Access (preview) streamlines how you route traffic through your network. |
global-secure-access | Concept Universal Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-universal-conditional-access.md | Title: Learn about Universal Conditional Access through Global Secure Access -description: Conditional Access concepts. +description: Learn about how Microsoft Entra Internet Access and Microsoft Entra Private Access secure access to your resources through Conditional Access. Previously updated : 06/21/2023 Last updated : 07/27/2023 |
global-secure-access | How To Compliant Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-compliant-network.md | Title: Enable compliant network check with Conditional Access -description: Require known compliant network locations with Conditional Access. +description: Learn how to require known compliant network locations in order to connect to your secured resources with Conditional Access. Previously updated : 07/07/2023 Last updated : 07/27/2023 |
global-secure-access | How To Configure Per App Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-per-app-access.md | Title: How to configure Per-app Access using Global Secure Access applications -description: Learn how to configure Per-app Access using Global Secure Access applications for Microsoft Entra Private Access +description: Learn how to configure per-app access to your private, internal resources using Global Secure Access applications for Microsoft Entra Private Access. Previously updated : 07/18/2023 Last updated : 07/27/2023 |
global-secure-access | How To Configure Quick Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-quick-access.md | Title: How to configure Quick Access for Global Secure Access -description: Learn how to configure Quick Access for Microsoft Entra Private Access. +description: Learn how to specify the internal resources to secure with Microsoft Entra Private Access using a Quick Access app. Previously updated : 07/18/2023 Last updated : 07/27/2023 |
global-secure-access | How To Create Remote Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-create-remote-networks.md | Title: How to create remote networks for Global Secure Access (preview) -description: Learn how to create remote networks for Global Secure Access (preview). +description: Learn how to create remote networks, such as branch office locations, for Global Secure Access (preview). Previously updated : 06/29/2023 Last updated : 07/27/2023 |
global-secure-access | How To Get Started With Global Secure Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-get-started-with-global-secure-access.md | Title: Get started with Global Secure Access (preview) -description: Get started with Global Secure Access (preview) for Microsoft Entra. +description: Configure the main components of Microsoft Entra Internet Access and Microsoft Entra Private Access, which make up Global Secure Access, Microsoft's Security Service Edge solution. Previously updated : 07/11/2023 Last updated : 07/27/2023 |
global-secure-access | How To Install Windows Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-install-windows-client.md | Title: The Global Secure Access Client for Windows (preview) -description: Install the Global Secure Access Client for Windows to enable client connectivity. +description: Install the Global Secure Access Client for Windows to enable connectivity to Microsoft's Security Service Edge solution, Microsoft Entra Internet Access and Microsoft Entra Private Access. Previously updated : 06/23/2023 Last updated : 07/27/2023 |
global-secure-access | How To Source Ip Restoration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-source-ip-restoration.md | Title: Enable source IP restoration with the Global Secure Access preview -description: Learn how to enable source IP restoration to ensure source IP match in downstream resources. +description: Learn how to enable source IP restoration to ensure the source IP matches in downstream resources. Previously updated : 06/09/2023 Last updated : 07/27/2023 |
global-secure-access | How To Universal Tenant Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-universal-tenant-restrictions.md | Title: Global Secure Access (preview) and universal tenant restrictions -description: What are universal tenant restrictions +description: Learn about how Global Secure Access (preview) secures access to your corporate network by restricting access to external tenants. Previously updated : 06/09/2023 Last updated : 07/27/2023 |
global-secure-access | Overview What Is Global Secure Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/overview-what-is-global-secure-access.md | Title: What is Global Secure Access (preview)? -description: Learn how Global Secure Access (preview) provides control and visibility to users and devices both inside and outside of a traditional office. +description: Learn how Microsoft's Security Service Edge solution, Global Secure Access (preview), provides network access control and visibility to users and devices inside and outside a traditional office. Previously updated : 06/23/2023 Last updated : 07/27/2023 Microsoft Entra Internet Access and Microsoft Entra Private Access comprise Micr ![Diagram of the Global Secure Access solution, illustrating how identities and remote networks can connect to Microsoft 365, private, and public resources through the service.](media/overview-what-is-global-secure-access/global-secure-access-diagram.png) +## Global Secure Access is Microsoft's Security Service Edge solution + Microsoft Entra Internet Access and Microsoft Entra Private Access - coupled with Microsoft Defender for Cloud Apps, our SaaS-security focused Cloud Access Security Broker (CASB) - are uniquely built as a solution that converges network, identity, and endpoint access controls so you can secure access to any app or resource, from anywhere. With the addition of these Global Secure Access products, Microsoft Entra simplifies access policy management and enables access orchestration for employees, business partners, and digital workloads. You can continuously monitor and adjust user access in real time if permissions or risk level changes. The Global Secure Access features streamline the roll-out and management of the access control capabilities with a unified portal. These features are delivered from Microsoft's Wide Area Network, spanning 140+ countries and 190+ network edge locations. This private network, which is one of the largest in the world, enables organizations to optimally connect users and devices to public and private resources seamlessly and securely. For a list of the current points of presence, see [Global Secure Access points of presence article](reference-points-of-presence.md). |
governance | Nz Ism Restricted 3 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nz-ism-restricted-3-5.md | + + Title: Regulatory Compliance details for NZ ISM Restricted v3.5 +description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Last updated : 07/20/2023++++# Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative ++The following article details how the Azure Policy Regulatory Compliance built-in initiative +definition maps to **compliance domains** and **controls** in NZ ISM Restricted v3.5. +For more information about this compliance standard, see +[NZ ISM Restricted v3.5](https://www.nzism.gcsb.govt.nz/ism-document). To understand +_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and +[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md). ++The following mappings are to the **NZ ISM Restricted v3.5** controls. Many of the controls +are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete +initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. +Then, find and select the **New Zealand ISM Restricted v3.5** Regulatory Compliance built-in +initiative definition. ++> [!IMPORTANT] +> Each control below is associated with one or more [Azure Policy](../overview.md) definitions. +> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the +> control; however, there often is not a one-to-one or complete match between a control and one or +> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions +> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In +> addition, the compliance standard includes controls that aren't addressed by any Azure Policy +> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your +> overall compliance status. The associations between compliance domains, controls, and Azure Policy +> definitions for this compliance standard may change over time. To view the change history, see the +> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/NZ_ISM_Restricted_v3_5.json). ++## Access Control and Passwords ++### 16.4.30 Privileged Access Management ++**ID**: NZISM Security Benchmark AC-11 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | ++### 16.5.10 Authentication ++**ID**: NZISM Security Benchmark AC-13 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) | ++### 16.6.8 Logging Requirements ++**ID**: NZISM Security Benchmark AC-17 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) | ++### 16.6.9 Events to be logged ++**ID**: NZISM Security Benchmark AC-18 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) | +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | +|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) | +|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) | +|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) | +|[Log connections should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e442) |This policy helps audit any PostgreSQL databases in your environment without log_connections setting enabled. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogConnections_Audit.json) | +|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Azure Kubernetes Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F245fc9df-fa96-4414-9a0b-3738c2f7341c) |Azure Kubernetes Service's resource logs can help recreate activity trails when investigating security incidents. Enable it to make sure the logs will exist when needed |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/Kubernetes_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) | +|[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) | +|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server' auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) | ++### 16.6.12 Event log protection ++**ID**: NZISM Security Benchmark AC-19 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) | ++### 16.1.32 System User Identitfication ++**ID**: NZISM Security Benchmark AC-2 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | +|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | +|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ++### 16.1.35 Methods for system user identification and authentication ++**ID**: NZISM Security Benchmark AC-3 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | ++### 16.1.46 Suspension of access ++**ID**: NZISM Security Benchmark AC-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | ++### 16.3.5 Use of Privileged Accounts ++**ID**: NZISM Security Benchmark AC-9 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | ++## Cryptography ++### 17.5.7 Authentication mechanisms ++**ID**: NZISM Security Benchmark CR-10 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | ++### 17.9.25 Contents of KMPs ++**ID**: NZISM Security Benchmark CR-15 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) | +|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) | +|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) | ++### 17.1.52 Data Recovery ++**ID**: NZISM Security Benchmark CR-2 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Key vaults should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. You can prevent permanent data loss by enabling purge protection and soft delete. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. Keep in mind that key vaults created after September 1st 2019 have soft-delete enabled by default. 
|Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) | +|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) | ++### 17.1.53 Reducing storage and physical transfer requirements ++**ID**: NZISM Security Benchmark CR-3 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | +|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) | +|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). 
|Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) | +|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) | +|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | +|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | +|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. 
Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) | +|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Deny.json) | +|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | +|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) | +|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) | +|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. 
Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | ++### 17.2.24 Using RSA ++**ID**: NZISM Security Benchmark CR-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) |Manage your organizational compliance requirements by specifying a minimum key size for RSA certificates stored in your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_RSA_MinimumKeySize.json) | ++### 17.4.16 Using TLS ++**ID**: NZISM Security Benchmark CR-8 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Newer versions of TLS are released periodically to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Newer versions of TLS are released periodically to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). 
TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++## Gateway security ++### 19.1.11 Using Gateways ++**ID**: NZISM Security Benchmark GS-2 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[All authorization rules except RootManageSharedAccessKey should be removed from Service Bus namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1817ec0-a368-432a-8057-8371e17ac6ee) |Service Bus clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. To align with the least privilege security model, you should create access policies at the entity level for queues and topics to provide access to only the specific entity |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditNamespaceAccessRules_Audit.json) | +|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | +|[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) | +|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). 
This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | +|[Storage account keys should not be expired](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F044985bb-afe1-42cd-8a36-9d5d42424537) |Ensure the user storage account keys are not expired when key expiration policy is set, for improving security of account keys by taking action when the keys are expired. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountKeysExpired_Restrict.json) | ++### 19.1.12 Configuration of Gateways ++**ID**: NZISM Security Benchmark GS-3 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. 
Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | +|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | +|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | +|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | ++### 19.1.23 Testing of Gateways ++**ID**: NZISM Security Benchmark GS-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | ++## Infrastructure ++### 10.8.35 Security Architecture ++**ID**: NZISM Security Benchmark INF-9 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | +|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | +|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. 
The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | +|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | +|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F45e05259-1eb5-4f70-9574-baf73e9d219b) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit_V2.json) | +|[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) | +|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from the Internet. 2. Enable Azure Spring Cloud to interact with systems in either on-premises data centers or Azure services in other virtual networks. 3. 
Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | +|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | +|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) | +|[Private endpoint connections on Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F009a0c92-f5b4-4776-9b66-4ed2b4775563) |Private endpoint connections allow secure communication by enabling private connectivity to Batch accounts without a need for public IP addresses at the source or destination. Learn more about private endpoints in Batch at [https://docs.microsoft.com/azure/batch/private-connectivity](../../../batch/private-connectivity.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_PrivateEndpoints_AuditIfNotExists.json) | +|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. 
Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | +|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. 
Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) | +|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | ++## Information Security Incidents ++### 7.1.7 Preventing and detecting information security incidents ++**ID**: NZISM Security Benchmark ISI-2 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. 
Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. 
|AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | ++## Information security monitoring ++### 6.2.5 Conducting vulnerability assessments ++**ID**: NZISM Security Benchmark ISM-3 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### 6.2.6 Resolving vulnerabilities ++**ID**: NZISM Security Benchmark ISM-4 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) | +|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++### 6.4.5 Availability requirements ++**ID**: NZISM Security Benchmark ISM-7 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | ++## Network security ++### 18.3.19 Content of a Denial of Service (DoS) response plan ++**ID**: NZISM Security Benchmark NS-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) | ++### 18.4.7 Intrusion Detection and Prevention strategy (IDS/IPS) ++**ID**: NZISM Security Benchmark NS-7 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Connection throttling should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5345bb39-67dc-4960-a1bf-427e16b9a0bd) |This policy helps audit any PostgreSQL databases in your environment without Connection throttling enabled. 
This setting enables temporary connection throttling per IP for too many invalid password login failures. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_ConnectionThrottling_Enabled_Audit.json) | ++### 18.4.8 IDS/IPSs on gateways ++**ID**: NZISM Security Benchmark NS-8 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | +|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) | +|[Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F425bea59-a659-4cbb-8d31-34499bd030b8) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Azure Front Door Service. 
|Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Mode_Audit.json) | ++## Product Security ++### 12.4.4 Patching vulnerabilities in products ++**ID**: NZISM Security Benchmark PRS-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | ++## Physical Security ++### 8.3.5 Network infrastructure in unsecure areas ++**ID**: NZISM Security Benchmark PS-4 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). 
Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | ++## Software security ++### 14.1.8 Developing hardened SOEs ++**ID**: NZISM Security Benchmark SS-2 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | +|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | ++### 14.1.9 Maintaining hardened SOEs ++**ID**: NZISM Security Benchmark SS-3 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure API for FHIR should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee56206-5dd1-42ab-b02d-8aae8b1634ce) |Azure API for FHIR should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/fhir-privatelink](https://aka.ms/fhir-privatelink). 
|Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_PrivateLink_Audit.json) | +|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](../../../security-center/security-center-services.md#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](../../../security-center/security-center-endpoint-protection.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssues_Audit.json) | +|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) | +|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockHostNamespace.json) | +|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | +|[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | +|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. 
For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/IngressHttpsOnly.json) | +|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockAutomountToken.json) | +|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) | +|[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerDisallowedSysAdminCapability.json) | +|[Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). 
|audit, Audit, deny, Deny, disabled, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockDefaultNamespace.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | ++### 14.2.4 Application Whitelisting ++**ID**: NZISM Security Benchmark SS-5 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. 
This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | ++### 14.5.8 Web applications ++**ID**: NZISM Security Benchmark SS-9 +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | +|[App Service apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) | +|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | +|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | +|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | +|[App Service apps should use latest 'HTTP Version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c122334-9d20-4eb8-89ea-ac9a705b74ae) |Periodically, newer versions are released for HTTP either due to security flaws or to include additional functionality. Using the latest HTTP version for web apps to take advantage of security fixes, if any, and/or new functionalities of the newer version. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_HTTP_Latest.json) | +|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | +|[Function apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) | +|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | +|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | +|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | +|[Function apps should use latest 'HTTP Version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe2c1c086-2d84-4019-bff3-c44ccd95113c) |Periodically, newer versions are released for HTTP either due to security flaws or to include additional functionality. Using the latest HTTP version for web apps to take advantage of security fixes, if any, and/or new functionalities of the newer version. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_HTTP_Latest.json) | ++## Next steps ++Additional articles about Azure Policy: ++- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview. +- See the [initiative definition structure](../concepts/initiative-definition-structure.md). +- Review other examples at [Azure Policy samples](./index.md). +- Review [Understanding policy effects](../concepts/effects.md). +- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). |
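The mappings above are assessment aids rather than proof of compliance, so it can be useful to pull the compliance summary for an initiative assignment programmatically and compare it against the control tables. The sketch below is a minimal, hypothetical illustration of that idea (it is not tooling from this article): it calls the Azure Resource Manager Policy Insights `summarize` endpoint with Python. The subscription ID and assignment name are placeholders, the `api-version` value is an assumption that may need updating for your environment, and the script presumes the `azure-identity` and `requests` packages plus credentials that `DefaultAzureCredential` can resolve.

```python
# Minimal sketch (not part of this article's guidance): summarize compliance for one
# policy initiative assignment at subscription scope via the Policy Insights API.
# Assumptions: the subscription ID and assignment name below are placeholders, and
# the api-version shown may need to match what your environment supports.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
ASSIGNMENT_NAME = "nzism-assignment"                       # placeholder assignment name
API_VERSION = "2019-10-01"                                 # assumed Policy Insights api-version


def summarize_assignment() -> dict:
    # Acquire an ARM token; DefaultAzureCredential tries environment variables,
    # managed identity, Azure CLI credentials, and so on.
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default"
    ).token

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
    )
    assignment_id = (
        f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Authorization"
        f"/policyAssignments/{ASSIGNMENT_NAME}"
    )
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        params={
            "api-version": API_VERSION,
            # Restrict the summary to the one initiative assignment of interest.
            "$filter": f"policyAssignmentId eq '{assignment_id}'",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    summary = summarize_assignment()
    # The response shape is only sketched here: 'value[0].results' typically carries
    # non-compliant resource and policy counts when the call succeeds.
    results = (summary.get("value") or [{}])[0].get("results", {})
    print("Non-compliant resources:", results.get("nonCompliantResources"))
    print("Non-compliant policies:", results.get("nonCompliantPolicies"))
```

Even when such a summary comes back clean, it reflects only the mapped policy definitions, not every requirement of the underlying controls, so it should be read alongside the control tables rather than as a compliance attestation.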
governance | Rbi Itf Banks 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md | + + Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 +description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Last updated : 07/20/2023++++# Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative ++The following article details how the Azure Policy Regulatory Compliance built-in initiative +definition maps to **compliance domains** and **controls** in Reserve Bank of India IT Framework for Banks v2016. +For more information about this compliance standard, see +[Reserve Bank of India IT Framework for Banks v2016](https://rbidocs.rbi.org.in/rdocs/notification/PDFs/NT41893F697BC1D57443BB76AFC7AB56272EB.PDF). To understand +_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and +[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md). ++The following mappings are to the **Reserve Bank of India IT Framework for Banks v2016** controls. Many of the controls +are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete +initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. +Then, find and select the **[Preview]: Reserve Bank of India - IT Framework for Banks** Regulatory Compliance built-in +initiative definition. ++> [!IMPORTANT] +> Each control below is associated with one or more [Azure Policy](../overview.md) definitions. +> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the +> control; however, there often is not a one-to-one or complete match between a control and one or +> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions +> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In +> addition, the compliance standard includes controls that aren't addressed by any Azure Policy +> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your +> overall compliance status. The associations between compliance domains, controls, and Azure Policy +> definitions for this compliance standard may change over time. To view the change history, see the +> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/RBI_ITF_Banks_v2016.json). ++## Authentication Framework For Customers ++### Authentication Framework For Customers-9.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | ++### Authentication Framework For Customers-9.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | ++## Network Management And Security ++### Network Inventory-4.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | +|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_FlowLog_TrafficAnalytics_Audit.json) | ++### Network Device Configuration Management-4.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. 
|Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | +|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | +|[Azure firewall policy should enable TLS inspection within application rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa58ac66d-92cb-409c-94b8-8e48d7a96596) |Enabling TLS inspection is recommended for all application rules to detect, alert, and mitigate malicious activity in HTTPS. 
To learn more about TLS inspection with Azure Firewall, visit [https://aka.ms/fw-tlsinspect](https://aka.ms/fw-tlsinspect) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_FirewallPolicy_EnbaleTlsForAllAppRules_Audit.json) | +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | +|[Web Application Firewall (WAF) should enable all firewall rules for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F632d3993-e2c0-44ea-a7db-2eca131f356d) |Enabling all Web Application Firewall (WAF) rules strengthens your application security and protects your web applications against common vulnerabilities. To learn more about Web Application Firewall (WAF) with Application Gateway, visit [https://aka.ms/waf-ag](https://aka.ms/waf-ag) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_WAF_AppGatewayAllRulesEnabled_Audit.json) | ++### Anomaly Detection-4.7 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. 
Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | ++### Security Operation Centre-4.9 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Perimeter Protection And Detection-4.10 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. 
Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. 
|Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | ++## Preventing Execution Of Unauthorised Software ++### Software Inventory-2.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. 
These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | ++### Authorised Software Installation-2.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | ++### Security Update Management-2.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++## Patch/Vulnerability & Change Management ++### Patch/Vulnerability & Change Management-7.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. 
Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### Patch/Vulnerability & Change Management-7.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### Patch/Vulnerability & Change Management-7.6 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. 
|AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++### Patch/Vulnerability & Change Management-7.7 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. 
|audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) | +|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | +|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. 
|Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | +|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | +|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | +|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | +|[Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to key vault, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). 
|[parameters('audit_effect')] |[1.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_PrivateEndpoint_Audit.json) | +|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F45e05259-1eb5-4f70-9574-baf73e9d219b) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit_V2.json) | +|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | +|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | +|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. 
The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | +|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) | +|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) | +|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | +|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. 
To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | +|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | +|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) | +|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | ++## Maintenance, Monitoring, And Analysis Of Audit Logs ++### Maintenance, Monitoring, And Analysis Of Audit Logs-16.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[All flow log resources should be in enabled state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows to log information about IP traffic flowing. 
It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) | +|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | +|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) | +|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) | +|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_FlowLog_TrafficAnalytics_Audit.json) | ++### Maintenance, Monitoring, And Analysis Of Audit Logs-16.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Log Analytics extension should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics extension is not installed. 
|AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) | +|[\[Preview\]: Log Analytics extension should be installed on your Windows Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd69b1763-b96d-40b8-a2d9-ca31e9fd0d3e) |This policy audits Windows Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Windows_LogAnalytics_Audit.json) | +|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) | +|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) | ++### Maintenance, Monitoring, And Analysis Of Audit Logs-16.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. 
|AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | +|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) | +|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | ++## Secure Configuration ++### Secure Configuration-5.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. 
Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Microsoft Defender for Azure Cosmos DB should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fadbe85b5-83e6-4350-ab58-bf3a4f736e5e) |Microsoft Defender for Azure Cosmos DB is an Azure-native layer of security that detects attempts to exploit databases in your Azure Cosmos DB accounts. 
Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitations of your database through compromised identities or malicious insiders. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_Azure_Cosmos_DB_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | ++### Secure Configuration-5.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Hotpatch should be enabled for Windows Server Azure Edition VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d02d2f7-e38b-4bdc-96f3-adc0a8726abc) |Minimize reboots and install updates quickly with hotpatch. Learn more at [https://docs.microsoft.com/azure/automanage/automanage-hotpatch](../../../automanage/automanage-hotpatch.md) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automanage/HotpatchShouldBeEnabledforWindowsServerAzureEditionVMs.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++## Secure Mail And Messaging Systems ++### Secure Mail And Messaging Systems-10.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | +|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions are released for TLS to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) | +|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) | +|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. 
|Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | +|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. 
Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. 
|AuditIfNotExists, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++### Secure Mail And Messaging Systems-10.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | +|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and new functionality in the latest version. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) | +|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) | +|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. 
|Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | +|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. 
Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++## User Access Control / Management ++### User Access Control / Management-8.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ++### User Access Control / Management-8.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. 
Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ++### User Access Control / Management-8.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | ++### User Access Control / Management-8.4 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | +|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | +|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. 
Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | ++### User Access Control / Management-8.5 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | +|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | ++### User Access Control / Management-8.8 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ++## Vulnerability Assessment And Penetration Test And Red Team Exercises ++### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. 
Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. 
Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++## Risk Based Transaction Monitoring ++### Risk Based Transaction Monitoring-20.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++## Metrics ++### Metrics-21.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. 
|audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) | +|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | +|[Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to key vault, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). 
|[parameters('audit_effect')] |[1.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_PrivateEndpoint_Audit.json) | +|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) | +|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) | +|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | +|[Key vaults should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. You can prevent permanent data loss by enabling purge protection and soft delete. 
Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. Keep in mind that key vaults created after September 1st 2019 have soft-delete enabled by default. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) | +|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) | +|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | +|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) | +|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. 
This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Deny.json) | +|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | +|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) | ++### Metrics-21.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Hotpatch should be enabled for Windows Server Azure Edition VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d02d2f7-e38b-4bdc-96f3-adc0a8726abc) |Minimize reboots and install updates quickly with hotpatch. Learn more at [https://docs.microsoft.com/azure/automanage/automanage-hotpatch](../../../automanage/automanage-hotpatch.md) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automanage/HotpatchShouldBeEnabledforWindowsServerAzureEditionVMs.json) | ++## Audit Log Settings ++### Audit Log Settings-17.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. 
|AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) | +|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) | +|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | +|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) | +|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | +|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) | ++## Anti-Phishing ++### Anti-Phishing-14.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). 
|Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) | +|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) | +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | +|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. 
|Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | +|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | +|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | +|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | +|[Azure File Sync should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | +|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | +|[Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to key vault, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). 
|[parameters('audit_effect')] |[1.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_PrivateEndpoint_Audit.json) | +|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F45e05259-1eb5-4f70-9574-baf73e9d219b) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit_V2.json) | +|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | +|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | +|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. 
|Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | +|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | +|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | +|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) | +|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) | +|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) | +|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. 
This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | +|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | +|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | +|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. 
Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) | +|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | ++## Advanced Real-Timethreat Defenceand Management ++### Advanced Real-Timethreat Defenceand Management-13.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Guest Attestation extension should be installed on supported Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F672fe5a1-2fcd-42d7-b85d-902b6e28c6ff) |Install Guest Attestation extension on supported Linux virtual machines to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment applies to Trusted Launch and Confidential Linux virtual machines. |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLinuxGAExtOnVm_Audit.json) | +|[\[Preview\]: Guest Attestation extension should be installed on supported Linux virtual machines scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa21f8c92-9e22-4f09-b759-50500d1d2dda) |Install Guest Attestation extension on supported Linux virtual machines scale sets to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment applies to Trusted Launch and Confidential Linux virtual machine scale sets. 
|AuditIfNotExists, Disabled |[5.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLinuxGAExtOnVmss_Audit.json) | +|[\[Preview\]: Guest Attestation extension should be installed on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb4d9c2-f88f-4069-bee0-dba239a57b09) |Install Guest Attestation extension on supported virtual machines to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment applies to Trusted Launch and Confidential Windows virtual machines. |AuditIfNotExists, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallWindowsGAExtOnVm_Audit.json) | +|[\[Preview\]: Guest Attestation extension should be installed on supported Windows virtual machines scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff655e522-adff-494d-95c2-52d4f6d56a42) |Install Guest Attestation extension on supported virtual machines scale sets to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment applies to Trusted Launch and Confidential Windows virtual machine scale sets. |AuditIfNotExists, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallWindowsGAExtOnVmss_Audit.json) | +|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment applies to Trusted Launch and Confidential Windows virtual machines. |Audit, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) | +|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. 
|Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) | +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | +|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | +|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | +|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for App Service apps to take advantage of any security fixes and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](../../../security-center/security-center-services.md#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](../../../security-center/security-center-endpoint-protection.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssues_Audit.json) | +|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. 
Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | +|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | +|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of any security fixes and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) | +|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) | +|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) | +|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. 
Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | +|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) | ++### Advanced Real-Time Threat Defence and Management-13.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](../../../defender-for-cloud/defender-for-containers-enable.md). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) | +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. 
Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md) |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) | +|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](../../../security-center/security-center-services.md#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](../../../security-center/security-center-endpoint-protection.md). 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssues_Audit.json) | +|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. 
Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | ++### Advanced Real-Time Threat Defence and Management-13.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. 
Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | ++### Advanced Real-Time Threat Defence and Management-13.4 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. 
Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | +|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | +|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). 
|audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) | +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) | +|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | +|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) | +|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) | +|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. 
|Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | +|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | +|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | +|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncrypted_Deny.json) | +|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | +|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) | +|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. 
Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | +|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines.
|AuditIfNotExists, Disabled |[4.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++## Application Security Life Cycle (Aslc) ++### Application Security Life Cycle (Aslc)-6.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | ++### Application Security Life Cycle (Aslc)-6.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | ++### Application Security Life Cycle (Aslc)-6.4 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) | +|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | +|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). 
|audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) | +|[Application Insights components should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F199d5677-e4d9-4264-9465-efe1839c06bd) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system. |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_DisableLocalAuth_Deny.json) | +|[Application Insights components with Private Link enabled should use Bring Your Own Storage accounts for profiler and debugger.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0c4bd2e8-8872-4f37-a654-03f6f38ddc76) |To support private link and customer-managed key policies, create your own storage account for profiler and debugger. Learn more in [https://docs.microsoft.com/azure/azure-monitor/app/profiler-bring-your-own-storage](../../../azure-monitor/app/profiler-bring-your-own-storage.md) |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/AppInsightsComponents_ForceCustomerStorageForProfiler_Deny.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. 
Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](../../../azure-monitor/platform/customer-managed-keys.md). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) | +|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | +|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) | ++### Application Security Life Cycle (Aslc)-6.6 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | ++### Application Security Life Cycle (Aslc)-6.7 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | +|[Web Application Firewall (WAF) should enable all firewall rules for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F632d3993-e2c0-44ea-a7db-2eca131f356d) |Enabling all Web Application Firewall (WAF) rules strengthens your application security and protects your web applications against common vulnerabilities. To learn more about Web Application Firewall (WAF) with Application Gateway, visit [https://aka.ms/waf-ag](https://aka.ms/waf-ag) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_WAF_AppGatewayAllRulesEnabled_Audit.json) | +|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates that 'Detection' or 'Prevention' mode be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) | ++## Data Leak Prevention Strategy ++### Data Leak Prevention Strategy-15.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[All flow log resources should be in enabled state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether flow log status is enabled. Enabling flow logs allows you to log information about IP traffic flow. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities.
Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](../../../security-center/security-center-services.md#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](../../../security-center/security-center-endpoint-protection.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssues_Audit.json) | +|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. 
Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | ++### Data Leak Prevention Strategy-15.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | +|[Storage accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2982f36-99f2-4db5-8eff-283140c09693) |To improve the security of Storage Accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://aka.ms/storageaccountpublicnetworkaccess](https://aka.ms/storageaccountpublicnetworkaccess). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StoragePublicNetworkAccess_AuditDeny.json) | +|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | ++### Data Leak Prevention Strategy-15.3 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](../../../security-center/security-center-services.md#supported-endpoint-protection-solutions). 
Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](../../../security-center/security-center-endpoint-protection.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssues_Audit.json) | +|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) | ++## Forensics ++### Forensics-22.1 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) | ++## Incident Response & Management ++### Responding To Cyber-Incidents:-19.2 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Recovery From Cyber - Incidents-19.4 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | ++### Recovery From Cyber - Incidents-19.5 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. 
It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | ++### Recovery From Cyber - Incidents-19.6 ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Recovery From Cyber - Incidents-19.6b ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | ++### Recovery From Cyber - Incidents-19.6c ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Recovery From Cyber - Incidents-19.6e ++**ID**: ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | ++## Next steps ++Additional articles about Azure Policy: ++- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview. +- See the [initiative definition structure](../concepts/initiative-definition-structure.md). +- Review other examples at [Azure Policy samples](./index.md). +- Review [Understanding policy effects](../concepts/effects.md). +- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). |
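The Next steps above point to guidance on Regulatory Compliance concepts, policy effects, and remediating non-compliant resources. As a minimal sketch of how that workflow might be scripted, the following Python example shells out to the Azure CLI to locate a Regulatory Compliance built-in initiative by display name, assign it at subscription scope, and summarize its compliance state. It assumes the `az` CLI is installed and signed in with sufficient permissions; the subscription ID, assignment name, and display-name filter below are illustrative placeholders, not values taken from this article.

```python
"""Minimal sketch (assumptions noted above): find a Regulatory Compliance
built-in initiative, assign it at subscription scope, and summarize its
compliance state by calling the Azure CLI."""
import json
import subprocess

SUBSCRIPTION_ID = "<subscription-id>"           # placeholder, not from this article
ASSIGNMENT_NAME = "regulatory-compliance-demo"  # placeholder assignment name
DISPLAY_NAME_FILTER = "Reserve Bank of India"   # illustrative display-name filter


def az(*args: str):
    """Run an az CLI command and return its parsed JSON output (or None)."""
    result = subprocess.run(
        ["az", *args, "--output", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout) if result.stdout.strip() else None


# 1. List built-in initiative (policy set) definitions whose display name matches.
initiatives = az(
    "policy", "set-definition", "list",
    "--query",
    f"[?policyType=='BuiltIn' && contains(displayName, '{DISPLAY_NAME_FILTER}')]",
)
if not initiatives:
    raise SystemExit("No matching built-in initiative found.")
initiative = initiatives[0]
print("Assigning:", initiative["displayName"])

# 2. Assign the initiative at subscription scope.
az(
    "policy", "assignment", "create",
    "--name", ASSIGNMENT_NAME,
    "--scope", f"/subscriptions/{SUBSCRIPTION_ID}",
    "--policy-set-definition", initiative["name"],
)

# 3. Summarize compliance state for the assignment (populated after evaluation runs).
summary = az("policy", "state", "summarize", "--policy-assignment", ASSIGNMENT_NAME)
print(json.dumps(summary, indent=2))
```

After the first evaluation cycle completes, a filter such as `"complianceState eq 'NonCompliant'"` can be passed to `az policy state list` in the same way to enumerate individual non-compliant resources for remediation.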
governance | Rbi Itf Nbfc 2017 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md | + + Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC +description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Last updated : 07/20/2023++++# Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative ++The following article details how the Azure Policy Regulatory Compliance built-in initiative +definition maps to **compliance domains** and **controls** in Reserve Bank of India - IT Framework for NBFC. +For more information about this compliance standard, see +[Reserve Bank of India - IT Framework for NBFC](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=10999&Mode=0#C1). To understand +_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and +[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md). ++The following mappings are to the **Reserve Bank of India - IT Framework for NBFC** controls. Many of the controls +are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete +initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. +Then, find and select the **[Preview]: Reserve Bank of India - IT Framework for NBFC** Regulatory Compliance built-in +initiative definition. ++> [!IMPORTANT] +> Each control below is associated with one or more [Azure Policy](../overview.md) definitions. +> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the +> control; however, there often is not a one-to-one or complete match between a control and one or +> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions +> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In +> addition, the compliance standard includes controls that aren't addressed by any Azure Policy +> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your +> overall compliance status. The associations between compliance domains, controls, and Azure Policy +> definitions for this compliance standard may change over time. To view the change history, see the +> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/RBI_ITF_NBFC_v2017.json). ++## IT Governance ++### IT Governance-1 ++**ID**: RBI IT Framework 1 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. 
Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. 
|AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | ++### IT Governance-1.1 ++**ID**: RBI IT Framework 1.1 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. 
Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) | ++## IT Policy ++### IT Policy-2 ++**ID**: RBI IT Framework 2 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | ++## Information and Cyber Security ++### Information Security-3 ++**ID**: RBI IT Framework 3 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | ++### Identification and Classification of Information Assets-3.1 ++**ID**: RBI IT Framework 3.1.a ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ++### Segregation of Functions-3.1 ++**ID**: RBI IT Framework 3.1.b ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment applies to Trusted Launch and Confidential Windows virtual machines. |Audit, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) | +|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) | +|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | +|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | +|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | +|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | ++### Role based Access Control-3.1 ++**ID**: RBI IT Framework 3.1.c ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | ++### Maker-checker-3.1 ++**ID**: RBI IT Framework 3.1.f ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | +|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | +|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | +|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | +|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. 
|AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Trails-3.1 ++**ID**: RBI IT Framework 3.1.g ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) | +|[All flow log resources should be in enabled state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows to log information about IP traffic flowing. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) | +|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) | +|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | +|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) | +|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) | +|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . 
Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | +|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | +|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. 
Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md) |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) | +|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) | +|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported in the region, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](../../../azure-monitor/platform/customer-managed-keys.md#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) | +|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](../../../azure-monitor/platform/customer-managed-keys.md). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) | +|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. 
Linking your component to a Log Analytics workspace that's enabled with a customer-managed key ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](../../../azure-monitor/platform/customer-managed-keys.md). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) | +|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | +|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures that a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) | +|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) | +|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows you to log information about IP traffic flowing through the network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. 
|Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) | +|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) | +|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) | +|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) | +|[Log Analytics workspaces should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6c53d030-cc64-46f0-906d-2bc061cd1334) |Improve workspace security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs on this workspace. Learn more at [https://aka.ms/AzMonPrivateLink#configure-log-analytics](https://aka.ms/AzMonPrivateLink#configure-log-analytics). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_NetworkAccessEnabled_Deny.json) | +|[Log Analytics Workspaces should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe15effd4-2278-4c65-a0da-4d6f6d1890e2) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system. 
|Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_DisableLocalAuth_Deny.json) | +|[Log checkpoints should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e43d) |This policy helps audit any PostgreSQL databases in your environment without log_checkpoints setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogCheckpoint_Audit.json) | +|[Log connections should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e442) |This policy helps audit any PostgreSQL databases in your environment without log_connections setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogConnections_Audit.json) | +|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) | +|[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. 
|AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_FlowLog_TrafficAnalytics_Audit.json) | +|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) | +|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) | +|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) | +|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) | ++### Public Key Infrastructure (PKI)-3.1 ++**ID**: RBI IT Framework 3.1.h ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) | +|[App Configuration should use a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F967a4b4b-2da9-43c1-b7d0-f98d0d74d0b1) |Customer-managed keys provide enhanced data protection by allowing you to manage your encryption keys. This is often required to meet compliance requirements. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/CustomerManagedKey_Audit.json) | +|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | +|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions are released for TLS to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | +|[App Service Environment should have internal encryption enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb74e86f-d351-4b8d-b034-93da7391c01f) |Setting InternalEncryption to true encrypts the pagefile, worker disks, and internal network traffic between the front ends and workers in an App Service Environment. To learn more, refer to [https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-custom-settings#enable-internal-encryption](../../../app-service/environment/app-service-app-service-environment-custom-settings.md#enable-internal-encryption). 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_HostingEnvironment_InternalEncryption_Audit.json) | +|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | +|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) | +|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | +|[Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to key vault, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). 
|[parameters('audit_effect')] |[1.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_PrivateEndpoint_Audit.json) | +|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported at the region, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](../../../azure-monitor/platform/customer-managed-keys.md#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) | +|[Disk encryption should be enabled on Azure Data Explorer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff4b53539-8df9-40e4-86c6-6b607703bd4e) |Enabling disk encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Data%20Explorer/ADX_disk_encrypted.json) | +|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) | +|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) | +|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | +|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | +|[Infrastructure encryption should be enabled for Azure Database for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a58212a-c829-4f13-9872-6371df2fd0b4) |Enable infrastructure encryption for Azure Database for MySQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_InfrastructureEncryption_Audit.json) | +|[Infrastructure encryption should be enabled for Azure Database for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F24fba194-95d6-48c0-aea7-f65bf859c598) |Enable infrastructure encryption for Azure Database for PostgreSQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_InfrastructureEncryption_Audit.json) | +|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. 
|Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) | +|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) | +|[Key vaults should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. You can prevent permanent data loss by enabling purge protection and soft delete. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. Keep in mind that key vaults created after September 1st 2019 have soft-delete enabled by default. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) | +|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) | +|[Managed disks should use a specific set of disk encryption sets for the customer-managed key encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd461a302-a187-421a-89ac-84acdb4edc04) |Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest. You are able to select the allowed encrypted sets and all others are rejected when attached to a disk. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ManagedDiskEncryptionSetsAllowed_Deny.json) | +|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. 
By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | +|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) | +|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](../../../azure-monitor/platform/customer-managed-keys.md#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). 
Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | +|[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](https://aka.ms/encryption-scopes-overview). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) | +|[Storage account encryption scopes should use double encryption for data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbfecdea6-31c4-4045-ad42-71b9dc87247d) |Enable infrastructure encryption for encryption at rest of your storage account encryption scopes for added security. Infrastructure encryption ensures that your data is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageEncryptionScopesShouldUseDoubleEncryption_Audit.json) | +|[Storage accounts should have infrastructure encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4733ea7b-a883-42fe-8cac-97454c2a9e4a) |Enable infrastructure encryption for higher level of assurance that the data is secure. When infrastructure encryption is enabled, data in a storage account is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountInfrastructureEncryptionEnabled_Audit.json) | +|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) | +|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) | +|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | ++### Vulnerability Management-3.3 ++**ID**: RBI IT Framework 3.3 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | +|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) | +|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) | +|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | +|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | +|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | +|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | +|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) | ++### Digital Signatures-3.8 ++**ID**: RBI IT Framework 3.8 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) | +|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | +|[Certificates should be issued by the specified integrated certificate authority](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e826246-c976-48f6-b03e-619bb92b3d82) |Manage your organizational compliance requirements by specifying the Azure integrated certificate authorities that can issue certificates in your key vault such as Digicert or GlobalSign. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_Issuers_SupportedCAs.json) | +|[Certificates should use allowed key types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1151cede-290b-4ba0-8b38-0ad145ac888f) |Manage your organizational compliance requirements by restricting the key types allowed for certificates. 
|audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_AllowedKeyTypes.json) | +|[Certificates using elliptic curve cryptography should have allowed curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd78111f-4953-4367-9fd5-7e08808b54bf) |Manage the allowed elliptic curve names for ECC Certificates stored in key vault. More information can be found at [https://aka.ms/akvpolicy](https://aka.ms/akvpolicy). |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_EC_AllowedCurveNames.json) | +|[Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) |Manage your organizational compliance requirements by specifying a minimum key size for RSA certificates stored in your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_RSA_MinimumKeySize.json) | +|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | ++## IT Operations ++### IT Operations-4.2 ++**ID**: RBI IT Framework 4.2 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | ++### IT Operations-4.4 ++**ID**: RBI IT Framework 4.4.a ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. 
A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | ++### MIS For Top Management-4.4 ++**ID**: RBI IT Framework 4.4.b ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | ++## IS Audit ++### Policy for Information System Audit (IS Audit)-5 ++**ID**: RBI IT Framework 5 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[All flow log resources should be in enabled state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows to log information about IP traffic flowing. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | +|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | +|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP firewall rules on Azure Synapse workspaces should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F56fd377d-098c-4f02-8406-81eb055902b8) |Removing all IP firewall rules improves security by ensuring your Azure Synapse workspace can only be accessed from a private endpoint. This configuration audits creation of firewall rules that allow public network access on the workspace. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceFirewallRules_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. 
Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | +|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) | +|[Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F425bea59-a659-4cbb-8d31-34499bd030b8) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Azure Front Door Service. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Mode_Audit.json) | ++### Coverage-5.2 ++**ID**: RBI IT Framework 5.2 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. 
It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | ++## Business Continuity Planning ++### Business Continuity Planning (BCP) and Disaster Recovery-6 ++**ID**: RBI IT Framework 6 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) | +|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). 
|Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) | +|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) | +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. 
It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | +|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) | ++### Recovery strategy / Contingency Plan-6.2 ++**ID**: RBI IT Framework 6.2 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) | +|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. 
By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) | +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. 
It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | +|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) | ++### Recovery strategy / Contingency Plan-6.3 ++**ID**: RBI IT Framework 6.3 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) | +|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | +|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) | +|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) | ++### Recovery strategy / Contingency Plan-6.4 ++**ID**: RBI IT Framework 6.4 ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) | +|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) | +|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). 
|Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) | +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | ++## Next steps ++Additional articles about Azure Policy: ++- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview. +- See the [initiative definition structure](../concepts/initiative-definition-structure.md). +- Review other examples at [Azure Policy samples](./index.md). +- Review [Understanding policy effects](../concepts/effects.md). +- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). |
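The Azure Policy rows above identify each built-in definition by the GUID at the end of its portal link. As a hedged illustration only (not part of the documented change set), the following Az PowerShell sketch shows how one of the listed built-ins, "Azure Backup should be enabled for Virtual Machines" (definition `013e242c-8828-4970-87b3-ab247555486d`), could be assigned at subscription scope; the assignment name and scope are hypothetical placeholders.

```powershell
# Minimal sketch, assuming the Az.Resources module and an authenticated session (Connect-AzAccount).
# The definition GUID comes from the table above; the assignment name and scope are placeholders.
$scope      = "/subscriptions/$((Get-AzContext).Subscription.Id)"
$definition = Get-AzPolicyDefinition -Name '013e242c-8828-4970-87b3-ab247555486d'

# Assign the built-in at subscription scope; its default effect (AuditIfNotExists) applies.
New-AzPolicyAssignment `
    -Name 'audit-vm-backup' `
    -DisplayName 'Azure Backup should be enabled for Virtual Machines' `
    -PolicyDefinition $definition `
    -Scope $scope
```

Once assigned, compliance results surface under the definition in Azure Policy, alongside the other controls listed above.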
hdinsight | Hdinsight 5X Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md | Title: Open-source components and versions - Azure HDInsight 5.x description: Learn about the open-source components and versions in Azure HDInsight 5.x. Previously updated : 05/11/2023 Last updated : 07/27/2023 # HDInsight 5.x component versions All upgraded cluster shapes are supported as part of HDInsight 5.1. The following table lists the versions of open-source components that are associated with HDInsight 5.x. -| Component | HDInsight 5.1 | HDInsight 5.0 | -|||| +| Component | HDInsight 5.1 |HDInsight 5.0| +|||-| | Apache Spark | 3.3.1 ** | 3.1.3 | | Apache Hive | 3.1.2 ** | 3.1.2 | | Apache Kafka | 3.2.0 ** | 2.4.1 | | Apache Hadoop | 3.3.4 ** | 3.1.1 | | Apache Tez | 0.9.1 ** | 0.9.1 |-| Apache Ranger | 2.3.0 * | 1.1.0 | +| Apache Ranger | 2.3.0 ** | 1.1.0 | | Apache HBase | 2.4.11 ** | 2.1.6 |-| Apache Oozie | 5.2.1 * | 4.3.1 | +| Apache Oozie | 5.2.1 ** | 4.3.1 | | Apache ZooKeeper | 3.6.3 ** | 3.4.6 | | Apache Livy | 0.5. ** | 0.5 |-| Apache Ambari | 2.7.0 ** | 2.7.0 | +| Apache Ambari | 2.7.3 ** | 2.7.3 | | Apache Zeppelin | 0.10.1 ** | 0.8.0 | | Apache Phoenix | 5.1.2 ** | - | -\* Under development or planned - ** Preview > [!NOTE]-> Enterprise Security Package (ESP) isn't supported for HDInsight 5.1 clusters. +> We have discontinued Sqoop and Pig add-ons from HDInsight 5.1 version. ### Spark versions supported in Azure HDInsight Azure HDInsight supports the following Apache Spark versions. To learn how to migrate from Spark 2.4 to Spark 3.x, see the [migration guide on the Spark website](https://spark.apache.org/docs/latest/migration-guide.html). -## HDInsight 5.0 --On June 1, 2022, we started rolling out a new version of HDInsight: version 5.0. This version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0. --### Spark ---If you're using the Azure user interface to create a Spark cluster for HDInsight, the dropdown list contains an additional version along with the older version: Spark 3.1 (HDI 5.0). This version is a renamed version of Spark 3.1 (HDI 4.0), and it's backward compatible. --This is only a UI-level change. It doesn't affect anything for existing users and for users who are already using the Azure Resource Manager template (ARM template) to build their clusters. --For backward compatibility, Resource Manager supports creating Spark 3.1 with the HDInsight 4.0 and 5.0 versions, which map to the same versions for Spark 3.1 (HDI 5.0). --The Spark 3.1 (HDI 5.0) cluster comes with Hive Warehouse Connector (HWC) 2.0, which works well together with the Interactive Query (HDI 5.0) cluster. --### Interactive Query ---If you're creating an Interactive Query cluster, the dropdown list contains another version: Interactive Query 3.1 (HDI 5.0). If you're going to use the Spark 3.1 version along with Hive (which requires ACID support via HWC), you need to select this version. --### Kafka --The current ARM template supports HDInsight 5.0 for Kafka 2.4.1. --HDInsight 5.0 is supported for the Kafka cluster type and component version 2.4. --We fixed the ARM template issue. 
--### Upcoming version upgrades --The HDInsight team is working on upgrading other open-source components: --* ESP cluster support for all cluster shapes -* Oozie 5.2.1 -* HWC 2.1 - ## Next steps * [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) |
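The component table above is keyed to the HDInsight cluster version, which is chosen at creation time. As a hedged sketch only (not taken from the article), the following Az PowerShell fragment shows how a Spark cluster might be pinned to HDInsight 5.1 so it picks up the 5.1 component set (for example, Spark 3.3.1); all resource names, credentials, and storage values are placeholders, and parameter names assume the current Az.HDInsight module.

```powershell
# Minimal sketch, assuming the Az.HDInsight module; every name and secret below is a placeholder.
$httpCred = Get-Credential -Message 'Cluster login (HTTP) credential'
$sshCred  = Get-Credential -Message 'SSH credential'

# -Version '5.1' selects the HDInsight 5.1 component set listed in the table above.
New-AzHDInsightCluster `
    -ResourceGroupName 'rg-hdi-demo' `
    -ClusterName 'hdi-spark-51-demo' `
    -Location 'East US' `
    -ClusterType Spark `
    -Version '5.1' `
    -ClusterSizeInNodes 4 `
    -HttpCredential $httpCred `
    -SshCredential $sshCred `
    -StorageAccountResourceId '/subscriptions/<sub-id>/resourceGroups/rg-hdi-demo/providers/Microsoft.Storage/storageAccounts/<storage-account>' `
    -StorageAccountKey '<storage-account-key>' `
    -StorageContainer 'hdi-spark-51-demo'
```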
hdinsight | Hdinsight Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md | Title: Open-source components and versions - Azure HDInsight description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 03/16/2023 Last updated : 07/27/2023 # Azure HDInsight versions This table lists the versions of HDInsight that are available in the Azure porta | HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | | | [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |Feb 27, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |-| [HDInsight 5.0](./hdinsight-5x-component-versioning.md#hdinsight-50) |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | | [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced |Yes | **Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal. |
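Because the support table above ties support windows to the cluster version, it can help to check which version existing clusters run. The following is a hedged sketch, assuming the Az.HDInsight module and that the returned cluster objects expose the properties named below.

```powershell
# Minimal sketch: list clusters in the current subscription with the HDInsight version each one runs,
# to compare against the support table above. Property names assume the Az.HDInsight output object.
Get-AzHDInsightCluster |
    Select-Object Name, ClusterVersion, ClusterState, Location |
    Sort-Object ClusterVersion |
    Format-Table -AutoSize
```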
hdinsight | Hdinsight Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md | description: Archived release notes for Azure HDInsight. Get development tips an Previously updated : 05/11/2023 Last updated : 07/28/2023 # Archived release notes Last updated 05/11/2023 Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases). +## Release date: May 08, 2023 ++This release applies to HDInsight 4.x and 5.x. The HDInsight release is available to all regions over several days. This release is applicable for image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md) ++HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions. ++**OS versions** ++* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 +* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 ++For workload-specific versions, see ++* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) +* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md) ++![Icon showing update with text.](media/hdinsight-release-notes/new-icon-for-updated.png) ++1. Azure HDInsight 5.1 updated with ++ 1. Apache HBase 2.4.11 + 1. Apache Phoenix 5.1.2 + 1. Apache Hive 3.1.2 + 1. Apache Spark 3.3.1 + 1. Apache Tez 0.9.1 + 1. Apache Zeppelin 0.10.1 + 1. Apache Livy 0.5 + 1. Apache Kafka 3.2.0 ++ > [!NOTE] + > * All components are integrated with Hadoop 3.3.4 & ZK 3.6.3 + > * All of the upgraded components above are now available in non-ESP clusters for public preview. ++![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png) ++1. **Enhanced Autoscale for HDInsight** ++ Azure HDInsight has made notable improvements to Autoscale stability and latency. The essential changes include an improved feedback loop for scaling decisions, a significant reduction in scaling latency, and support for recommissioning decommissioned nodes. Learn [more](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271) about the enhancements, how to custom configure, and how to migrate your cluster to enhanced autoscale. The enhanced Autoscale capability is available effective May 17, 2023, across all supported regions. + +1. **Azure HDInsight ESP for Apache Kafka 2.4.1 is now Generally Available**. ++ Azure HDInsight ESP for Apache Kafka 2.4.1 has been in public preview since April 2022. After notable improvements in CVE fixes and stability, Azure HDInsight ESP Kafka 2.4.1 is now generally available and ready for production workloads. Learn the details about [how to configure](./domain-joined/apache-domain-joined-run-kafka.md) and [migrate](./kafk). ++1. **Quota Management for HDInsight** ++ HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (for example, Dv2, Ev3, Eav4). + + HDInsight introduced an improved view that provides a detailed classification of quotas at the VM family level. This feature allows customers to view current and remaining quotas for a region at the VM family level. 
With the enhanced view, customers have richer visibility for planning quotas and a better user experience. This feature is currently available on HDInsight 4.x and 5.x for the East US EUAP region. Other regions will follow later. ++ For more information, see [Cluster capacity planning in Azure HDInsight | Microsoft Learn](./hdinsight-capacity-planning.md#view-quota-management-for-hdinsight) + +![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png) ++* Poland Central ++## Coming soon ++* The max length of cluster name changes to 45 from 59 characters, to improve the security posture of clusters. +* Cluster permissions for secure storage + * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account. +* In-line quota update. + * Request quota increases directly from the My Quota page through a direct API call, which is faster. If the API call fails, then customers need to create a new support request for the quota increase. +* HDInsight Cluster Creation with Custom VNets. + * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user has permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this would be a mandatory check to avoid cluster creation failures. +* Basic and Standard A-series VMs Retirement. + * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024. +* Non-ESP ABFS clusters [Cluster Permissions for World Readable] + * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change is to improve the cluster security posture. Customers need to plan for the updates. + ## Release date: February 28, 2023 -This release applies to HDInsight 4.0. and 5.0, 5.1. HDInsight release will be available to all regions over several days. This release is applicable for image number **2302250400**. [How to check the image number?](./view-hindsight-cluster-image-version.md) +This release applies to HDInsight 4.0. and 5.0, 5.1. HDInsight release is available to all regions over several days. This release is applicable for image number **2302250400**. [How to check the image number?](./view-hindsight-cluster-image-version.md) HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions. For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight) -## Coming soon +## What's next * Autoscale * Autoscale with improved latency and several improvements * Cluster name change limitation - * The max length of cluster name will be changed to 45 from 59 in Public, Mooncake and Fairfax. 
+ * The max length of cluster name changes to 45 from 59 in Public, Mooncake and Fairfax. * Cluster permissions for secure storage * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account. * Non-ESP ABFS clusters [Cluster Permissions for World Readable] * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change is to improve the cluster security posture. Customers need to plan for the updates. * Open-source upgrades- * Apache Spark 3.3.0 and Hadoop 3.3.4 are under development on HDInsight 5.1 and will include several significant new features, performance and other improvements. + * Apache Spark 3.3.0 and Hadoop 3.3.4 are under development on HDInsight 5.1 and include several significant new features, performance, and other improvements. > [!NOTE] > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md). HDInsight uses safe deployment practices, which involve gradual region deploymen HDInsight cluster comes with pre-defined disk space based on SKU. This space may not be sufficient in large job scenarios. -This new feature allows you to add more disks in cluster, which will be used as node manager local directory. Add number of disks to worker nodes during HIVE and Spark cluster creation, while the selected disks will be part of node manager's local directories. +This new feature allows you to add more disks in the cluster, which will be used as the node manager local directory. Add the number of disks to worker nodes during HIVE and Spark cluster creation, while the selected disks become part of the node manager's local directories. > [!NOTE] > The added disks are only configured for node manager local directories. HDInsight is compatible with Apache HIVE 3.1.2. Due to a bug in this release, th ## Release date: 06/03/2022 -This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region over several days. +This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region over several days. ### Release highlights HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a ## Release date: 03/10/2022 -This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region over several days. +This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region over several days. 
The OS versions for this release are: - HDInsight 4.0: Ubuntu 18.04.5 Starting from March 01, 2022, HDInsight will only support manual scale for HBase ## Release date: 12/27/2021 -This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region over several days. +This release applies for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region over several days. The OS versions for this release are: - HDInsight 4.0: Ubuntu 18.04.5 LTS HDInsight 4.0 image has been updated to mitigate Log4j vulnerability as describe ## Release date: 07/27/2021 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. The OS versions for this release are: - HDInsight 3.6: Ubuntu 16.04.7 LTS No other action is needed from you. The price correction will only apply for usa ## Release date: 06/02/2021 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. The OS versions for this release are: - HDInsight 3.6: Ubuntu 16.04.7 LTS You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 ## Release date: 02/05/2021 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Dav4-series support No component version change for this release. You can find the current component ## Release date: 11/18/2020 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. 
HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Auto key rotation for customer managed key encryption at rest No component version change for this release. You can find the current component ## Release date: 11/09/2020 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### HDInsight Identity Broker (HIB) is now GA No component version change for this release. You can find the current component ## Release date: 10/08/2020 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### HDInsight private clusters with no public IP and Private link (Preview) No component version change for this release. You can find the current component ## Release date: 09/28/2020 -This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Autoscale for Interactive Query with HDInsight 4.0 is now generally available No component version change for this release. You can find the current component ## Release date: 08/09/2020 -This release applies only for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies only for HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Support for SparkCruise An issue has been fixed in the Azure portal, where users were experiencing an er ## Release date: 07/13/2020 -This release applies both for HDInsight 3.6 and 4.0. 
HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Support for Customer Lockbox for Microsoft Azure No component version change for this release. You can find the current component ## Release date: 06/11/2020 -This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### Moving to Azure virtual machine scale sets There's an issue for Hive Warehouse Connector in this release. The fix will be i ## Release date: 01/09/2020 -This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days. +This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see following changes, wait for the release being live in your region in several days. ### New features #### TLS 1.2 enforcement |
hdinsight | Hdinsight Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md | description: Latest release notes for Azure HDInsight. Get development tips and Previously updated : 05/12/2023 Last updated : 07/28/2023 # Azure HDInsight release notes Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-rep To subscribe, click the “watch” button in the banner and watch out for [HDInsight Releases](https://github.com/Azure/HDInsight/releases). -## Release date: May 08, 2023 +## Release date: July 25, 2023 -This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md) +This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2307201242**. [How to check the image number?](./view-hindsight-cluster-image-version.md) HDInsight uses safe deployment practices, which involve gradual region deployment. it may take up to 10 business days for a new release or a new version to be available in all regions. HDInsight uses safe deployment practices, which involve gradual region deploymen * HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 * HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4 For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md) -![Icon showing update with text.](media/hdinsight-release-notes/new-icon-for-updated.png) +## ![Icon showing Whats new.](./media/hdinsight-release-notes/whats-new.svg) What's new +* HDInsight 5.1 is now supported with ESP cluster. +* Upgraded version of Ranger 2.3.0 and Oozie 5.2.1 are now part of HDInsight 5.1 +* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster. -1. Azure HDInsight 5.1 updated with +## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon - 1. Apache HBase 2.4.11 - 1. Apache Phoenix 5.1.2 - 1. Apache Hive 3.1.2 - 1. Apache Spark 3.3.1 - 1. Apache Tez 0.9.1 - 1. Apache Zeppelin 0.10.1 - 1. Apache Livy 0.5 - 1. Apache Kafka 3.2.0 -- > [!NOTE] - > * All components are integrated with Hadoop 3.3.4 & ZK 3.6.3 - > * All above upgraded components are now available in non-ESP clusters for public preview. --![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png) --1. **Enhanced Autoscale for HDInsight** -- Azure HDInsight has made notable improvements stability and latency on Autoscale, The essential changes include improved feedback loop for scaling decisions, significant improvement on latency for scaling and support for recommissioning the decommissioned nodes, Learn [more](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271) about the enhancements, how to custom configure and migrate your cluster to enhanced autoscale. The enhanced Autoscale capability is available effective 17 May, 2023 across all supported regions. - -1. **Azure HDInsight ESP for Apache Kafka 2.4.1 is now Generally Available**. 
-- Azure HDInsight ESP for Apache Kafka 2.4.1 has been in public preview since April 2022. After notable improvements in CVE fixes and stability, Azure HDInsight ESP Kafka 2.4.1 now becomes generally available and ready for production workloads, learn the detail about the [how to configure](./domain-joined/apache-domain-joined-run-kafka.md) and [migrate](./kafk). --1. **Quota Management for HDInsight** -- HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (For example, Dv2, Ev3, Eav4, etc.). - - HDInsight introduced an improved view, which provides a detail and classification of quotas for family-level VMs, this feature allows customers to view current and remaining quotas for a region at the VM family level. With the enhanced view, customers have richer visibility, for planning quotas, and a better user experience. This feature is currently available on HDInsight 4.x and 5.x for East US EUAP region. Other regions to follow later. -- For more information, see [Cluster capacity planning in Azure HDInsight | Microsoft Learn](./hdinsight-capacity-planning.md#view-quota-management-for-hdinsight) - -![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png) --* Poland Central --## Coming soon --* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. +* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. Customers need to plan for the updates before 30 September 2023. * Cluster permissions for secure storage * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account. * In-line quota update. * Request quotas increase directly from the My Quota page, which will be a direct API call, which is faster. If the API call fails, then customers need to create a new support request for quota increase. * HDInsight Cluster Creation with Custom VNets.- * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs will need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this would be a mandatory check to avoid cluster creation failures. + * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before 30 September 2023.  * Basic and Standard A-series VMs Retirement. * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). 
To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024.-* Non-ESP ABFS clusters [Cluster Permissions for World Readable] - * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change is to improve the cluster security posture. Customers need to plan for the updates. +* Non-ESP ABFS clusters [Cluster Permissions for World Readable] + * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change is to improve the cluster security posture. Customers need to plan for the updates before 30 September 2023. If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). |
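The family-level quota view called out in the HDInsight notes above is a portal experience. As a rough sketch only (this uses the generic compute quota cmdlet, not the HDInsight-specific view, and the region name is an assumption), the per-VM-family vCPU quota for a region can be listed with Az PowerShell:

```PowerShell
# Sketch: list regional vCPU quota and usage per VM family with Az PowerShell.
# Assumes the Az.Compute module is installed and you're signed in; 'eastus' is a placeholder region.
Connect-AzAccount

Get-AzVMUsage -Location 'eastus' |
    Where-Object { $_.Name.LocalizedValue -like '*Family vCPUs*' } |
    Select-Object @{ Name = 'VMFamily'; Expression = { $_.Name.LocalizedValue } }, CurrentValue, Limit |
    Sort-Object VMFamily |
    Format-Table -AutoSize
```

This shows a similar family-level breakdown (current use versus limit) to the portal view mentioned in the release notes, which can help when planning quota requests.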
healthcare-apis | Configure Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md | The FHIR service supports the `$export` operation [specified by HL7](https://hl7 Ensure you are granted with application role - 'FHIR Data exporter role' prior to configuring export. To understand more on application roles, see [Authentication and Authorization for FHIR service](../../healthcare-apis/authentication-authorization.md). -Below are three steps in setting up the `$export` operation for the FHIR service- +Three steps in setting up the `$export` operation for the FHIR service- - Enable a managed identity for the FHIR service. - Configure a new or existing Azure Data Lake Storage Gen2 (ADLS Gen2) account and give permission for the FHIR service to access the account. Now you're ready to configure the FHIR service by setting the ADLS Gen2 account ## Specify the storage account for FHIR service export -The final step is to specify the ADLS Gen2 account that the FHIR service will use when exporting data. +The final step is to specify the ADLS Gen2 account that the FHIR service uses when exporting data. > [!NOTE] > In the storage account, if you haven't assigned the **Storage Blob Data Contributor** role to the FHIR service, the `$export` operation will fail. Under the **Exceptions** section, select the box **Allow Azure services on the t :::image type="content" source="media/export-data/exceptions.png" alt-text="Allow trusted Microsoft services to access this storage account."::: -Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This will allow you to configure your Azure storage account(s) using PowerShell. +Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This allows you to configure your Azure storage account(s) using PowerShell. ```PowerShell Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force Now, use the PowerShell command below to set the selected FHIR service instance as a trusted resource for the storage account. Make sure that all listed parameters are defined in your PowerShell environment. -Note that you'll need to run the `Add-AzStorageAccountNetworkRule` command as an administrator in your local environment. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md). +You'll need to run the `Add-AzStorageAccountNetworkRule` command as an administrator in your local environment. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md). ```PowerShell $subscription="xxx" $resourceId = "/subscriptions/$subscription/resourceGroups/$resourceGroupName/pr Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageaccountName -TenantId $tenantId -ResourceId $resourceId ``` -After running this command, in the **Firewall** section under **Resource instances** you will see **2 selected** in the **Instance name** dropdown list. These are the names of the workspace instance and FHIR service instance that you just registered as Microsoft Trusted Resources. +After running this command, in the **Firewall** section under **Resource instances** you will see **2 selected** in the **Instance name** dropdown list. 
These are the names of the workspace instance and FHIR service instance that you registered as Microsoft Trusted Resources. :::image type="content" source="media/export-data/storage-networking-2.png" alt-text="Screenshot of Azure Storage Networking Settings with resource type and instance names." lightbox="media/export-data/storage-networking-2.png"::: -You're now ready to securely export FHIR data to the storage account. Note that the storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable [private endpoints](../../storage/common/storage-private-endpoints.md) for the storage account. --### Allowing specific IP addresses from other Azure regions to access the Azure storage account --In the Azure portal, go to the ADLS Gen2 account and select the **Networking** blade. - -Select **Enabled from selected virtual networks and IP addresses**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks. You can find the IP address in the table below for the Azure region where the FHIR service is provisioned. --|**Azure Region** |**Public IP Address** | -|:-|:-| -| Australia East | 20.53.44.80 | -| Canada Central | 20.48.192.84 | -| Central US | 52.182.208.31 | -| East US | 20.62.128.148 | -| East US 2 | 20.49.102.228 | -| East US 2 EUAP | 20.39.26.254 | -| Germany North | 51.116.51.33 | -| Germany West Central | 51.116.146.216 | -| Japan East | 20.191.160.26 | -| Korea Central | 20.41.69.51 | -| North Central US | 20.49.114.188 | -| North Europe | 52.146.131.52 | -| South Africa North | 102.133.220.197 | -| South Central US | 13.73.254.220 | -| Southeast Asia | 23.98.108.42 | -| Switzerland North | 51.107.60.95 | -| UK South | 51.104.30.170 | -| UK West | 51.137.164.94 | -| West Central US | 52.150.156.44 | -| West Europe | 20.61.98.66 | -| West US 2 | 40.64.135.77 | +You're now ready to securely export FHIR data to the storage account. -> [!NOTE] -> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure the ACR firewall](configure-settings-convert-data.md#step-6-configure-the-azure-container-registry-firewall-for-secure-access). --### Allowing specific IP addresses to access the Azure storage account in the same region --The configuration process for IP addresses in the same region is just like above except a specific IP address range in Classless Inter-Domain Routing (CIDR) format is used instead (i.e., 100.64.0.0/10). The reason why the IP address range (100.64.0.0 ΓÇô 100.127.255.255) must be specified is because an IP address for the FHIR service will be allocated each time an `$export` request is made. +The storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable [private endpoints](../../storage/common/storage-private-endpoints.md) for the storage account. -> [!NOTE] -> It is possible that a private IP address within the range of 10.0.2.0/24 may be used, but there is no guarantee that the `$export` operation will succeed in such a case. You can retry if the `$export` request fails, but until an IP address within the range of 100.64.0.0/10 is used, the request will not succeed. This network behavior for IP address ranges is by design. The alternative is to configure the storage account in a different region. ## Next steps |
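The export-configuration row above flattens several PowerShell fragments from the article. A consolidated, hedged sketch follows; every value is a placeholder, and the FHIR service resource ID is left generic because the article's `$resourceId` line is truncated in this excerpt.

```PowerShell
# Sketch of the storage firewall configuration described above (placeholders throughout).
# Requires the Az.Storage module and an elevated session, as the article notes.
Install-Module Az.Storage -Repository PSGallery -AllowClobber -Force

$resourceGroupName  = '<resource-group-name>'
$storageaccountName = '<adls-gen2-account-name>'
$tenantId           = '<tenant-id>'
# The full provider path is truncated in the excerpt above; supply the FHIR service's resource ID here.
$resourceId         = '<fhir-service-resource-id>'

# Register the FHIR service as a trusted resource instance on the storage account.
Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageaccountName -TenantId $tenantId -ResourceId $resourceId
```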
healthcare-apis | Fhir Rest Api Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md | After you've found the record you want to restore, use the `PUT` operation to re > [!NOTE] > There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation. + ## Patch and Conditional Patch Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to patch resources: JSON Patch, XML Patch, and FHIRPath Patch. The FHIR Service support both JSON Patch and FHIRPath Patch along with Conditional JSON Patch and Conditional FHIRPath Patch (which allows you to patch a resource based on a search criteria instead of a resource ID). To walk through some examples, refer to the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http) and the [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http) for each approach. For additional details, read the [HL7 documentation for patch operations with FHIR](https://www.hl7.org/fhir/http.html#patch). |
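The patch row above references the HL7 JSON Patch sample requests. As a hypothetical sketch (the service URL, patient ID, and token are assumptions, not values from the article), a JSON Patch call against the FHIR service could look like this in PowerShell:

```PowerShell
# Hypothetical JSON Patch request against a FHIR service (placeholders throughout).
$fhirUrl = 'https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com'
$token   = '<access-token>'   # e.g., a token obtained for the FHIR service audience

# JSON Patch body: update a single element instead of replacing the whole resource.
$patchBody = @'
[
  { "op": "replace", "path": "/gender", "value": "female" }
]
'@

Invoke-RestMethod -Method Patch `
    -Uri "$fhirUrl/Patient/<patient-id>" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json-patch+json' `
    -Body $patchBody
```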
healthcare-apis | Device Messages Through Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md | You complete the steps by using Visual Studio Code with the Azure IoT Hub extens > [!NOTE] > In this device-to-cloud (D2C) example, *cloud* is the IoT hub in the Azure IoT Hub that receives the device message. Azure IoT Hub supports two-way communications. To set up a cloud-to-device (C2D) scenario, select **Send C2D Message to Device Cloud**. - :::image type="content" source="media\device-messages-through-iot-hub\select-device-to-cloud-message.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension and the Send D2C Message to IoT Hub option selected." lightbox="media\device-messages-through-iot-hub\select-device-to-cloud-message.png"::: + :::image type="content" source="media\device-messages-through-iot-hub\select-d2c-message.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension and the Send D2C Message to IoT Hub option selected." lightbox="media\device-messages-through-iot-hub\select-d2c-message.png"::: 7. In **Send D2C Messages**, select or enter the following values: You complete the steps by using Visual Studio Code with the Azure IoT Hub extens 8. To begin the process of sending a test message to your IoT hub, select **Send**. - :::image type="content" source="media\device-messages-through-iot-hub\select-device-to-cloud-message-options.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension with the device message options selected." lightbox="media\device-messages-through-iot-hub\select-device-to-cloud-message-options.png"::: + :::image type="content" source="media\device-messages-through-iot-hub\select-d2c-message-options.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension with the device message options selected." lightbox="media\device-messages-through-iot-hub\select-d2c-message-options.png"::: After you select **Send**, it might take up to five minutes for the FHIR resources to be available in the FHIR service. You complete the steps by using Visual Studio Code with the Azure IoT Hub extens > > Example: >- > :::image type="content" source="media\device-messages-through-iot-hub\iot-hub-enriched-device-message.png" alt-text="Screenshot of an Azure IoT Hub enriched device message." lightbox="media\device-messages-through-iot-hub\iot-hub-enriched-device-message.png"::: + > :::image type="content" source="media\device-messages-through-iot-hub\iot-hub-enriched-message.png" alt-text="Screenshot of an Azure IoT Hub enriched device message." lightbox="media\device-messages-through-iot-hub\iot-hub-enriched-message.png"::: > > `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. This example assumes your MedTech service is in a **Create** mode. The **Resolution type** for this tutorial set to **Create**. For more information on the **Destination properties**: **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). |
healthcare-apis | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md | -The MedTech service was built to help customers that were dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials. +The MedTech service is built to help customers that are dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials. The following video presents an overview of the MedTech service: > Useful options could include: * Choose data terms that work best for your organization and provide consistency in device data ingestion. -* Customize, edit, test, and troubleshoot MedTech service device and FHIR destination mappings with the [Mapping debugger](how-to-use-mapping-debugger.md) tool. +* Customize, edit, test, and troubleshoot MedTech service device and FHIR destination mappings with the [Mapping debugger](how-to-use-mapping-debugger.md). ### Scalable |
iot-develop | Quickstart Devkit Espressif Esp32 Freertos Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` ## Next steps |
iot-develop | Quickstart Devkit Microchip Atsame54 Xpro Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-develop | Quickstart Devkit Mxchip Az3166 Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` ## Next steps |
iot-develop | Quickstart Devkit Nxp Mimxrt1060 Evk Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-develop | Quickstart Devkit Renesas Rx65n Cloud Kit Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-develop | Quickstart Devkit Stm B L475e Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-develop | Quickstart Devkit Stm B L4s5i Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-develop | Quickstart Devkit Stm B U585i Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md | If you experience issues building the device code, flashing the device, or conne For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). -## Clean up resources --If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources. --> [!IMPORTANT] -> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. --To delete a resource group by name: --1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created. -- ```azurecli-interactive - az group delete --name MyResourceGroup - ``` --1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted. -- ```azurecli-interactive - az group list - ``` - ## Next steps |
iot-edge | How To Configure Api Proxy Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-api-proxy-module.md | Another use case for the API proxy module is to enable IoT Edge devices in lower This scenario uses the [Azure Blob Storage on IoT Edge](https://azuremarketplace.microsoft.com/marketplace/apps/azure-blob-storage.edge-azure-blob-storage) module at the top layer to handle blob creation and upload. +In a nested scenario, up to five layers are supported. Each upstream IoT Edge device in the nested hierarchy requires the *Azure Blob Storage on IoT Edge* module. For a sample multi-layer deployment, see the [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot) sample. + Configure the following modules at the **top layer**: * An Azure Blob Storage on IoT Edge module. |
iot-edge | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/security.md | Title: Security framework - Azure IoT Edge | Microsoft Docs -description: Learn about the security, authentication, and authorization standards that were used to develop Azure IoT Edge and should be considered as you design your solution + Title: Security framework for Azure IoT Edge +description: Learn about the security, authentication, and authorization standards used to develop Azure IoT Edge for you to consider in your solution design. Previously updated : 08/30/2019 Last updated : 07/27/2023 For more information, see [Azure IoT Edge certificate usage](iot-edge-certs.md). ## Authorization -The principle of least privilege says that users and components of a system should have access only to the minimum set of resources and data needed to perform their roles. Devices, modules, and actors should access only the resources and data within their permission scope, and only when it is architecturally allowable. Some permissions are configurable with sufficient privileges and others are architecturally enforced. For example, some modules may be authorized to connect to Azure IoT Hub. However, there is no reason why a module in one IoT Edge device should access the twin of a module in another IoT Edge device. +The principle of least privilege says that users and components of a system should have access only to the minimum set of resources and data needed to perform their roles. Devices, modules, and actors should access only the resources and data within their permission scope, and only when it's architecturally allowable. Some permissions are configurable with sufficient privileges and others are architecturally enforced. For example, some modules may be authorized to connect to Azure IoT Hub. However, there's no reason why a module in one IoT Edge device should access the twin of a module in another IoT Edge device. Other authorization schemes include certificate signing rights and role-based access control (RBAC). Static attestation verifies the integrity of all software on a device during pow ### Runtime attestation -Once a system has completed a secure boot process, well-designed systems should detect attempts to inject malware and take proper countermeasures. Malware attacks may target the system's ports and interfaces. If malicious actors have physical access to a device, they may tamper with the device itself or use side-channel attacks to gain access. Such malcontent, whether malware or unauthorized configuration changes, can't be detected by static attestation because it is injected after the boot process. Countermeasures offered or enforced by the deviceΓÇÖs hardware help to ward off such threats. The security framework for IoT Edge explicitly calls for extensions that combat runtime threats. +Once a system has completed a secure boot process, well-designed systems should detect attempts to inject malware and take proper countermeasures. Malware attacks may target the system's ports and interfaces. If malicious actors have physical access to a device, they may tamper with the device itself or use side-channel attacks to gain access. Such malcontent, whether malware or unauthorized configuration changes, can't be detected by static attestation because it's injected after the boot process. Countermeasures offered or enforced by the device's hardware help to ward off such threats. The security framework for IoT Edge explicitly calls for extensions that combat runtime threats. 
### Software attestation All healthy systems, including intelligent edge systems, need patches and upgrad ## Hardware root of trust -For many intelligent edge devices, especially devices that can be physically accessed by potential malicious actors, hardware security is the last defense for protection. Tamper resistant hardware is crucial for such deployments. Azure IoT Edge encourages secure silicon hardware vendors to offer different flavors of hardware root of trust to accommodate various risk profiles and deployment scenarios. Hardware trust may come from common security protocol standards like Trusted Platform Module (ISO/IEC 11889) and Trusted Computing GroupΓÇÖs Device Identifier Composition Engine (DICE). Secure enclave technologies like TrustZones and Software Guard Extensions (SGX) also provide hardware trust. +For many intelligent edge devices, especially devices that can be physically accessed by potential malicious actors, hardware security is the last defense for protection. Tamper resistant hardware is crucial for such deployments. Azure IoT Edge encourages secure silicon hardware vendors to offer different flavors of hardware root of trust to accommodate various risk profiles and deployment scenarios. Hardware trust may come from common security protocol standards like Trusted Platform Module (ISO/IEC 11889) and Trusted Computing Group's Device Identifier Composition Engine (DICE). Secure enclave technologies like TrustZones and Software Guard Extensions (SGX) also provide hardware trust. ## Certification To help customers make informed decisions when procuring Azure IoT Edge devices for their deployment, the IoT Edge framework includes certification requirements. Foundational to these requirements are certifications pertaining to security claims and certifications pertaining to validation of the security implementation. For example, a security claim certification means that the IoT Edge device uses secure hardware known to resist boot attacks. A validation certification means that the secure hardware was properly implemented to offer this value in the device. In keeping with the principle of simplicity, the framework tries to keep the burden of certification minimal. +## Encryption at rest ++Encryption at rest provides data protection for stored data. Attacks against data at-rest include attempts to get physical access to the hardware where the data is stored, and then compromise the contained data. You can use storage encryption to protect data stored on the device. Linux has several options for encryption at rest. Choose the option that best fits your needs. For Windows, [Windows BitLocker](/windows/security/operating-system-security/data-protection/bitlocker) is the recommended option for encryption at rest. + ## Extensibility With IoT technology driving different types of business transformations, security should evolve in parallel to address emerging scenarios. The Azure IoT Edge security framework starts with a solid foundation on which it builds in extensibility into different dimensions to include: |
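For the encryption-at-rest addition recorded above, which points to BitLocker on Windows, here's a minimal hedged sketch using the built-in BitLocker PowerShell module; the drive letter and encryption method are assumptions, and Linux devices would use a distribution-specific option instead.

```PowerShell
# Sketch: enable BitLocker on the OS volume of a Windows-based IoT Edge device.
# Run in an elevated session on the device; 'C:' and XtsAes256 are assumptions for illustration.
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -UsedSpaceOnly -TpmProtector

# Check encryption progress and protection status.
Get-BitLockerVolume -MountPoint 'C:' | Select-Object VolumeStatus, ProtectionStatus, EncryptionPercentage
```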
iot-hub | Iot Hub Devguide Messages D2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md | In addition to device telemetry, message routing also enables sending non-teleme * Device job lifecycle events * Digital twin change events * Device connection state events-* MQTT broker messages For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected. To learn how to create message routes, see: * [Create and delete routes and endpoints by using the Azure portal](./how-to-routing-portal.md) * [Create and delete routes and endpoints by using the Azure CLI](./how-to-routing-azure-cli.md)++ |
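The routing row above lists the non-telemetry sources that message routing supports. As a hedged sketch (the hub and resource group names are placeholders, and the parameter names reflect the Az.IotHub module as commonly documented, so verify them against your module version), a route for device lifecycle events to the built-in endpoint might be created like this:

```PowerShell
# Sketch: route device lifecycle events to the built-in 'events' endpoint (Az.IotHub module).
Add-AzIotHubRoute `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'MyIotHub' `
    -RouteName 'DeviceLifecycleRoute' `
    -Source DeviceLifecycleEvents `
    -EndpointName 'events' `
    -Enabled
```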
key-vault | How To Configure Key Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md | Using the Azure Policy service, you can govern the key lifecycle and ensure that - Define the scope of the policy by choosing the subscription and resource group over which the policy will be enforced. Select it by clicking the three-dot button on the **Scope** field. - Select the name of the policy definition: "[Keys should have a rotation policy ensuring that their rotation is scheduled within the specified number of days after creation. ](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8cf8476-a2ec-4916-896e-992351803c44)"- - Go to the **Parameters** tab at the top of the page and define the desired effect of the policy (Audit, or Disabled). + - Go to the **Parameters** tab at the top of the page. + - Set **The maximum days to rotate** parameter to the desired number of days, for example, 730. + - Define the desired effect of the policy (Audit, or Disabled). 1. Fill out any additional fields. Navigate the tabs by clicking the **Previous** and **Next** buttons at the bottom of the page. 1. Select **Review + create** 1. Select **Create** |
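To complement the portal steps recorded above, here's a hypothetical Az PowerShell sketch of the same policy assignment. The policy parameter names (`maximumDaysToRotate`, `effect`) are assumptions and should be checked against the definition's parameters before use; the scope is a placeholder.

```PowerShell
# Hypothetical sketch: assign the built-in key rotation policy definition at resource group scope.
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/d8cf8476-a2ec-4916-896e-992351803c44'
$scope      = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>'

New-AzPolicyAssignment `
    -Name 'audit-key-rotation-policy' `
    -Scope $scope `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ maximumDaysToRotate = 730; effect = 'Audit' }   # parameter names assumed
```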
key-vault | Javascript Developer Guide Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-get-started.md | if (key?.name) { ## Next steps -* [Create a key](javascript-developer-guide-create-update-rotate-key.md) +* [Create a key](javascript-developer-guide-create-update-rotate-key.md) |
kinect-dk | About Azure Kinect Dk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/about-azure-kinect-dk.md | The Azure Kinect DK development environment consists of the following multiple S - Sensor SDK for low-level sensor and device access. - Body Tracking SDK for tracking bodies in 3D.-- Speech Cognitive Services SDK for enabling microphone access and Azure cloud-based speech services.+- Azure AI Speech SDK for enabling microphone access and Azure cloud-based speech services. In addition, Cognitive Vision services can be used with the device RGB camera. The following body-tracking features are available on the accompanying SDK: - Body Tracker has a viewer tool to track bodies in 3D. -## Speech Cognitive services SDK +<a name='speech-cognitive-services-sdk'></a> ++## Azure AI Speech SDK The Speech SDK enables Azure-connected speech services. The following [Azure Cognitive Vision Services](https://azure.microsoft.com/serv - [Content moderator](https://azure.microsoft.com/services/cognitive-services/content-moderator/) - [Custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/) -Services evolve and improve constantly, so remember to check regularly for new or additional [Cognitive services](https://azure.microsoft.com/services/cognitive-services/) to improve your application. For an early look on emerging new services, check out the [Cognitive services labs](https://labs.cognitive.microsoft.com/). +Services evolve and improve constantly, so remember to check regularly for new or additional [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) to improve your application. For an early look on emerging new services, check out the [Azure AI services labs](https://labs.cognitive.microsoft.com/). ## Azure Kinect hardware requirements The Azure Kinect DK integrates Microsoft's latest sensor technology into single You now have an overview of Azure Kinect DK. The next step is to dive in and set it up! > [!div class="nextstepaction"]->[Quickstart: Set up Azure Kinect DK](set-up-azure-kinect-dk.md) +>[Quickstart: Set up Azure Kinect DK](set-up-azure-kinect-dk.md) |
kinect-dk | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/support.md | For quick and reliable answers on your technical product questions from Microsof ### Development Azure Kinect on Azure -Azure subscribers can create and manage support requests in the Azure portal. One-on-one development support for Body Tracking, Sensor SDK, Speech device SDK, or Azure Cognitive Services is available for Azure subscribers with an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with their subscription. +Azure subscribers can create and manage support requests in the Azure portal. One-on-one development support for Body Tracking, Sensor SDK, Speech device SDK, or Azure AI services is available for Azure subscribers with an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with their subscription. - Have an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with your Azure subscription? Sign in to the [Azure portal](https://portal.azure.com) to submit an incident. - Need an Azure Subscription? [Azure subscription options](https://azure.microsoft.com/pricing/purchase-options/) will provide more information about different options. |
kinect-dk | Windows Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/windows-comparison.md | The Azure Kinect SDK feature set is different from Kinect for Windows v2, as det | Body Tracking | BodyFrame | Body Tracking SDK | | | BodyIndexFrame | Body Tracking SDK | | Coordinate Mapping|CoordinateMapper| [Sensor SDK - Image transformations](use-image-transformation.md) |-|Face Tracking | FaceFrame | [Cognitive -| Speech Recognition | N/A | [Cognitive +|Face Tracking | FaceFrame | [Azure AI +| Speech Recognition | N/A | [Azure AI Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/) | ## Next steps |
lab-services | How To Use Shared Image Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-shared-image-gallery.md | Here are the couple of scenarios supported by this feature: - Image must be replicated to the same region as the lab plan. ## Save an image to a compute gallery--After a compute gallery is attached, an educator can save an image to the compute gallery so that it can be reused by other educators. +> [!IMPORTANT] +> Images can only be saved from labs that were created in the same region as their lab plan. 1. On the **Template** page for the lab, select **Export to Azure Compute Gallery** on the toolbar. To learn about how to set up a compute gallery by attaching and detaching it to To explore other options for bringing custom images to compute gallery outside of the context of a lab, see [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md). -For more information about compute galleries in general, see [Azure Compute Gallery overview](../virtual-machines/shared-image-galleries.md). +For more information about compute galleries in general, see [Azure Compute Gallery overview](../virtual-machines/shared-image-galleries.md). |
lighthouse | Onboard Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-customer.md | To onboard a customer's tenant, it must have an active Azure subscription. When - The tenant ID of the customer's tenant (which will have resources managed by the service provider). - The subscription IDs for each specific subscription in the customer's tenant that will be managed by the service provider (or that contains the resource group(s) that will be managed by the service provider). -If you don't know the ID for a tenant, you can [retrieve it by using the Azure portal, Azure PowerShell, or Azure CLI](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md). +If you don't know the ID for a tenant, you can [retrieve it by using the Azure portal, Azure PowerShell, or Azure CLI](/azure/active-directory/fundamentals/how-to-find-tenant). If you [create your template in the Azure portal](#create-your-template-in-the-azure-portal), your tenant ID is provided automatically. You don't need to know the customer's tenant or subscription details in order to create your template in the Azure portal. However, if you plan to onboard one or more resource groups in the customer's tenant (rather than the entire subscription), you'll need to know the names of each resource group. |
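Since the onboarding row above depends on knowing the customer's tenant ID and subscription IDs, a quick Az PowerShell way to collect them (run in the customer's tenant context) is:

```PowerShell
# Gather the IDs needed for onboarding (Az.Accounts cmdlets).
Connect-AzAccount

# Tenant ID of the signed-in context.
(Get-AzContext).Tenant.Id

# Subscription IDs available in this tenant.
Get-AzSubscription | Select-Object Name, Id, TenantId
```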
load-balancer | Create Custom Http Health Probe Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/create-custom-http-health-probe-howto.md | description: Learn to create a custom HTTP/HTTPS health probe for Azure Load Bal + Last updated 05/22/2023 When no longer needed, delete the resource group, load balancer, and all related > [!div class="nextstepaction"] > [Manage health probes for Azure Load Balancer using the Azure portal](manage-probes-how-to.md)- |
load-testing | Resource Supported Azure Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md | This section lists the Azure resource types that Azure Load Testing supports for * Azure Application Insights * Azure Batch Service * Azure Cache for Redis-* Azure Cognitive Services +* Azure AI services * Azure Container Apps * Azure Container Instances * Azure Cosmos DB |
logic-apps | Logic Apps Examples And Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-examples-and-scenarios.md | Last updated 03/07/2023 # Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps -[Azure Logic Apps](logic-apps-overview.md) helps you orchestrate and integrate different services by providing hundreds of prebuilt and ready-to-use connectors, ranging from SQL Server and SAP to Azure Cognitive Services. Azure Logic Apps is "serverless", so you don't have to worry about scale or instances. All you have to do is define a workflow with a trigger and the actions that the workflow performs. The underlying platform handles scale, availability, and performance. Azure Logic Apps is especially useful for use cases and scenarios where you need to coordinate actions across multiple systems and services. +[Azure Logic Apps](logic-apps-overview.md) helps you orchestrate and integrate different services by providing hundreds of prebuilt and ready-to-use connectors, ranging from SQL Server and SAP to Azure AI services. Azure Logic Apps is "serverless", so you don't have to worry about scale or instances. All you have to do is define a workflow with a trigger and the actions that the workflow performs. The underlying platform handles scale, availability, and performance. Azure Logic Apps is especially useful for use cases and scenarios where you need to coordinate actions across multiple systems and services. To help you learn about the capabilities and patterns that Azure Logic Apps supports, this guide describes common starting points, examples, and scenarios. Azure Logic Apps integrates with many services, such as Azure Functions, Azure A * [Call Azure Functions from Azure Logic Apps](../logic-apps/logic-apps-azure-functions.md) * [Tutorial: Call or trigger logic app workflows by using Azure Functions and Azure Service Bus](../logic-apps/logic-apps-scenario-function-sb-trigger.md) * [Tutorial: Create a streaming customer insights dashboard with Azure Logic Apps and Azure Functions](../logic-apps/logic-apps-scenario-social-serverless.md)-* [Tutorial: Create a function that integrates with Azure Logic Apps and Azure Cognitive Services to analyze Twitter post sentiment](../azure-functions/functions-twitter-email.md) +* [Tutorial: Create a function that integrates with Azure Logic Apps and Azure AI services to analyze Twitter post sentiment](../azure-functions/functions-twitter-email.md) * [Tutorial: Build an AI-powered social dashboard by using Power BI and Azure Logic Apps](/shows/) * [Tutorial: Monitor virtual machine changes by using Azure Event Grid and Logic Apps](../event-grid/monitor-virtual-machine-changes-logic-app.md) * [Tutorial: IoT remote monitoring and notifications with Azure Logic Apps connecting your IoT hub and mailbox](../iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md) |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | By default, the HTTP action and APIConnection actions follow the [standard async | Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <p><p>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | |||||| +<a name="content-storage-size-limits"></a> ++### Content storage limits ++| Name | Multi-tenant | Single-tenant | Notes | +||--||-| +| Request trigger (inbound) - Content storage limit per 5-minute rolling interval per workflow | 3145728 KB | None | This limit applies only to the storage content size for inbound requests received by the Request trigger. | + <a name="message-size-limits"></a> ### Messages |
logic-apps | Logic Apps Scenario Social Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-social-serverless.md | so that you can better understand the sentiments expressed. ## Analyze tweet text To detect the sentiment behind some text, -you can use [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/). +you can use [Azure AI services](https://azure.microsoft.com/services/cognitive-services/). 1. In workflow designer, under the trigger, choose **New step**. you can use [Azure Cognitive Services](https://azure.microsoft.com/services/cogn 3. Select the **Detect Sentiment** action. -4. If prompted, provide a valid Cognitive Services +4. If prompted, provide a valid Azure AI services key for the Text Analytics service. 5. Under **Request Body**, select the **Tweet Text** check the [Azure quickstart template repository](https://github.com/Azure/azure- <!-- Image References --> [1]: ./media/logic-apps-scenario-social-serverless/twitter.png-[2]: ./media/logic-apps-scenario-social-serverless/function.png +[2]: ./media/logic-apps-scenario-social-serverless/function.png |
machine-learning | Concept Retrieval Augmented Generation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-retrieval-augmented-generation.md | -# Retrieval Augmented Generation using Azure Machine Learning prompt flow +# Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview) + Retrieval Augmented Generation (RAG) is a feature that enables an LLM to utilize your own data for generating responses. Traditionally, a base model is trained with point-in-time data to ensure its eff RAG allows businesses to achieve customized solutions while maintaining data relevance and optimizing costs. By adopting RAG, companies can use the reasoning capabilities of LLMs, utilizing their existing models to process and generate responses based on new data. RAG facilitates periodic data updates without the need for fine-tuning, thereby streamlining the integration of LLMs into businesses. Benefits of adopting RAG in your LLMs:-Adds a fact checking component on your existing models -Train your model on up to date data without needed fine-tuning -Train on your business specific data +* Adds a fact checking component on your existing models +* Train your model on up to date data without needed fine-tuning +* Train on your business specific data Drawbacks without RAG:-Models may return more incorrect knowledge -Data is trained on a broader range of data. More intensive training resources are required to fine-tune your model +* Models may return more incorrect knowledge +* Data is trained on a broader range of data. More intensive training resources are required to fine-tune your model ## Technical overview of using RAG on Large Language Models (LLMs) -RAG is a feature that enables you to harness the power of LLMs with your own data. Enabling an LLM to access custom data involves the following steps. Firstly, the large data should be chunked into manageable pieces. Secondly, the chunks need to be converted into a searchable format. Thirdly, the converted data should be stored in a location that allows efficient access. Additionally, it's important to store relevant metadata for citations or references when the LLM provides responses. +RAG is a feature that enables you to harness the power of LLMs with your own data. Enabling an LLM to access custom data involves the following steps. First, the large data should be chunked into manageable pieces. Second, the chunks need to be converted into a searchable format. Third, the converted data should be stored in a location that allows efficient access. Additionally, it's important to store relevant metadata for citations or references when the LLM provides responses. :::image type="content" source="./media/concept-retrieval-augmented-generation/retrieval-augmented-generation-walkthrough.png" alt-text="Screenshot of a diagram of the technical overview of an LLM walking through rag steps." lightbox="./media/concept-retrieval-augmented-generation/retrieval-augmented-generation-walkthrough.png"::: Let us look at the diagram in more detail. * Data chunking: The data in your source needs to be converted to plain text. For example, word documents or PDFs need to be cracked open and converted to text. The text is then chunked into smaller pieces. -* Converting the text to vectors: called embeddings2. Vectors are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts. 
+* Converting the text to vectors: called embeddings. Vectors are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts. * Links between source data and embeddings: this information is stored as metadata on the chunks created which are then used to assist the LLMs to generate citations while generating responses. Let us look at the diagram in more detail. RAG in Azure Machine Learning is enabled by integration with Azure OpenAI Service, with support for Azure Cognitive Search, and OSS offerings tools and frameworks such as LangChain. -To implement RAG, a few key requirements must be met. Firstly, data should be formatted in a manner that allows efficient searchability before sending it to the LLM, which ultimately reduces token consumption. To ensure the effectiveness of RAG, it's also important to regularly update your data on a periodic basis. Furthermore, having the capability to evaluate the output from the LLM using your data enables you to measure the efficacy of your techniques. Azure Machine Learning not only allows you to get started easily on these aspects, but also enables you to improve and productionize RAG. Azure Machine Learning offers: +To implement RAG, a few key requirements must be met. First, data should be formatted in a manner that allows efficient searchability before sending it to the LLM, which ultimately reduces token consumption. To ensure the effectiveness of RAG, it's also important to regularly update your data on a periodic basis. Furthermore, having the capability to evaluate the output from the LLM using your data enables you to measure the efficacy of your techniques. Azure Machine Learning not only allows you to get started easily on these aspects, but also enables you to improve and productionize RAG. Azure Machine Learning offers: * Samples for starting RAG-based Q&A scenarios. * Wizard-based UI experience to create and manage data and incorporate it into prompt flows. |
machine-learning | Concept Vector Stores | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vector-stores.md | -# Vector stores in Azure Machine Learning +# Vector stores in Azure Machine Learning (preview) [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] |
machine-learning | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md | Azure portal users will always find the latest image available for provisioning See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. +## July 26, 2023 ++New DSVM offering for [Data Science VM – Windows 2022 (Preview)](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview) is currently live in the marketplace. ++Version `23.06.25` ++Main changes: ++- SDK `1.51.0` + ## April 26, 2023 [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) |
machine-learning | How To Import Data Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md | Not available. -## Import from an external database as a table data asset +## Import from an external database as a mltable data asset > [!NOTE] > The external databases can have Snowflake, Azure SQL, etc. formats. ml_client.data.import_data(data_import=data_import) A new panel opens, where you can define a **Recurrence** schedule, or a **Cron** schedule. This screenshot shows the panel for a **Recurrence** schedule: - :::image type="content" source="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" alt-text="A screenshot that shows selection of the Add schedule button."::: - + :::image type="content" source="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" alt-text="A screenshot that shows selection of the Add recurrence schedule button."::: ++ - **Name**: the unique identifier of the schedule within the workspace. + - **Description**: the schedule description. + - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. + - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. + - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months. + - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. + - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it. + - **Tags**: the selected schedule tags. ++ > [!NOTE] + > **Start** specifies the start date and time with the timezone of the schedule. If start is omitted, the start time equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time. ++ The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. - - **Name**: the unique identifier of the schedule within the workspace. - - **Description**: the schedule description. - - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. - - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. - - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months. - - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. - - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it. - - **Tags**: the selected schedule tags. 
+ :::image type="content" source="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" alt-text="Screenshot that shows all parameters of the data import."::: This screenshot shows the panel for a **Cron** schedule: :::image type="content" source="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" alt-text="Screenshot that shows selection of the Add schedule button."::: --- - **Name**: the unique identifier of the schedule within the workspace. - - **Description**: the schedule description. - - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. - - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. - - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify more flexible and customized recurrence pattern. - - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. - - **End**: the schedule will become inactive after this date. By default, it's NONE, meaning that the schedule will remain active until you manually disable it. - - **Tags**: the selected schedule tags. + - **Name**: the unique identifier of the schedule within the workspace. + - **Description**: the schedule description. + - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. + - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. + - **Recurrence** or **Cron expression**: select cron expression to specify the cron details. -- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:+ - **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields: - `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK` + `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK` - - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year). - - The `expression: "15 16 * * 1"` in the sample above means the 16:15PM on every Monday. - - The next table lists the valid values for each field: + - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year). + - The `expression: "15 16 * * 1"` in the sample above means the 16:15PM on every Monday. + - The next table lists the valid values for each field: - | Field | Range | Comment | - |-|-|--| - | `MINUTES` | 0-59 | - | - | `HOURS` | 0-23 | - | - | `DAYS` | - | Not supported. The value is ignored and treated as `*`. | - | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. | - | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. | + | Field | Range | Comment | + |-|-|--| + | `MINUTES` | 0-59 | - | + | `HOURS` | 0-23 | - | + | `DAYS` | - | Not supported. The value is ignored and treated as `*`. | + | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. | + | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. 
Names of days also accepted. | ++ - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression). - - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression). + > [!IMPORTANT] + > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`. - > [!IMPORTANT] - > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`. + - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. + - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it. + - **Tags**: the selected schedule tags. -- (Optional) `start_time` specifies the start date and time with the timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts from 10:15:00AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time.+ > [!NOTE] + > **Start** specifies the start date and time with the timezone of the schedule. If start is omitted, the start time equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time. -The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. + The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. - :::image type="content" source="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" alt-text="Screenshot that shows all parameters of the data import."::: + :::image type="content" source="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" alt-text="Screenshot that shows all parameters of the cron data import."::: ml_client.data.import_data(data_import=data_import) :::image type="content" source="media/how-to-import-data-assets/create-data-import-add-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-add-schedule.png" alt-text="Screenshot showing selection of the Add schedule button."::: - A new panel opens, where you can define a **Recurrence** schedule, or a **Cron** schedule. This screenshot shows the panel for a **Recurrence** schedule: +1. A new panel opens, where you can define a **Recurrence** schedule, or a **Cron** schedule. 
This screenshot shows the panel for a **Recurrence** schedule: :::image type="content" source="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" alt-text="A screenshot showing selection of the Add schedule button.":::- - - **Name**: the unique identifier of the schedule within the workspace. - - **Description**: the schedule description. - - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. - - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. - - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months. - - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. - - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will remain active until you manually disable it. - - **Tags**: the selected schedule tags. - + - **Name**: the unique identifier of the schedule within the workspace. + - **Description**: the schedule description. + - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. + - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. + - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months. + - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. + - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it. + - **Tags**: the selected schedule tags. ++ > [!NOTE] + > **Start** specifies the start date and time with the timezone of the schedule. If start is omitted, the start time equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time. ++1. As shown in the next screenshot, review your choices at the last screen of this process, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens if you'd like to change your choices of values. ++ :::image type="content" source="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" lightbox="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" alt-text="Screenshot showing details of the data source to output."::: ++1. The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. 
++ :::image type="content" source="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" alt-text="Screenshot showing all parameters of the data import."::: + This screenshot shows the panel for a **Cron** schedule:- - :::image type="content" source="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" alt-text="Screenshot showing the selection of the Add schedule button."::: -+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" alt-text="Screenshot showing the selection of the Add schedule button."::: - - **Name**: the unique identifier of the schedule within the workspace. - - **Description**: the schedule description. - - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. - - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. - - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify more flexible and customized recurrence pattern. - - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. - - **End**: the schedule will become inactive after this date. By default, it's NONE, meaning that the schedule will remain active until you manually disable it. - - **Tags**: the selected schedule tags. + - **Name**: the unique identifier of the schedule within the workspace. + - **Description**: the schedule description. + - **Trigger**: the recurrence pattern of the schedule, which includes the following properties. + - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. + - **Recurrence** or **Cron expression**: select cron expression to specify the cron details. -- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:+ - **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields: - `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK` + `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK` - - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year). - - The `expression: "15 16 * * 1"` in the sample above means the 16:15PM on every Monday. - - The next table lists the valid values for each field: + - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year). + - The `expression: "15 16 * * 1"` in the sample above means the 16:15PM on every Monday. + - The next table lists the valid values for each field: - | Field | Range | Comment | - |-|-|--| - | `MINUTES` | 0-59 | - | - | `HOURS` | 0-23 | - | - | `DAYS` | - | Not supported. The value is ignored and treated as `*`. | - | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. | - | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. 
| + | Field | Range | Comment | + |-|-|--| + | `MINUTES` | 0-59 | - | + | `HOURS` | 0-23 | - | + | `DAYS` | - | Not supported. The value is ignored and treated as `*`. | + | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. | + | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. | - - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression). + - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression). - > [!IMPORTANT] - > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`. + > [!IMPORTANT] + > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`. -- (Optional) `start_time` specifies the start date and time with the timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts from 10:15:00AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time.--1. As shown in the next screenshot, review your choices at the last screen of this process, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens if you'd like to change your choices of values. + - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. + - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it. + - **Tags**: the selected schedule tags. - :::image type="content" source="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" lightbox="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" alt-text="Screenshot showing details of the data source to output."::: + > [!NOTE] + > **Start** specifies the start date and time with the timezone of the schedule. If start is omitted, the start time equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time. -1. The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. + The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values. - :::image type="content" source="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" alt-text="Screenshot showing all parameters of the data import."::: + :::image type="content" source="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" alt-text="Screenshot showing all parameters of the S3 cron data import."::: |
machine-learning | Python Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/python-tool.md | description: The Python Tool empowers users to offer customized code snippets as + |
machine-learning | Serp Api Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/serp-api-tool.md | description: The SerpAPI API is a Python tool that provides a wrapper to the Ser + The json representation from serpapi query. | Engine | Return Type | Output | |-|-|-| | Google | json | [Sample](https://serpapi.com/search-api#api-examples) |-| Bing | json | [Sample](https://serpapi.com/bing-search-api) | +| Bing | json | [Sample](https://serpapi.com/bing-search-api) | |
machine-learning | Tutorial Explore Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-explore-data.md | Data asset creation also creates a *reference* to the data source location, alon The next notebook cell creates the data asset. The code sample uploads the raw data file to the designated cloud storage resource. -Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. This code uses time to generate a unique version, each time the cell is run. +Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. In this code, we're using the "initial" for the first read of the data. If that version already exists, we'll skip creating it again. -You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there. In this tutorial, we want to refer to specific version numbers, so we create a version number instead. +You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there. ++In this tutorial, we use the name "initial" as the first version. The [Create production machine learning pipelines](tutorial-pipeline-python-sdk.md) tutorial will also use this version of the data, so here we are using a value that you'll see again in that tutorial. ```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes-import time # update the 'my_path' variable to match the location of where you downloaded the data on your # local filesystem my_path = "./data/default_of_credit_card_clients.csv"-# set the version number of the data asset to the current UTC time -v1 = time.strftime("%Y.%m.%d.%H%M%S", time.gmtime()) +# set the version number of the data asset +v1 = "initial" my_data = Data( name="credit-card", my_data = Data( type=AssetTypes.URI_FILE, ) -# create data asset -ml_client.data.create_or_update(my_data) --print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}") +## create data asset if it doesn't already exist: +try: + data_asset = ml_client.data.get(name="credit-card", version=v1) + print( + f"Data asset already exists. Name: {my_data.name}, version: {my_data.version}" + ) +except: + ml_client.data.create_or_update(my_data) + print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}") ``` You can see the uploaded data by selecting **Data** on the left. You'll see the data is uploaded and a data asset is created: This table shows the structure of the data in the original **default_of_credit_c |X18-23 | Explanatory | Amount of previous payment (NT dollar) from April to September 2005. | |Y | Response | Default payment (Yes = 1, No = 0) | -Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage): --> [!NOTE] -> -> This Python code cell sets **name** and **version** values for the data asset it creates. As a result, the code in this cell will fail if executed more than once, without a change to these values. Fixed **name** and **version** values offer a way to pass values that work for specific situations, without concern for auto-generated or randomly-generated values. +Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage). 
For this version, we'll add a time value, so that each time this code is run, a different version number will be created. from azure.ai.ml.constants import AssetTypes import time # Next, create a new *version* of the data asset (the data is automatically uploaded to cloud storage):-v2 = v1 + "_cleaned" +v2 = "cleaned" + time.strftime("%Y.%m.%d.%H%M%S", time.gmtime()) my_path = "./data/cleaned-credit-card.parquet" # Define the data asset, and use tags to make it clear the asset can be used in training |
machine-learning | Tutorial Pipeline Python Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md | The two steps are first data preparation and second training. 1. [!INCLUDE [sign in](includes/prereq-sign-in.md)] +1. Complete the tutorial [Upload, access and explore your data](tutorial-explore-data.md) to create the data asset you need in this tutorial. Make sure you run all the code to create the initial data asset. Explore the data and revise it if you wish, but you'll only need the initial data in this tutorial. + 1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)] * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)] * Or, open **tutorials/get-started-notebooks/pipeline.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)] ml_client = MLClient( resource_group_name="<RESOURCE_GROUP>", workspace_name="<AML_WORKSPACE_NAME>", )+cpu_cluster = None ``` > [!NOTE] > Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (this will happen when creating the `credit_data` data asset, two code cells from here). -## Register data from an external url --If you have been following along with the other tutorials in this series and already registered the data, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`. Then you may skip this section. To learn about data more in depth or if you would rather complete the data tutorial first, see [Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md). --* Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the next section, you consume some data from web url as one example. `Data` assets from other sources can be created as well. -----```python -from azure.ai.ml.entities import Data -from azure.ai.ml.constants import AssetTypes --web_path = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls" --credit_data = Data( - name="creditcard_defaults", - path=web_path, - type=AssetTypes.URI_FILE, - description="Dataset for credit card defaults", - tags={"source_type": "web", "source": "UCI ML Repo"}, - version="1.0.0", -) -``` +## Access the registered data asset -This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the data to your workspace so it becomes reusable across pipelines. +Start by getting the data that you previously registered in the [Upload, access and explore your data](tutorial-explore-data.md) tutorial. +* Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you then see the dataset registration completion message. 
-- ```python-credit_data = ml_client.data.create_or_update(credit_data) -print( - f"Dataset with name {credit_data.name} was registered to workspace, the dataset version is {credit_data.version}" -) +# get a handle of the data asset and print the URI +credit_data = ml_client.data.get(name="credit-card", version="initial") +print(f"Data asset URI: {credit_data.path}") ``` -In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`. --## Create a compute resource to run your pipeline +## Create a compute resource to run your pipeline (Optional) > [!NOTE] > To use [serverless compute (preview)](./how-to-use-serverless-compute.md) to run this pipeline, you can skip this compute creation step and proceed directly to [create a job environment](#create-a-job-environment-for-pipeline-steps).+> To use [serverless compute (preview)](./how-to-use-serverless-compute.md) to run this pipeline, you can skip this compute creation step and proceed directly to [create a job environment](#create-a-job-environment-for-pipeline-steps). Each step of an Azure Machine Learning pipeline can use a different compute resource for running the specific job of that step. It can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark. except Exception: print("Creating a new cpu compute target...") # Let's create the Azure Machine Learning compute object with the intended parameters- # if you run into an out of quota error, change the size to a comparable VM that is available.\ + # if you run into an out of quota error, change the size to a comparable VM that is available. # Learn more on https://azure.microsoft.com/en-us/pricing/details/machine-learning/.- cpu_cluster = AmlCompute( name=cpu_compute_target, # Azure Machine Learning Compute is the on-demand VM service dependencies: - pip: - inference-schema[numpy-support]==1.3.0 - xlrd==2.0.1- - mlflow== 1.26.1 - - azureml-mlflow==1.42.0 + - mlflow== 2.4.1 + - azureml-mlflow==1.51.0 ``` The specification contains some usual packages, that you use in your pipeline (numpy, pip), together with some Azure Machine Learning specific packages (azureml-mlflow). pipeline_job_env = Environment( tags={"scikit-learn": "0.24.2"}, conda_file=os.path.join(dependencies_dir, "conda.yaml"), image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",- version="0.1.0", + version="0.2.0", ) pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env) def main(): print("input data:", args.data) - credit_df = pd.read_excel(args.data, header=1, index_col=0) + credit_df = pd.read_csv(args.data, header=1, index_col=0) mlflow.log_metric("num_samples", credit_df.shape[0]) mlflow.log_metric("num_features", credit_df.shape[1] - 1) First, create the *yaml* file describing the component: ```python-%%writefile {train_src_dir}/train.yaml +%%writefile {train_src_dir}/train.yml # <component> name: train_credit_defaults_model display_name: Train Credit Defaults Model Now create and register the component. 
Registering it allows you to re-use it i # importing the Component Package from azure.ai.ml import load_component -# Loading the component from the yaml file -train_component = load_component(source=os.path.join(train_src_dir, "train.yaml")) +# Loading the component from the yml file +train_component = load_component(source=os.path.join(train_src_dir, "train.yml")) # Now we register the component to the workspace train_component = ml_client.create_or_update(train_component) To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifi Here, we used *input data*, *split ratio* and *registered model name* as input variables. We then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property. -> [!NOTE] -> To use [serverless compute (preview)](./how-to-use-serverless-compute.md), replace `compute=cpu_compute_target` with `compute="serverless"` in this code. --```pythons +```python # the dsl decorator tells the sdk that we are defining an Azure Machine Learning pipeline from azure.ai.ml import dsl, Input, Output @dsl.pipeline(- compute=cpu_compute_target, # to use serverless compute, change this to: compute="serverless" + compute=cpu_compute_target + if (cpu_cluster) + else "serverless", # "serverless" value runs pipeline on serverless compute description="E2E data_perp-train pipeline", ) def credit_defaults_pipeline( |
migrate | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md | +- Discover Azure Migrate from Operations Manager console: Operations Manager 2019 UR3 and later allows you to discover Azure Migrate from console. You can now generate a complete inventory of your on-premises environment without appliance. This can be used in Azure Migrate to assess machines at scale. [Learn more](https://support.microsoft.com/topic/discover-azure-migrate-for-operations-manager-04b33766-f824-4e99-9065-3109411ede63). - Public Preview: Upgrade your Windows OS during Migration using the Migration and modernization tool in your VMware environment. [Learn more](how-to-upgrade-windows.md). ## Update (June 2023) |
mysql | Concepts Storage Iops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-storage-iops.md | Title: Azure Database for MySQL - Flexible Server storage cops + Title: Azure Database for MySQL - Flexible Server storage iops description: This article describes the storage IOPS in Azure Database for MySQL - Flexible Server. |
mysql | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md | This article summarizes new releases and features in Azure Database for MySQL - - **Autoscale IOPS in Azure Database for MySQL - Flexible Server (General Availability)** - You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can now enjoy worry free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPs up or down automatically depending on workload needs. With this feature, you pay only for the IO you use and no longer need to provision and pay for resources they aren't fully using, saving time and money. The autoscale IOPS feature eliminates the administration required to provide the best performance for Azure Database for MySQL customers at the lowest cost. [Learn more](./concepts-service-tiers-storage.md#autoscale-iops) + You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can now enjoy worry free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPs up or down automatically depending on workload needs. With this feature, you pay only for the IO you use and no longer need to provision and pay for resources they aren't fully using, saving time and money. The autoscale IOPS feature eliminates the administration required to provide the best performance for Azure Database for MySQL customers at the lowest cost. [Learn more](./concepts-storage-iops.md#autoscale-iops) ## June 2023 |
mysql | Migrate Single Flexible In Place Auto Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md | Last updated 07/10/2023 -- - mvc - - devx-track-azurecli - - mode-api + # In-place automigration from Azure Database for MySQL – Single Server to Flexible Server |
mysql | Migrate Single Flexible Mysql Import Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md | az account set --subscription <subscription id> - MySQL Import to an existing Azure MySQL Flexible Server isn't supported. The CLI command initiates the import of a new Azure MySQL Flexible Server. - If the flexible target server is provisioned as non-HA (High Availability disabled) when updating the CLI command parameters, it can later be switched to Same-Zone HA but not Zone-Redundant HA. - MySQL Import doesn't currently support Azure Database for MySQL Single Servers with Customer managed key (CMK).+- MySQL Imposer doesn't currently support Azure Database for MySQL Single Servers with Infrastructure Double Encryption. - Only instance-level import is supported. No option to import selected databases within an instance is provided. - Below items should be copied from source to target by the user post MySQL Import operation: - Server parameters iops | 500 | Number of IOPS to be allocated for the target Azure Database for My ## How long does MySQL Import take to migrate my Single Server instance? -Below is the benchmarked performance based on storage size : +Below is the benchmarked performance based on storage size. | Single Server Storage Size | MySQL Import time |- | - |:-:| - | 1 GiB | 0 min 23 secs | - | 10 GiB | 4 min 24 secs | - | 100 GiB | 10 min 29 secs | - | 500 GiB | 13 min 15 secs | - | 1 TB | 22 min 56 secs | - | 10 TB | 2 hrs 5 min 30 secs | + | - |:-:| + | 1 GiB | 0 min 23 secs | + | 10 GiB | 4 min 24 secs | + | 100 GiB | 10 min 29 secs | + | 500 GiB | 13 min 15 secs | + | 1 TB | 22 min 56 secs | + | 10 TB | 2 hrs 5 min 30 secs | + From the table above, as the storage size increases, the time required for data copying also increases, almost in a linear relationship. However, it's important to note that copy speed can be significantly impacted by network fluctuations. Therefore, the data provided here should be taken as a reference only. -Below is the benchmarked performance based on varying number of tables for 10 GiB storage size: +Below is the benchmarked performance based on varying number of tables for 10 GiB storage size. | Number of tables in Single Server instance | MySQL Import time |- | - |:-:| - | 100 | 4 min 24 secs | - | 200 | 4 min 40 secs | - | 800 | 4 min 52 secs | - | 14,400 | 17 min 41 secs | - | 28,800 | 19 min 18 secs | - | 38,400 | 22 min 50 secs | + | - |:-:| + | 100 | 4 min 24 secs | + | 200 | 4 min 40 secs | + | 800 | 4 min 52 secs | + | 14,400 | 17 min 41 secs | + | 28,800 | 19 min 18 secs | + | 38,400 | 22 min 50 secs | + As the number of files increases, each file/table in the database may become very small. This will result in a consistent amount of data being transferred, but there will be more frequent file-related operations, which may impact the performance of Mysql Import. ## Post-import steps |
network-watcher | View Network Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md | |
notification-hubs | Create Notification Hub Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md | In this quickstart, you create a notification hub in the Azure portal. The first In this section, you create a namespace and a hub in the namespace. ## Create a notification hub in an existing namespace In this section, you create a notification hub in an existing namespace. 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **All services** on the left menu, search for **Notification Hub**, select **star** (`*`) next to **Notification Hub Namespaces** to add it to the **FAVORITES** section on the left menu. Select **Notification Hub Namespaces**. +1. Select **All services** on the left menu. + ![A screenshot showing select All Services for an existing namespace for a new hub.](./media/create-notification-hub-portal/select-all-services.png) - ![Azure portal - select Notification Hub Namespaces](./media/create-notification-hub-portal/select-notification-hub-namespaces-all-services.png) -3. On the **Notification Hub Namespaces** page, select your namespace from the list. +1. On the **Notification Hubs** page, select **Create** on the toolbar. - ![Select your namespace from the list](./media/create-notification-hub-portal/select-namespace.png) -4. On the **Notification Hub Namespace** page, select **Add Hub** on the toolbar. + ![A screenshot showing how to create a new notification hub in a new hub.](./media/create-notification-hub-portal/create-toolbar-button.png) - ![Notification Hub Namespaces - Add Hub button](./media/create-notification-hub-portal/add-hub-button.png) -5. On the **New Notification Hub** page, enter a name for the notification hub, and select **OK**. +1. In the **Basics** tab on the **Notification Hub** page, do the following steps: - ![New Notification Hub page -> enter a name for your hub](./media/create-notification-hub-portal/new-notification-hub-page.png) -6. Select **Notifications** (Bell icon) at the top to see the status of the deployment of the new hub. Select **X** in the right-corner to close the notification window. + 1. In **Subscription**, select the name of the Azure subscription you want to use, and then select an existing resource group, or create a new one. + 1. Choose **Select existing** and select your namespace from the drop-down list box. +A namespace contains one or more notification hubs, so type a name for the hub in **Notification Hub Details**. - ![Deployment notification](./media/create-notification-hub-portal/deployment-notification.png) -7. Refresh the **Notification Hub Namespaces** web page to see your new hub in the list. +1. Select a value from the **Location** drop-down list box. This value specifies the location in which you want to create the hub. - ![Screenshot that shows the Notification Hub Namespaces web page with the new hub in the list.](./media/create-notification-hub-portal/new-hub-in-list.png) -8. Select your **notification hub** to see the home page for your notification hub. + :::image type="content" source="./media/create-notification-hub-portal/notification-hub-details.png" alt-text="Screenshot showing notification hub details for existing namespaces." lightbox="./media/create-notification-hub-portal/notification-hub-details.png"::: - ![Screenshot that shows the home page for your notification hub.](./media/create-notification-hub-portal/hub-home-page.png) +1. 
Review the [**Availability Zones**](./notification-hubs-high-availability.md#zone-redundant-resiliency) option. If you chose a region that has availability zones, the check box is selected by default. Availability Zones is a paid feature, so an additional fee is added to your tier. ++ > [!NOTE] + > Availability zones, and the ability to edit cross region disaster recovery options, are public preview features. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md). ++1. Choose a **Disaster recovery** option: **None**, **Paired recovery region**, or **Flexible recovery region**. If you choose **Paired recovery region**, the failover region is displayed. If you select **Flexible recovery region**, use the drop-down to choose from a list of recovery regions. ++ :::image type="content" source="./media/create-notification-hub-portal/availability-zones.png" alt-text="Screenshot showing availability zone details for existing namespace." lightbox="./media/create-notification-hub-portal/availability-zones.png"::: ++1. Select **Create**. ## Next steps |
operator-nexus | How To Route Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-route-policy.md | |
operator-nexus | Howto Baremetal Bmc Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmc-ssh.md | -When the command runs, it executes on each bare metal machine in the Cluster. If a bare metal machine is unavailable or powered off at the time of command execution, the status of the command reflects which bare metal machines couldn't have the command executed. There's a reconciliation process that runs periodically that retries the command on any bare metal machine that wasn't available at the time of the original command. Multiple commands execute in the order received. +When the command runs, it executes on each bare metal machine in the Cluster with an active Kubernetes node. There's a reconciliation process that runs periodically that retries the command on any bare metal machine that wasn't available at the time of the original command. Also, any bare metal machine that returns to the cluster via an `az networkcloud baremetalmachine actionreimage` or `az networkcloud baremetalmachine actionreplace` command (see [BareMetal functions](./howto-baremetal-functions.md)) sends a signal causing any active keysets to be sent to the machine as soon as it returns to the cluster. Multiple commands execute in the order received. The BMCs support a maximum number of 12 users. Users are defined on a per Cluster basis and applied to each bare metal machine. Attempts to add more than 12 users results in an error. Delete a user before adding another one when 12 already exists. |
operator-nexus | Howto Baremetal Bmm Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md | -When the command runs, it executes on each bare metal machine in the Cluster. If a bare metal machine is unavailable or powered off at the time of command execution, the status of the command reflects which bare metal machines couldn't have the command executed. There's a reconciliation process that runs periodically that retries the command on any bare metal machine that wasn't available at the time of the original command. Multiple commands execute in the order received. +When the command runs, it executes on each bare metal machine in the Cluster with an active Kubernetes node. There's a reconciliation process that runs periodically that retries the command on any bare metal machine that wasn't available at the time of the original command. Also, any bare metal machine that returns to the cluster via an `az networkcloud baremetalmachine actionreimage` or `az networkcloud baremetalmachine actionreplace` command (see [BareMetal functions](./howto-baremetal-functions.md)) sends a signal causing any active keysets to be sent to the machine as soon as it returns to the cluster. Multiple commands execute in the order received. There's no limit to the number of users in a group. > [!CAUTION] > Notes for jump host IP addresses -- The keyset create/update process adds the jump host IP addresses to the IP tables for the Cluster. The process adds these addresses to IP tables and restricts SSH access to only those IPs.+- The keyset create/update process adds the jump host IP addresses to the IP tables for each machine in the Cluster. This restricts SSH access to be allowed only from those jump hosts. - It's important to specify the Cluster facing IP addresses for the jump hosts. These IP addresses may be different than the public facing IP address used to access the jump host. - Once added, users are able to access bare metal machines from any specified jump host IP including a jump host IP defined in another bare metal machine keyset group. - Existing SSH access remains when adding the first bare metal machine keyset. However, the keyset command limits an existing user's SSH access to the specified jump host IPs in the keyset commands.+- Currently, only IPv4 jump host addresses are supported. There is a known issue that IPv4 jump host addresses may be mis-parsed and lost if IPv6 addresses are also specified in the `--jump-hosts-allowed` argument of an `az networkcloud cluster baremetalmachinekeyset` command. Use only IPv4 addresses until IPv6 support is added. ## Prerequisites |
operator-nexus | Howto Configure Isolation Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md | Use this command to enable a management L3 isolation domain: ```azurecli az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable ```- |
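The entry above ends with the command for enabling a management L3 isolation domain. Disabling it presumably uses the same command with the opposite state; this is a sketch based only on the enable example shown, and it assumes `Disable` is an accepted value for `--state`.

```azurecli
# Sketch: mirrors the enable command above; "Disable" is assumed to be the opposite state value.
az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Disable
```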
operator-nexus | Howto Install Cli Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md | Example output: ```output Name Version -- --arcappliance 0.2.31 +arcappliance 0.2.32 monitor-control-service 0.2.0 connectedmachine 0.5.1 connectedk8s 1.3.20 k8s-extension 1.4.2-networkcloud 0.4.0.post94 +networkcloud 1.0.0b2 k8s-configuration 1.7.0-managednetworkfabric 0.1.0.post45 +managednetworkfabric 0.1.0.post49 customlocation 0.1.3 hybridaks 0.2.1-ssh 1.1.6 +ssh 2.0.1 ``` <!-- LINKS - External --> |
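The version table in the entry above is the kind of output produced by listing installed Azure CLI extensions. If your versions are older than those shown, the standard `az extension` commands below are one way to refresh them; the extension names are taken from the table, and preview builds (such as `networkcloud 1.0.0b2`) may need to be installed from a specific version or wheel as described in the linked article.

```azurecli
# List installed extensions and their versions (compare against the table above).
az extension list --output table

# Upgrade (or install) the extensions named in the table above.
az extension add --upgrade --name networkcloud
az extension add --upgrade --name managednetworkfabric
az extension add --upgrade --name ssh
```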
operator-nexus | Howto Run Instance Readiness Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md | |
operator-nexus | Howto Use Vm Console Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-vm-console-service.md | |
postgresql | Concepts Data Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md | Azure Database for PostgreSQL - Flexible Server supports advanced [Data Recovery * The Geo-redundant backup encryption key needs to be created in an Azure Key Vault (AKV) in the region where the Geo-redundant backup is stored * The [Azure Resource Manager (ARM) REST API](../../azure-resource-manager/management/overview.md) version for supporting Geo-redundant backup enabled CMK servers is '2022-11-01-preview'. Therefore, when using [ARM templates](../../azure-resource-manager/templates/overview.md) to automate creation of servers that use both encryption with CMK and geo-redundant backup features, use this ARM API version. * The same [user managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) can't be used to authenticate for the primary database Azure Key Vault (AKV) and the Azure Key Vault (AKV) holding the encryption key for Geo-redundant backup. To maintain regional resiliency, we recommend creating the user managed identity in the same region as the geo-backups. -* As support for Geo-redundant backup with data encryption using CMK is currently in preview, there is currently no Azure CLI support for server creation with both of these features enabled. * If [Read replica database](../flexible-server/concepts-read-replicas.md) is set up to be encrypted with CMK during creation, its encryption key needs to be resident in an Azure Key Vault (AKV) in the region where the Read replica database resides. The [user assigned identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) used to authenticate against this Azure Key Vault (AKV) needs to be created in the same region. ## Limitations |
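The entry above recommends creating the user managed identity for the geo-redundant backup key vault in the same region as the geo-backups. A minimal sketch of that step, assuming hypothetical resource names and `westus` as the geo-backup region, might look like this:

```azurecli
# Hypothetical names; the key point is that --location matches the region where geo-backups are stored.
az identity create \
    --name geo-backup-cmk-identity \
    --resource-group example-rg \
    --location westus
```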
postgresql | How To Restart Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md | Follow these steps to restart your flexible server. initiated. > [!NOTE]-> Using custom RBAC role to restart server please make sure that in addition to Microsoft.DBforPostgreSQL/flexibleServers/restart/action permission this role also has Microsoft.DbforPostgreSQL/servers/read permission granted to it. +> When using a custom RBAC role to restart the server, make sure that in addition to the Microsoft.DBforPostgreSQL/flexibleServers/restart/action permission, the role also has the Microsoft.DBforPostgreSQL/flexibleServers/read permission granted to it. ## Next steps - Learn about [business continuity](./concepts-business-continuity.md) |
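The note above names the two permissions a custom role needs for restarting a flexible server. A minimal sketch of such a role definition, using a hypothetical role name and subscription scope, could look like the following; both Actions come directly from the note.

```azurecli
# Hypothetical role name and scope; the two Actions are the permissions named in the note above.
az role definition create --role-definition '{
  "Name": "PostgreSQL Flexible Server Restart Operator (example)",
  "IsCustom": true,
  "Description": "Can read and restart Azure Database for PostgreSQL flexible servers.",
  "Actions": [
    "Microsoft.DBforPostgreSQL/flexibleServers/read",
    "Microsoft.DBforPostgreSQL/flexibleServers/restart/action"
  ],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}'
```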
postgresql | Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/videos.md | Azure Database for PostgreSQL and Azure Database for MySQL bring together commun >[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T149/player] [Open in Channel 9](/Events/Connect/2017/T149) -Azure Database for PostgreSQL brings together community edition database engine and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to see in action how easy it is to create new experiences like adding Cognitive Services to your apps by virtue of being on Azure. +Azure Database for PostgreSQL brings together community edition database engine and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to see in action how easy it is to create new experiences like adding Azure AI services to your apps by virtue of being on Azure. ## How to get started with the new Azure Database for PostgreSQL service |
private-5g-core | Commission Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md | Run the following commands at the PowerShell prompt, specifying the object ID yo ```powershell Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}--Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f} ``` -Once you've run these commands, you should see an updated option in the local UI – **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image. +Once you've run this command, you should see an updated option in the local UI – **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image. :::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview.png" alt-text="Screenshot of configuration menu, with Kubernetes (Preview) highlighted."::: -Additionally, if you go to the Azure portal and navigate to your **Azure Stack Edge** resource, you should see an **Azure Kubernetes Service** option. You'll set up the Azure Kubernetes Service in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc). +Select the **This Kubernetes cluster is for Azure Private 5G Core or SAP Digital Manufacturing Cloud workloads** checkbox. +++If you go to the Azure portal and navigate to your **Azure Stack Edge** resource, you should see an **Azure Kubernetes Service** option. You'll set up the Azure Kubernetes Service in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc). :::image type="content" source="media/commission-cluster/commission-cluster-ase-resource.png" alt-text="Screenshot of Azure Stack Edge resource in the Azure portal. Azure Kubernetes Service (PREVIEW) is shown under Edge services in the left menu."::: The Azure Private 5G Core private mobile network requires a custom location and > [!TIP] > The commands in this section require the `k8s-extension` and `customlocation` extensions to the Azure CLI tool to be installed. If you do not already have them, a prompt will appear to install these when you run commands that require them. See [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview) for more information on automatic extension installation. -1. Sign in to the Azure CLI using Azure Cloud Shell. +1. Sign in to the Azure CLI using Azure Cloud Shell and select **Bash** from the dropdown menu. 1. Set the following environment variables using the required values for your deployment: ```azurecli- $SUBSCRIPTION_ID=<subscription ID> - $RESOURCE_GROUP_NAME=<resource group name> - $LOCATION=<deployment region, for example eastus> - $CUSTOM_LOCATION=<custom location for the AKS cluster> - $ARC_CLUSTER_RESOURCE_NAME=<resource name> - $TEMP_FILE=./tmpfile + SUBSCRIPTION_ID=<subscription ID> + RESOURCE_GROUP_NAME=<resource group name> + LOCATION=<deployment region, for example eastus> + CUSTOM_LOCATION=<custom location for the AKS cluster> + ARC_CLUSTER_RESOURCE_NAME=<resource name> + TEMP_FILE=./tmpfile ``` 1. 
Prepare your shell environment: The Azure Private 5G Core private mobile network requires a custom location and ``` ```azurecli- az k8s-extension create ` - --name networkfunction-operator ` - --cluster-name "$ARC_CLUSTER_RESOURCE_NAME" ` - --resource-group "$RESOURCE_GROUP_NAME" ` - --cluster-type connectedClusters ` - --extension-type "Microsoft.Azure.HybridNetwork" ` - --auto-upgrade-minor-version "true" ` - --scope cluster ` - --release-namespace azurehybridnetwork ` - --release-train preview ` + az k8s-extension create \ + --name networkfunction-operator \ + --cluster-name "$ARC_CLUSTER_RESOURCE_NAME" \ + --resource-group "$RESOURCE_GROUP_NAME" \ + --cluster-type connectedClusters \ + --extension-type "Microsoft.Azure.HybridNetwork" \ + --auto-upgrade-minor-version "true" \ + --scope cluster \ + --release-namespace azurehybridnetwork \ + --release-train preview \ --config-settings-file $TEMP_FILE ``` 1. Create the Packet Core Monitor Kubernetes extension: ```azurecli- az k8s-extension create ` - --name packet-core-monitor ` - --cluster-name "$ARC_CLUSTER_RESOURCE_NAME" ` - --resource-group "$RESOURCE_GROUP_NAME" ` - --cluster-type connectedClusters ` - --extension-type "Microsoft.Azure.MobileNetwork.PacketCoreMonitor" ` - --release-train stable ` + az k8s-extension create \ + --name packet-core-monitor \ + --cluster-name "$ARC_CLUSTER_RESOURCE_NAME" \ + --resource-group "$RESOURCE_GROUP_NAME" \ + --cluster-type connectedClusters \ + --extension-type "Microsoft.Azure.MobileNetwork.PacketCoreMonitor" \ + --release-train stable \ --auto-upgrade true ``` 1. Create the custom location: ```azurecli- az customlocation create ` - -n "$CUSTOM_LOCATION" ` - -g "$RESOURCE_GROUP_NAME" ` - --location "$LOCATION" ` - --namespace azurehybridnetwork ` - --host-resource-id "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Kubernetes/connectedClusters/$ARC_CLUSTER_RESOURCE_NAME" ` + az customlocation create \ + -n "$CUSTOM_LOCATION" \ + -g "$RESOURCE_GROUP_NAME" \ + --location "$LOCATION" \ + --namespace azurehybridnetwork \ + --host-resource-id "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Kubernetes/connectedClusters/$ARC_CLUSTER_RESOURCE_NAME" \ --cluster-extension-ids "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Kubernetes/connectedClusters/$ARC_CLUSTER_RESOURCE_NAME/providers/Microsoft.KubernetesConfiguration/extensions/networkfunction-operator" ``` |
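Following the `az customlocation create` call at the end of the entry above, it can help to read the custom location back and confirm it provisioned successfully. This sketch reuses the same environment variables and assumes the `customlocation` CLI extension mentioned in the entry is installed.

```azurecli
# Read back the custom location created above; check provisioningState in the output.
az customlocation show \
    --name "$CUSTOM_LOCATION" \
    --resource-group "$RESOURCE_GROUP_NAME"
```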
private-5g-core | Complete Private Mobile Network Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md | You must set these up in addition to the [ports required for Azure Stack Edge (A The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling. -You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-pro-2-system-requirements#networking-port-requirements). +You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-gpu-system-requirements#networking-port-requirements). #### Azure Private 5G Core You must set these up in addition to the [ports required for Azure Stack Edge (A Review and apply the firewall recommendations for the following - [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#url-patterns-for-firewall-rules) - [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud) - [Azure Network Function Manager](../network-function-manager/requirements.md)+- [Azure Stack Edge](../databox-online/azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules) +- [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud) +- [Azure Network Function Manager](../network-function-manager/requirements.md) The following table contains the URL patterns for Azure Private 5G Core's outbound traffic. |
private-5g-core | Deploy Private Mobile Network With Site Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md | Title: Deploy a private mobile network and site - ARM template description: Learn how to deploy a private mobile network and site using an Azure Resource Manager template (ARM template).-+ -+ +tags: azure-resource-manager Last updated 03/23/2022 The following Azure resources are defined in the template. 1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites). - + |Field |Value | ||| |**Subscription** | Select the Azure subscription you want to use to create your private mobile network. | The following Azure resources are defined in the template. |**User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |- |**Data Network Name** | Enter the name of the data network. | + |**Data Network Name** | Enter the name of the data network. | |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.| | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |- |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.| + |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.| 2. Select **Review + create**. 3. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation. The following Azure resources are defined in the template. - A **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site. - A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site. - An **Attached Data Network** resource representing the site's view of the data network.- - A **Service** resource representing the default service. + - A **Service** resource representing the default service. - A **SIM Policy** resource representing the allow-all SIM policy.- - A **SIM Group** resource (if you provisioned any SIMs). + - A **SIM Group** resource (if you provisioned any SIMs). 
:::image type="content" source="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing the resources for a full Azure Private 5G Core deployment." lightbox="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png"::: |
private-5g-core | Deploy Private Mobile Network With Site Command Line | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-command-line.md | az mobile-network sim policy create -g <RESOURCEGROUP> -n <SIMPOLICY> --mobile-n ### Create a SIM -Use `` to create a new **SIM**. The example command uses the following placeholder values, replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site). +Use `az mobile-network sim create` to create a new **SIM**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site). |Placeholder|Value| |-|-| |
private-link | Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md | The following tables list the Private Link services and the regions where they'r |:-|:--|:-|:--| |Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) | |Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) |-| Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../ai-services/cognitive-services-virtual-networks.md#use-private-endpoints) | +| Azure AI services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../ai-services/cognitive-services-virtual-networks.md#use-private-endpoints) | | Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](../search/service-create-private-endpoint.md) | ### Analytics |
private-link | Disable Private Endpoint Network Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md | -By default, network policies are disabled for a subnet in a virtual network. To utilize network policies like User-Defined Routes and Network Security Groups support, network policy support must be enabled for the subnet. This setting is only applicable to private endpoints within the subnet. This setting affects all private endpoints within the subnet. For other resources in the subnet, access is controlled based on security rules in the network security group. +By default, network policies are disabled for a subnet in a virtual network. To use network policies like User-Defined Routes (UDRs) and Network Security Groups support, network policy support must be enabled for the subnet. This setting is only applicable to private endpoints in the subnet, and affects all private endpoints in the subnet. For other resources in the subnet, access is controlled based on security rules in the network security group. Network policies can be enabled either for Network Security Groups only, for User-Defined Routes only, or for both. -If you enable network security policies for User-Defined Routes, the /32 routes that are generated by the private endpoint and propagated to all the subnets in its own VNet and directly peered VNets will be invalidated if you have User-Defined Routing, which is useful if you want all traffic (including traffic addressed to the private endpoint) to go through a firewall, since otherwise the /32 route would bypass any other route. +If you enable network security policies for User-Defined Routes, you can use a custom address prefix equal to or larger than the VNet address space to invalidate the /32 default route propagated by the private endpoint. This can be useful if you want to ensure private endpoint connection requests go through a firewall or Virtual Appliance. Otherwise, the /32 default route would send traffic directly to the private endpoint in accordance with the [longest prefix match algorithm](../virtual-network/virtual-networks-udr-overview.md#how-azure-selects-a-route). -> [!NOTE] -> Unless you configure a UDR, the Private Endpoint Route of /32 will remain active. And for the UDR to work on all private endpoints within the subnet, you need to enable PrivateEndpointNetworkPolicies. +> [!IMPORTANT] +> To invalidate a Private Endpoint route, UDRs must have a prefix equal to or larger than the VNet address space where the Private Endpoint is provisioned. For example, a UDR default route (0.0.0.0/0) doesn't invalidate Private Endpoint routes. Network policies should be enabled in the subnet that hosts the private endpoint. -You can use the following to enable or disable the setting: +Use the following step to enable or disable network policy for private endpoints: * Azure portal * Azure PowerShell $vnet | Set-AzVirtualNetwork # [**CLI**](#tab/network-policy-cli) -Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to enable the policy. The Azure CLI only supports the values `true` or `false`, it does not allow yet to enable the policies selectively only for User-Defined Routes or Network Security Groups: +Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to enable the policy. 
The Azure CLI only supports the values `true` or `false`; it doesn't yet allow enabling the policies selectively for only User-Defined Routes or Network Security Groups: ```azurecli az network vnet subnet update \ |
private-link | Private Endpoint Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md | For Azure services, use the recommended zone names as described in the following | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com | | SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net | | Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |-| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com | +| Azure AI services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com | | Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | {regionName}.privatelink.afs.azure.net | {regionName}.afs.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.com | adf.azure.com | For Azure services, use the recommended zone names as described in the following | Azure Relay (Microsoft.Relay/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net | | Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.us </br> scm.privatelink.azurewebsites.us | azurewebsites.us </br> scm.azurewebsites.us | Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.us <br/> privatelink.adx.monitor.azure.us <br/> privatelink. oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | monitor.azure.us <br/> adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net |-| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us | +| Azure AI services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us | | Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.us | azurehdinsight.us | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net<br/>instances.azureml.us<br/>aznbcontent.net<br/>inference.ml.azure.us | |
private-link | Private Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md | A private-link resource is the destination target of a specified private endpoin | Azure Batch | Microsoft.Batch/batchAccounts | batchAccount, nodeManagement | | Azure Cache for Redis | Microsoft.Cache/Redis | redisCache | | Azure Cache for Redis Enterprise | Microsoft.Cache/redisEnterprise | redisEnterprise |-| Azure Cognitive Services | Microsoft.CognitiveServices/accounts | account | +| Azure AI services | Microsoft.CognitiveServices/accounts | account | | Azure Managed Disks | Microsoft.Compute/diskAccesses | managed disk | | Azure Container Registry | Microsoft.ContainerRegistry/registries | registry | | Azure Kubernetes Service - Kubernetes API | Microsoft.ContainerService/managedClusters | management | |
reliability | Availability Service By Category | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md | As mentioned previously, Azure classifies services into three categories: founda > || > | Azure API for FHIR | > | Azure Analysis Services |-> | Azure Applied AI Services | +> | Azure AI services | > | Azure Automation |-> | Azure Cognitive Services | +> | Azure AI services | > | Azure Data Share | > | Azure Databricks | > | Azure Database for MariaDB | To learn more about preview services that aren't yet in general availability and ## Next steps - [Azure services and regions that support availability zones](availability-zones-service-support.md)- |
reliability | Sovereign Cloud China | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md | Microsoft's goal for Azure in China is to match service availability in Azure. F ### AI + machine learning -This section outlines variations and considerations when using Azure Bot Service, Azure Machine Learning, and Cognitive Services. +This section outlines variations and considerations when using Azure Bot Service, Azure Machine Learning, and Azure AI services. | Product | Unsupported, limited, and/or modified features | Notes | ||--|| |Azure Machine Learning| See [Azure Machine Learning feature availability across Azure in China cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md#azure-china-21vianet). | |-| Cognitive -| Cognitive +| Azure AI Speech| See [Azure AI +| Azure AI Speech|For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../ai-services/translator/sovereign-clouds.md?tabs=china).| ### Azure AD External Identities This section outlines variations and considerations when using Azure Container A ||--|| | Azure Monitor| The Azure Monitor integration is not supported in Azure China | -### Microsoft Cost Management + Billing --This section outlines variations and considerations when using Microsoft Cost Management + Billing features and APIs. -- ### Azure China Commercial Marketplace To learn which commercial marketplace features are available for Azure China Marketplace operated by 21Vianet, as compared to the Azure global commercial marketplace, see [Feature availability for Azure China Commercial Marketplace operated by 21Vianet](/partner-center/marketplace/azure-in-china-feature-availability).+### Microsoft Cost Management + Billing ++This section outlines variations and considerations when using Microsoft Cost Management + Billing features and APIs. #### Azure Retail Rates API for China For IP ranges for Azure in China, download [Azure Datacenter IP Ranges in China | Azure Active Directory (Azure AD) | \*.onmicrosoft.com | \*.partner.onmschina.cn | | Azure AD logon | [https://login.microsoftonline.com](https://login.windows.net/) | [https://login.partner.microsoftonline.cn](https://login.chinacloudapi.cn/) | | Microsoft Graph | [https://graph.microsoft.com](https://graph.microsoft.com/) | [https://microsoftgraph.chinacloudapi.cn](https://microsoftgraph.chinacloudapi.cn/) |-| Azure Cognitive Services | `https://api.projectoxford.ai/face/v1.0` | `https://api.cognitive.azure.cn/face/v1.0` | +| Azure AI services | `https://api.projectoxford.ai/face/v1.0` | `https://api.cognitive.azure.cn/face/v1.0` | | Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> | | Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn | | Azure Container Apps Default Domain | \*.azurecontainerapps.io | No default domain is provided for external environment. The [custom domain](/azure/container-apps/custom-domains-certificates) is required. | One service administrator role is created per Azure account, and is authorized t ### Create a co-administrator Account administrators can create up to 199 co-administrator roles per subscription. This role has the same access privileges as the service administrator, but can't change the association of subscriptions to Azure directories.- |
sap | Integration Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md | Select an area for resources about how to integrate SAP and Azure in that space. | [Microsoft Power Platform](#microsoft-power-platform) | Learn about the available [out-of-the-box SAP applications](/power-automate/sap-integration/solutions) enabling your business users to achieve more with less. | | [SAP Fiori](#sap-fiori) | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. | | [Azure Active Directory (Azure AD)](#azure-ad) | Ensure end-to-end SAP user authentication and authorization with Azure Active Directory. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. |-| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure Cognitive Services and more. | +| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services and more. | | [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. | | [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. | | [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. 
| Also see the following SAP resources: - [Event-driven architectures for SAP ERP with Azure](https://blogs.sap.com/2021/12/09/hey-sap-where-is-my-xbox-an-insight-into-capitalizing-on-event-driven-architectures/) - [Achieve high availability for SAP Cloud Integration (part of SAP Integration Suite) on Azure](https://blogs.sap.com/2021/09/23/black-friday-will-take-your-cpi-instance-offline-unless/)-- [Automate SAP invoice processing using Azure Logic Apps and Cognitive Services](https://blogs.sap.com/2021/02/03/your-sap-on-azure-part-26-automate-invoice-processing-using-azure-logic-apps-and-cognitive-services/)+- [Automate SAP invoice processing using Azure Logic Apps and Azure AI services](https://blogs.sap.com/2021/02/03/your-sap-on-azure-part-26-automate-invoice-processing-using-azure-logic-apps-and-cognitive-services/) ### App development in any language including ABAP and DevOps You can use the following free developer accounts to explore integration scenari - [Identify your SAP data sources - Cloud Adoption Framework](/azure/cloud-adoption-framework/scenarios/sap/sap-lza-identify-sap-data-sources) - [Explore joint reference architectures on the SAP Discovery Center](https://discovery-center.cloud.sap/search/Azure) - [Secure your SAP NetWeaver email needs with Exchange Online](./exchange-online-integration-sap-email-outbound.md)-- [Migrate your legacy SAP middleware to Azure](./expose-sap-process-orchestration-on-azure.md)+- [Migrate your legacy SAP middleware to Azure](./expose-sap-process-orchestration-on-azure.md) |
sap | Rise Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md | SSO against Active Directory (AD) of your Windows domain for ECS/RISE managed SA ### Microsoft Sentinel with SAP RISE -The Microsoft Sentinel solution for SAP applications allows you to monitor, detect, and respond to suspicious activities and guard your critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other clouds, or on-premises infrastructure. +The [SAP RISE certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution for SAP applications allows you to monitor, detect, and respond to suspicious activities and guard your critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other clouds, or on-premises infrastructure. The solution allows you to gain visibility to user activities on SAP RISE/ECS and the SAP business logic layers and apply Sentinel's built-in content. - Use a single console to monitor all your enterprise estate including SAP instances in SAP RISE/ECS on Azure and other clouds, SAP Azure native and on-premises estate |
search | Cognitive Search Common Errors Warnings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md | If your data source has a field with a different data type than the field you're There are two cases under which you may encounter this error message, each of which should be treated differently. Follow the instructions below depending on what skill returned this error for you. -### Built-in Cognitive Service skills +### Built-in Azure AI services skills -Many of the built-in cognitive skills, such as language detection, entity recognition, or OCR, are backed by a Cognitive Service API endpoint. Sometimes there are transient issues with these endpoints and a request will time out. For transient issues, there's no remedy except to wait and try again. As a mitigation, consider setting your indexer to [run on a schedule](search-howto-schedule-indexers.md). Scheduled indexing picks up where it left off. Assuming transient issues are resolved, indexing and cognitive skill processing should be able to continue on the next scheduled run. +Many of the built-in cognitive skills, such as language detection, entity recognition, or OCR, are backed by an Azure AI services API endpoint. Sometimes there are transient issues with these endpoints and a request will time out. For transient issues, there's no remedy except to wait and try again. As a mitigation, consider setting your indexer to [run on a schedule](search-howto-schedule-indexers.md). Scheduled indexing picks up where it left off. Assuming transient issues are resolved, indexing and cognitive skill processing should be able to continue on the next scheduled run. If you continue to see this error on the same document for a built-in cognitive skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance, as this isn't expected. |
search | Samples Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md | Code samples from the Cognitive Search team demonstrate features and workflows. | [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md) | Synonym lists are used for query expansion, providing matchable terms that are external to an index. | | [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. | | [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a Customer Key. |-| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index. -| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. | +| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index. +| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. | | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. | | [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.| The following samples are also published by the Cognitive Search team, but aren' | Samples | Repository | Description | |||-|+| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-search-services) | [azure-search-dotnet-scale](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page. 
| | [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule. | | [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-dat) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and exports a large index. | | [Backup and restore an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/index-backup-restore/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that copies an index from one service to another, and in the process, creates JSON files on your computer with the index schema and documents.| | [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Azure AD and role-based access controls. | | [Search aggregations](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/search-aggregations/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Proof-of-concept source code that demonstrates how to obtain aggregations from a search index and then filter by them. |-| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/multiple-search-services) | [azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page. | | [Power Skills](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/README.md) | [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | Source code for consumable custom skills that you can incorporate in your own solutions. | |
search | Samples Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-java.md | Code samples from the Azure SDK development team demonstrate API usage. You can | [Skillset creation](https://github.com/Azure/azure-sdk-for-jav) that are attached indexers, and that perform AI-based enrichment during indexing. | | [Load documents](https://github.com/Azure/azure-sdk-for-jav) operation. | | [Query syntax](https://github.com/Azure/azure-sdk-for-jav). |+| [Vector search](https://github.com/Azure/azure-sdk-for-jav). | ## Doc samples |
search | Samples Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md | Code samples from the Azure SDK development team demonstrate API usage. You can | [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).| | [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. | | [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |+| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12-bet). | + ### TypeScript samples |
search | Samples Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md | Code samples from the Azure SDK development team demonstrate API usage. You can | [Simple query](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/search/azure-search-documents/samples/sample_simple_query.py) | Demonstrates how to set up a [basic query](search-query-overview.md). | | [Filter query](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/search/azure-search-documents/samples/sample_filter_query.py) | Demonstrates setting up a [filter expression](search-filters.md). | | [Facet query](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/search/azure-search-documents/samples/sample_facet_query.py) | Demonstrates working with [facets](search-faceted-navigation.md). |+| [Vector search](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/search/azure-search-documents/samples/sample_vector_search.py) | Demonstrates how to get embeddings from a description field and then send vector queries against the data. | ## Doc samples Code samples from the Cognitive Search team demonstrate features and workflows. | Samples | Article | ||| | [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Quickstart/v11) | Source code for the Python portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. |-| [search-website](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| +| [search-website-functions-v4](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. | +## Demos ++A demo repo provides proof-of-concept source code for examples or scenarios shown in demonstrations. Demo solutions aren't designed for adaptation by customers. ++| Repository | Description | +||-| +| [**ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search**](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | Python code showing how to use Cognitive Search with the large language models in Azure OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). 
| ++ > [!TIP] > Try the [Samples browser](/samples/browse/?languages=python&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language. |
search | Search Api Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-versions.md | The following table provides links to more recent SDK versions. |-|--|| | [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Active | New client library from the Azure .NET SDK team, initially released July 2020. See the [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Search.Documents_11.3.0/sdk/search/Azure.Search.Documents/CHANGELOG.md) for information about minor releases. | | [Microsoft.Azure.Search 10](https://www.nuget.org/packages/Microsoft.Azure.Search/) | Retired | Released May 2019. This is the last version of the Microsoft.Azure.Search package and it's now deprecated. It's succeeded by Azure.Search.Documents. |-| [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management/management-cognitivesearch) | Active | Targets the Management REST api-version=2020-08-01. | -| [Microsoft.Azure.Management.Search 3.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.Search/3.0.0) | Active | Targets the Management REST api-version=2015-08-19. | +| [Microsoft.Azure.Management.Search 4.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.Search/4.0.0) | Active | Targets the Management REST api-version=2020-08-01. | +| [Microsoft.Azure.Management.Search 3.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.Search/3.0.0) | Retired | Targets the Management REST api-version=2015-08-19. | ## Azure SDK for Java |
search | Search Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md | An indexer is a source-specific crawler that can read metadata and content from ### Step 2 - Skip the "Enrich content" page -The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating the Azure AI services AI algorithms into indexing. +The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating the Azure AI services algorithms into indexing. We'll skip this step for now, and move directly on to **Customize target index**. |
search | Search Howto Move Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-move-across-regions.md | The following links can help you locate more information when completing the ste + [Enable logging](monitor-azure-cognitive-search.md) -<!-- To move your Azure Cognitive Service account from one region to another, you will create an export template to move your subscription(s). After moving your subscription, you will need to move your data and recreate your service. +<!-- To move your Azure AI services account from one region to another, you will create an export template to move your subscription(s). After moving your subscription, you will need to move your data and recreate your service. In this article, you'll learn how to: |
search | Search Sku Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md | Title: Choose a pricing tier + Title: Choose a service tier -description: 'Learn about the pricing tiers (or SKUs) for Azure Cognitive Search. A search service can be provisioned at these tiers: Free, Basic, and Standard. Standard is available in various resource configurations and capacity levels.' +description: 'Learn about the service tiers (or SKUs) for Azure Cognitive Search. A search service can be provisioned at these tiers: Free, Basic, and Standard. Standard is available in various resource configurations and capacity levels.' Previously updated : 07/17/2023 Last updated : 07/27/2023 -# Choose a pricing tier for Azure Cognitive Search +# Choose a service tier for Azure Cognitive Search -Part of [creating a search service](search-create-service-portal.md) means choosing a pricing tier (or SKU) that's fixed for the lifetime of the service. In the portal, tier is specified in the **Select Pricing Tier** page when you create the service. If you're provisioning through PowerShell or Azure CLI instead, the tier is specified through the **`-Sku`** parameter +Part of [creating a search service](search-create-service-portal.md) is choosing a pricing tier (or SKU) that's fixed for the lifetime of the service. In the portal, tier is specified in the **Select Pricing Tier** page when you create the service. If you're provisioning through PowerShell or Azure CLI instead, the tier is specified through the **`-Sku`** parameter The tier you select determines: You can find out more about the various tiers on the [pricing page](https://azur ## Feature availability by tier -Most features are available on all tiers, including the free tier. In a few cases, the tier you choose will impact your ability to implement a feature. The following table describes feature constraints that are related to service tier. +Most features are available on all tiers, including the free tier. In a few cases, the tier determines the availability of a feature. The following table describes the constraints. | Feature | Limitations | ||-| The following example provides an illustration. Assume a hypothetical billing ra This billing model is based on the concept of applying the billing rate to the number of *search units* (SU) used by a search service. All services are initially provisioned at one SU, but you can increase the SUs by adding either partitions or replicas to handle larger workloads. For more information, see [How to estimate costs of a search service](search-sku-manage-costs.md). +## Tier upgrade or downgrade ++There is no built-in support to upgrade or downgrade tiers. If you want to switch to a different tier, the approach is: +++ Create a new search service at the new tier.+++ Deploy your search content onto the new service. [Follow this checklist](search-howto-move-across-regions.md#prepare-and-move) to make sure you have all of the content.+++ Delete the old search service once you're sure it's no longer needed.++For large indexes that you don't want to rebuild from scratch, consider using the [backup and restore sample](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/index-backup-restore/README.md) to move them. + ## Next steps The best way to choose a pricing tier is to start with a least-cost tier, and then allow experience and testing to inform your decision to keep the service or create a new one at a higher tier. 
For next steps, we recommend that you create a search service at a tier that can accommodate the level of testing you propose to do, and then review the following guidance for recommendations on estimating cost and capacity. |
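The billing model described in the row above reduces to simple arithmetic: a service is billed for replicas multiplied by partitions search units (SU). The sketch below works that out in Python; the hourly rate is purely hypothetical and stands in for whatever the pricing page lists for your tier and region.

```python
# Worked example of the search-unit (SU) arithmetic: SU = replicas * partitions.
# The hourly rate below is hypothetical; substitute the real rate for your tier.
HYPOTHETICAL_HOURLY_RATE = 0.50   # per SU per hour, illustrative only
HOURS_PER_MONTH = 730

def search_units(replicas: int, partitions: int) -> int:
    """A search service is billed for replicas multiplied by partitions SUs."""
    return replicas * partitions

def estimated_monthly_cost(replicas: int, partitions: int) -> float:
    return search_units(replicas, partitions) * HYPOTHETICAL_HOURLY_RATE * HOURS_PER_MONTH

print(estimated_monthly_cost(1, 1))   # new service: 1 SU -> 365.0
print(estimated_monthly_cost(3, 2))   # scaled out:  6 SU -> 2190.0
```

Scaling from the initial 1 SU to three replicas and two partitions multiplies the bill by six, which is why tier and capacity choices are worth modeling before you commit to a service.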
search | Tutorial Csharp Create Load Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md | |
search | Tutorial Javascript Create Load Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md | |
search | Tutorial Multiple Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md | This tutorial uses [Azure.Search.Documents](/dotnet/api/overview/azure/search) t A finished version of the code in this tutorial can be found in the following project: -* [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources/v11) +* [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources/v11) -For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources/v10) on GitHub. +For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources/v10) on GitHub. ## Prerequisites |
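The tutorial itself is written in C# with Azure.Search.Documents, but the idea of pointing several indexers at one index is easy to sketch. The following is a hedged Python analogue using the azure-search-documents package; the service name, keys, connection strings, and container names are placeholders, not values taken from the sample.

```python
# Hedged Python analogue of indexing two data sources into a single index.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

client = SearchIndexerClient("https://<service>.search.windows.net",
                             AzureKeyCredential("<admin-key>"))

# Data source 1: documents stored in Azure Cosmos DB.
client.create_data_source_connection(SearchIndexerDataSourceConnection(
    name="hotels-cosmos-ds", type="cosmosdb",
    connection_string="<cosmos-connection-string>",
    container=SearchIndexerDataContainer(name="hotels")))

# Data source 2: supplementary JSON blobs in Azure Storage.
client.create_data_source_connection(SearchIndexerDataSourceConnection(
    name="hotels-blob-ds", type="azureblob",
    connection_string="<storage-connection-string>",
    container=SearchIndexerDataContainer(name="hotel-rooms")))

# Two indexers can target the same index; documents with matching keys are merged.
for ds in ("hotels-cosmos-ds", "hotels-blob-ds"):
    client.create_indexer(SearchIndexer(
        name=f"{ds}-indexer", data_source_name=ds, target_index_name="hotels"))
```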
search | Tutorial Optimize Indexing Push Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md | -This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data. +This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data. This tutorial uses C# and the [.NET SDK](/dotnet/api/overview/azure/search) to perform the following tasks: The following services and tools are required for this tutorial. ## Download files -Source code for this tutorial is in the [optimzize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository. +Source code for this tutorial is in the [optimize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository. ## Key considerations |
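The batching and exponential backoff strategy this tutorial describes is language-agnostic. Below is a hedged Python sketch of the same idea using the azure-search-documents package (the tutorial's own sample is C#); the endpoint, admin key, index name, and the HotelId key field are placeholders, not values from the sample.

```python
# Hedged sketch: push documents in batches, retrying failures with exponential backoff.
import time
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.search.documents import SearchClient

client = SearchClient("https://<service>.search.windows.net", "<index-name>",
                      AzureKeyCredential("<admin-key>"))

def upload_batch_with_backoff(docs, max_attempts=5, base_delay=2.0):
    """Push one batch, retrying only the documents the service did not accept."""
    pending = list(docs)
    for attempt in range(max_attempts):
        try:
            results = client.upload_documents(documents=pending)
        except HttpResponseError:
            results = None  # whole request throttled or failed; retry everything
        if results is not None:
            failed_keys = {r.key for r in results if not r.succeeded}
            pending = [d for d in pending if d["HotelId"] in failed_keys]
        if not pending:
            return
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff between retries
    raise RuntimeError(f"{len(pending)} documents still failing after retries")

# Placeholder data: fill with documents matching your index schema, e.g. {"HotelId": "1", ...}
documents: list[dict] = []

# Index in fixed-size batches; tune the batch size to your document size and tier.
for i in range(0, len(documents), 1000):
    upload_batch_with_backoff(documents[i:i + 1000])
```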
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | You can index vector data as fields in documents alongside textual and other typ Azure Cognitive Search doesn't generate vector embeddings for your content. You need to provide the embeddings yourself by using a service such as Azure OpenAI. See [How to generate embeddings](./vector-search-how-to-generate-embeddings.md) to learn more. +Vector search does not support customer-managed keys (CMK) at this time. This means you will not be able to add vector fields to an index with CMK enabled. + ## Availability and pricing Vector search is available as part of all Cognitive Search tiers in all regions at no extra charge. |
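Because Cognitive Search expects you to bring your own embeddings, a typical workflow calls an Azure OpenAI embeddings deployment first and stores the returned vector in the index. The sketch below calls the REST endpoint directly; the resource name, deployment name, and api-version are assumptions to replace with the values from your own Azure OpenAI resource.

```python
# Hedged sketch: generate an embedding from an Azure OpenAI deployment over REST.
import requests

AOAI_ENDPOINT = "https://<your-resource>.openai.azure.com"
DEPLOYMENT = "text-embedding-ada-002"   # name of your embeddings deployment (assumption)
API_VERSION = "2023-05-15"              # adjust to a version your resource supports

def embed(text: str, api_key: str) -> list[float]:
    url = (f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/embeddings?api-version={API_VERSION}")
    resp = requests.post(url, headers={"api-key": api_key}, json={"input": text})
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]

# The returned vector is what you store in the index's vector field.
vector = embed("Classic hotel near the harbor", api_key="<your-key>")
print(len(vector))  # text-embedding-ada-002 returns 1536 dimensions
```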
security | Customer Lockbox Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md | The following services are generally available for Customer Lockbox: - Azure API Management - Azure App Service - Azure Cognitive Search-- Azure Cognitive Services+- Azure AI services - Azure Container Registry - Azure Data Box - Azure Data Explorer |
security | Encryption Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md | The Azure services that support each encryption model: |-|--|--|--| | **AI and Machine Learning** | | | | | Azure Cognitive Search | Yes | Yes | - |-| Azure Cognitive Services | Yes | Yes, including Managed HSM | - | +| Azure AI services | Yes | Yes, including Managed HSM | - | | Azure Machine Learning | Yes | Yes | - | | Content Moderator | Yes | Yes, including Managed HSM | - | | Face | Yes | Yes, including Managed HSM | - | |
sentinel | Create Custom Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md | Use Azure Functions together with a RESTful API and various coding languages, su For examples of this method, see: -- [Connect your VMware Carbon Black Cloud Endpoint Standard to Microsoft Sentinel with Azure Function](./data-connectors/vmware-carbon-black-cloud-using-azure-function.md)+- [Connect your VMware Carbon Black Cloud Endpoint Standard to Microsoft Sentinel with Azure Function](./data-connectors/vmware-carbon-black-cloud-using-azure-functions.md) - [Connect your Okta Single Sign-On to Microsoft Sentinel with Azure Function](./data-connectors/okta-single-sign-on-using-azure-function.md) - [Connect your Proofpoint TAP to Microsoft Sentinel with Azure Function](./data-connectors/proofpoint-tap-using-azure-function.md)-- [Connect your Qualys VM to Microsoft Sentinel with Azure Function](data-connectors/qualys-vulnerability-management-using-azure-function.md)+- [Connect your Qualys VM to Microsoft Sentinel with Azure Function](data-connectors/qualys-vulnerability-management-using-azure-functions.md) - [Ingesting XML, CSV, or other formats of data](../azure-monitor/logs/create-pipeline-datacollector-api.md#ingesting-xml-csv-or-other-formats-of-data) - [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) (blog) - [Deploy a Function App for getting Office 365 Management API data into Microsoft Sentinel](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data) (Microsoft Sentinel GitHub community) |
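Most of the Azure Functions-based connectors listed above follow the same pattern: a function polls a vendor's REST API on a schedule and forwards the results to the Log Analytics HTTP Data Collector API, which lands them in a custom (_CL) table. Below is a hedged Python sketch of that forwarding step; the vendor call, error handling, and paging are simplified placeholders rather than any specific connector's code.

```python
# Hedged sketch of the custom-connector pattern: sign and post JSON records to the
# Log Analytics HTTP Data Collector API so they appear in a custom table.
import base64, hashlib, hmac, json
from datetime import datetime, timezone

import requests

def post_to_log_analytics(workspace_id: str, shared_key: str,
                          log_type: str, records: list[dict]) -> None:
    body = json.dumps(records)
    rfc1123 = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    # Build the SharedKey signature documented for the Data Collector API.
    string_to_sign = (f"POST\n{len(body)}\napplication/json\n"
                      f"x-ms-date:{rfc1123}\n/api/logs")
    digest = hmac.new(base64.b64decode(shared_key),
                      string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()

    resp = requests.post(
        f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"SharedKey {workspace_id}:{signature}",
            "Log-Type": log_type,   # becomes the <log_type>_CL table in the workspace
            "x-ms-date": rfc1123,
        })
    resp.raise_for_status()

# Inside the function's timer trigger you would fetch from the vendor API and forward:
# post_to_log_analytics(WORKSPACE_ID, SHARED_KEY, "MyVendor", fetch_vendor_events())
```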
sentinel | Data Connectors Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md | Title: Find your Microsoft Sentinel data connector | Microsoft Docs description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 06/27/2023 Last updated : 07/26/2023 Data connectors are available as part of the following offerings: ## Abnormal Security Corporation -- [AbnormalSecurity (using Azure Functions)](data-connectors/abnormalsecurity-using-azure-function.md)+- [AbnormalSecurity (using Azure Functions)](data-connectors/abnormalsecurity-using-azure-functions.md) ## Akamai Data connectors are available as part of the following offerings: ## AliCloud -- [AliCloud (using Azure Functions)](data-connectors/alicloud-using-azure-function.md)+- [AliCloud (using Azure Functions)](data-connectors/alicloud-using-azure-functions.md) ## Amazon Web Services Data connectors are available as part of the following offerings: ## Armorblox -- [Armorblox (using Azure Functions)](data-connectors/armorblox-using-azure-function.md)+- [Armorblox (using Azure Functions)](data-connectors/armorblox-using-azure-functions.md) ## Aruba Data connectors are available as part of the following offerings: ## Atlassian -- [Atlassian Confluence Audit (using Azure Functions)](data-connectors/atlassian-confluence-audit-using-azure-function.md)-- [Atlassian Jira Audit (using Azure Functions)](data-connectors/atlassian-jira-audit-using-azure-function.md)+- [Atlassian Confluence Audit (using Azure Functions)](data-connectors/atlassian-confluence-audit-using-azure-functions.md) +- [Atlassian Jira Audit (using Azure Functions)](data-connectors/atlassian-jira-audit-using-azure-functions.md) ## Auth0 -- [Auth0 Access Management (using Azure Functions)](data-connectors/auth0-access-management-using-azure-function.md)+- [Auth0 Access Management(using Azure Functions)](data-connectors/auth0-access-management-using-azure-functions.md) ## Better Mobile Security Inc. 
Data connectors are available as part of the following offerings: ## Bitglass -- [Bitglass (using Azure Functions)](data-connectors/bitglass-using-azure-function.md)+- [Bitglass (using Azure Functions)](data-connectors/bitglass-using-azure-functions.md) ## Blackberry Data connectors are available as part of the following offerings: ## Box -- [Box (using Azure Functions)](data-connectors/box-using-azure-function.md)+- [Box (using Azure Function)](data-connectors/box-using-azure-function.md) ## Broadcom Data connectors are available as part of the following offerings: - [Cisco Application Centric Infrastructure](data-connectors/cisco-application-centric-infrastructure.md) - [Cisco ASA](data-connectors/cisco-asa.md)-- [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security-using-azure-function.md)+- [Cisco AS) +- [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security-using-azure-functions.md) - [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md) - [Cisco Secure Email Gateway](data-connectors/cisco-secure-email-gateway.md)-- [Cisco Secure Endpoint (AMP) (using Azure Functions)](data-connectors/cisco-secure-endpoint-amp-using-azure-function.md)+- [Cisco Secure Endpoint (AMP) (using Azure Functions)](data-connectors/cisco-secure-endpoint-amp-using-azure-functions.md) - [Cisco Stealthwatch](data-connectors/cisco-stealthwatch.md) - [Cisco UCS](data-connectors/cisco-ucs.md)-- [Cisco Umbrella (using Azure Functions)](data-connectors/cisco-umbrella-using-azure-function.md)+- [Cisco Umbrella (using Azure Function)](data-connectors/cisco-umbrella-using-azure-function.md) - [Cisco Web Security Appliance](data-connectors/cisco-web-security-appliance.md) ## Cisco Systems, Inc. Data connectors are available as part of the following offerings: ## Cloudflare -- [Cloudflare (Preview) (using Azure Functions)](data-connectors/cloudflare-using-azure-function.md)+- [Cloudflare (Preview) (using Azure Functions)](data-connectors/cloudflare-using-azure-functions.md) ## Cognni Data connectors are available as part of the following offerings: ## CohesityDev -- [Cohesity (using Azure Functions)](data-connectors/cohesity-using-azure-function.md)+- [Cohesity (using Azure Functions)](data-connectors/cohesity-using-azure-functions.md) ## Contrast Security Data connectors are available as part of the following offerings: ## Crowdstrike -- [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-using-azure-function.md)+- [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-using-azure-functions.md) - [CrowdStrike Falcon Endpoint Protection](data-connectors/crowdstrike-falcon-endpoint-protection.md) ## Cyber Defense Group B.V. 
Data connectors are available as part of the following offerings: ## CyberArk - [CyberArk Enterprise Password Vault (EPV) Events](data-connectors/cyberark-enterprise-password-vault-epv-events.md)-- [CyberArkEPM](data-connectors/cyberarkepm.md)+- [CyberArkEPM (using Azure Functions)](data-connectors/cyberarkepm-using-azure-functions.md) ## CyberPion Data connectors are available as part of the following offerings: ## Cybersixgill -- [Cybersixgill Actionable Alerts (using Azure Functions)](data-connectors/cybersixgill-actionable-alerts-using-azure-function.md)+- [Cybersixgill Actionable Alerts (using Azure Functions)](data-connectors/cybersixgill-actionable-alerts-using-azure-functions.md) ## Cynerio Data connectors are available as part of the following offerings: ## Digital Shadows -- [Digital Shadows Searchlight (using Azure Functions)](data-connectors/digital-shadows-searchlight-using-azure-function.md)+- [Digital Shadows Searchlight (using Azure Functions)](data-connectors/digital-shadows-searchlight-using-azure-functions.md) ## Dynatrace Data connectors are available as part of the following offerings: ## Google -- [Google ApigeeX (using Azure Functions)](data-connectors/google-apigeex-using-azure-function.md)-- [Google Cloud Platform Cloud Monitoring (using Azure Functions)](data-connectors/google-cloud-platform-cloud-monitoring-using-azure-function.md)-- [Google Cloud Platform DNS (using Azure Functions)](data-connectors/google-cloud-platform-dns-using-azure-function.md)-- [Google Cloud Platform IAM (using Azure Functions)](data-connectors/google-cloud-platform-iam-using-azure-function.md)-- [Google Workspace (G Suite) (using Azure Functions)](data-connectors/google-workspace-g-suite-using-azure-function.md)+- [Google ApigeeX (using Azure Functions)](data-connectors/google-apigeex-using-azure-functions.md) +- [Google Cloud Platform Cloud Monitoring (using Azure Functions)](data-connectors/google-cloud-platform-cloud-monitoring-using-azure-functions.md) +- [Google Cloud Platform DNS (using Azure Functions)](data-connectors/google-cloud-platform-dns-using-azure-functions.md) +- [Google Cloud Platform IAM (using Azure Functions)](data-connectors/google-cloud-platform-iam-using-azure-functions.md) +- [Google Workspace (G Suite) (using Azure Functions)](data-connectors/google-workspace-g-suite-using-azure-functions.md) ## H.O.L.M. 
Security Sweden AB -- [Holm Security Asset Data (using Azure Functions)](data-connectors/holm-security-asset-data-using-azure-function.md)+- [Holm Security Asset Data (using Azure Functions)](data-connectors/holm-security-asset-data-using-azure-functions.md) ## iboss inc Data connectors are available as part of the following offerings: ## Imperva -- [Imperva Cloud WAF (using Azure Functions)](data-connectors/imperva-cloud-waf-using-azure-function.md)+- [Imperva Cloud WAF (using Azure Functions)](data-connectors/imperva-cloud-waf-using-azure-functions.md) ## Infoblox Data connectors are available as part of the following offerings: ## Insight VM / Rapid7 -- [Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions)](data-connectors/rapid7-insight-platform-vulnerability-management-reports-using-azure-function.md)+- [Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions)](data-connectors/rapid7-insight-platform-vulnerability-management-reports-using-azure-functions.md) ## ISC Data connectors are available as part of the following offerings: ## Microsoft Sentinel Community, Microsoft Corporation +- [Exchange Security Insights Online Collector (using Azure Functions)](data-connectors/exchange-security-insights-online-collector-using-azure-functions.md) - [Forcepoint CASB](data-connectors/forcepoint-casb.md) - [Forcepoint CSG](data-connectors/forcepoint-csg.md) - [Forcepoint DLP](data-connectors/forcepoint-dlp.md) Data connectors are available as part of the following offerings: ## MuleSoft -- [MuleSoft Cloudhub (using Azure Functions)](data-connectors/mulesoft-cloudhub-using-azure-function.md)+- [MuleSoft Cloudhub (using Azure Functions)](data-connectors/mulesoft-cloudhub-using-azure-functions.md) ++## NetClean Technologies AB ++- [Netclean ProActive Incidents](data-connectors/netclean-proactive-incidents.md) ## Netskope -- [Netskope (using Azure Functions)](data-connectors/netskope-using-azure-function.md)+- [Netskope (using Azure Functions)](data-connectors/netskope-using-azure-functions.md) ## Netwrix Data connectors are available as part of the following offerings: ## OneLogin -- [OneLogin IAM Platform (using Azure Functions)](data-connectors/onelogin-iam-platform-using-azure-function.md)+- [OneLogin IAM Platform (using Azure Functions)](data-connectors/onelogin-iam-platform-using-azure-functions.md) ## OpenVPN Data connectors are available as part of the following offerings: ## Oracle -- [Oracle Cloud Infrastructure (using Azure Functions)](data-connectors/oracle-cloud-infrastructure-using-azure-function.md)+- [Oracle Cloud Infrastructure (using Azure Functions)](data-connectors/oracle-cloud-infrastructure-using-azure-functions.md) - [Oracle Database Audit](data-connectors/oracle-database-audit.md) - [Oracle WebLogic Server](data-connectors/oracle-weblogic-server.md) Data connectors are available as part of the following offerings: ## Proofpoint -- [Proofpoint On Demand Email Security (using Azure Functions)](data-connectors/proofpoint-on-demand-email-security-using-azure-function.md)+- [Proofpoint On Demand Email Security (using Azure Functions)](data-connectors/proofpoint-on-demand-email-security-using-azure-functions.md) - [Proofpoint TAP (using Azure Functions)](data-connectors/proofpoint-tap-using-azure-function.md) ## Pulse Secure Data connectors are available as part of the following offerings: ## Qualys - [Qualys VM KnowledgeBase (using Azure Functions)](data-connectors/qualys-vm-knowledgebase-using-azure-function.md)-- [Qualys 
Vulnerability Management (using Azure Functions)](data-connectors/qualys-vulnerability-management-using-azure-function.md)+- [Qualys Vulnerability Management (using Azure Functions)](data-connectors/qualys-vulnerability-management-using-azure-functions.md) ## RedHat Data connectors are available as part of the following offerings: - [Salesforce Service Cloud (using Azure Functions)](data-connectors/salesforce-service-cloud-using-azure-function.md) +## Secure Practice ++- [MailRisk by Secure Practice (using Azure Functions)](data-connectors/mailrisk-by-secure-practice-using-azure-functions.md) + ## SecurityBridge - [SecurityBridge Threat Detection for SAP](data-connectors/securitybridge-threat-detection-for-sap.md) Data connectors are available as part of the following offerings: ## SentinelOne -- [SentinelOne (using Azure Functions)](data-connectors/sentinelone-using-azure-function.md)+- [SentinelOne (using Azure Functions)](data-connectors/sentinelone-using-azure-functions.md) ## Slack -- [Slack Audit (using Azure Functions)](data-connectors/slack-audit-using-azure-function.md)+- [Slack Audit (using Azure Functions)](data-connectors/slack-audit-using-azure-functions.md) ## Snowflake Data connectors are available as part of the following offerings: - [Trend Micro Deep Security](data-connectors/trend-micro-deep-security.md) - [Trend Micro TippingPoint](data-connectors/trend-micro-tippingpoint.md)-- [Trend Vision One (using Azure Functions)](data-connectors/trend-micro-vision-one-using-azure-function.md)+- [Trend Vision One (using Azure Functions)](data-connectors/trend-vision-one-using-azure-functions.md) ## TrendMicro Data connectors are available as part of the following offerings: ## VMware -- [VMware Carbon Black Cloud (using Azure Functions)](data-connectors/vmware-carbon-black-cloud-using-azure-function.md)+- [VMware Carbon Black Cloud (using Azure Functions)](data-connectors/vmware-carbon-black-cloud-using-azure-functions.md) - [VMware ESXi](data-connectors/vmware-esxi.md) - [VMware vCenter](data-connectors/vmware-vcenter.md) Data connectors are available as part of the following offerings: ## ZERO NETWORKS LTD - [Zero Networks Segment Audit](data-connectors/zero-networks-segment-audit.md)-- [Zero Networks Segment Audit (Function) (using Azure Functions)](data-connectors/zero-networks-segment-audit-function-using-azure-function.md)+- [Zero Networks Segment Audit (Function) (using Azure Functions)](data-connectors/zero-networks-segment-audit-function-using-azure-functions.md) ## Zimperium, Inc. Data connectors are available as part of the following offerings: ## Zoom -- [Zoom Reports (using Azure Functions)](data-connectors/zoom-reports-using-azure-function.md)+- [Zoom Reports (using Azure Functions)](data-connectors/zoom-reports-using-azure-functions.md) ## Zscaler |
sentinel | Abnormalsecurity Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/abnormalsecurity-using-azure-functions.md | + + Title: "AbnormalSecurity (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector AbnormalSecurity (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# AbnormalSecurity (using Azure Functions) connector for Microsoft Sentinel ++The Abnormal Security data connector provides the capability to ingest threat and case logs into Microsoft Sentinel using the [Abnormal Security Rest API.](https://app.swaggerhub.com/apis/abnormal-security/abx/) ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | SENTINEL_WORKSPACE_ID<br/>SENTINEL_SHARED_KEY<br/>ABNORMAL_SECURITY_REST_API_TOKEN<br/>logAnalyticsUri (optional)(add any other settings required by the Function App)Set the <code>uri</code> value to: <code><add uri value></code> | +| **Azure function app code** | https://aka.ms/sentinel-abnormalsecurity-functionapp | +| **Log Analytics table(s)** | ABNORMAL_THREAT_MESSAGES_CL<br/> ABNORMAL_CASES_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Abnormal Security](https://abnormalsecurity.com/contact) | ++## Query samples ++**All Abnormal Security Threat logs** + ```kusto +ABNORMAL_THREAT_MESSAGES_CL ++ | sort by TimeGenerated desc + ``` ++**All Abnormal Security Case logs** + ```kusto +ABNORMAL_CASES_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with AbnormalSecurity (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Abnormal Security API Token**: An Abnormal Security API Token is required. [See the documentation to learn more about Abnormal Security API](https://app.swaggerhub.com/apis/abnormal-security/abx/). **Note:** An Abnormal Security account is required +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to Abnormal Security's REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++**STEP 1 - Configuration steps for the Abnormal Security API** ++ [Follow these instructions](https://app.swaggerhub.com/apis/abnormal-security/abx) provided by Abnormal Security to configure the REST API integration. **Note:** An Abnormal Security account is required +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Abnormal Security data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Abnormal Security API Authorization Token, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++This method provides an automated deployment of the Abnormal Security connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-abnormalsecurity-azuredeploy) +2. 
Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Microsoft Sentinel Workspace ID**, **Microsoft Sentinel Shared Key** and **Abnormal Security REST API Key**. + - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. + 4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Abnormal Security data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-abnormalsecurity-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. AbnormalSecurityXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + SENTINEL_WORKSPACE_ID + SENTINEL_SHARED_KEY + ABNORMAL_SECURITY_REST_API_TOKEN + logAnalyticsUri (optional) +(add any other settings required by the Function App) +Set the `uri` value to: `<add uri value>` +>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/abnormalsecuritycorporation1593011233180.fe1b4806-215b-4610-bf95-965a7a65579c?tab=Overview) in the Azure Marketplace. |
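Inside the function app, the application settings listed above surface as environment variables. A minimal sketch of how connector code might read them, assuming the public-cloud Log Analytics endpoint whenever logAnalyticsUri is left empty:

```python
# Hedged sketch: read the connector's application settings from the environment.
import os

workspace_id = os.environ["SENTINEL_WORKSPACE_ID"]
shared_key = os.environ["SENTINEL_SHARED_KEY"]
api_token = os.environ["ABNORMAL_SECURITY_REST_API_TOKEN"]

# logAnalyticsUri is optional: leave it unset for the public cloud, or set it to
# e.g. https://<CustomerId>.ods.opinsights.azure.us for Azure Government.
log_analytics_uri = os.environ.get("logAnalyticsUri") or \
    f"https://{workspace_id}.ods.opinsights.azure.com"

print(f"Posting Abnormal Security logs to {log_analytics_uri}/api/logs")
```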
sentinel | Alicloud Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/alicloud-using-azure-functions.md | + + Title: "AliCloud (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector AliCloud (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# AliCloud (using Azure Functions) connector for Microsoft Sentinel ++The [AliCloud](https://www.alibabacloud.com/product/log-service) data connector provides the capability to retrieve logs from cloud applications using the Cloud API and store events into Microsoft Sentinel through the [REST API](https://aliyun-log-python-sdk.readthedocs.io/api.html). The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | AliCloudAccessKeyId<br/>AliCloudAccessKey<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional)<br/>AliCloudProjects (optional)<br/>AliCloudWorkers (optional) | +| **Azure function app code** | https://aka.ms/sentinel-AliCloudAPI-functionapp | +| **Log Analytics table(s)** | AliCloud_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**AliCloud Events - All Activities.** + ```kusto +AliCloud + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with AliCloud (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **AliCloudAccessKeyId** and **AliCloudAccessKey** are required for making API calls. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**AliCloud**](https://aka.ms/sentinel-AliCloud-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the AliCloud API** ++ Follow the instructions to obtain the credentials. ++1. Obtain the **AliCloudAccessKeyId** and **AliCloudAccessKey**: log in the account, click on AccessKey Management then click View Secret. +2. Save credentials for using in the data connector. 
+++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the AliCloud data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). +++++**Option 1 - Azure Resource Manager (ARM) Template** ++Use this method for automated deployment of the AliCloud data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-AliCloudAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **AliCloudEnvId**, **AliCloudAppName**, **AliCloudUsername** and **AliCloudPassword** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +++**Option 2 - Manual Deployment of Azure Functions** ++Use the following step-by-step instructions to deploy the AliCloud data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-AliCloudAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. AliCloudXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. 
Add each of the following application settings individually, with their respective string values (case-sensitive): + AliCloudAccessKeyId + AliCloudAccessKey + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) + AliCloudProjects (optional) + AliCloudWorkers (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-alibabacloud?tab=Overview) in the Azure Marketplace. |
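Once the AliCloud function app is running, you can spot-check ingestion from Python instead of the portal by querying the AliCloud_CL table. This is a hedged verification sketch using the azure-monitor-query package and a placeholder workspace ID; it is not part of the connector itself.

```python
# Hedged sketch: confirm the connector is writing to the AliCloud_CL custom table.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AliCloud_CL | summarize count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(row)
```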
sentinel | Argos Cloud Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/argos-cloud-security.md | Title: "ARGOS Cloud Security connector for Microsoft Sentinel" description: "Learn how to install the connector ARGOS Cloud Security to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 07/26/2023 Enter the information into the [ARGOS Sentinel](https://app.argos-security.io/ac New detections will automatically be forwarded. -[Learn more about the integration](https://argos-security.io/faq/) +[Learn more about the integration](https://www.argos-security.io/resources#integrations) |
sentinel | Armorblox Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/armorblox-using-azure-functions.md | + + Title: "Armorblox (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Armorblox (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Armorblox (using Azure Functions) connector for Microsoft Sentinel ++The [Armorblox](https://www.armorblox.com/) data connector provides the capability to ingest incidents from your Armorblox instance into Microsoft Sentinel through the REST API. The connector provides ability to get events which helps to examine potential security risks, and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | ArmorbloxAPIToken<br/>ArmorbloxInstanceName OR ArmorbloxInstanceURL<br/>WorkspaceID<br/>WorkspaceKey<br/>LogAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-armorblox-functionapp | +| **Log Analytics table(s)** | Armorblox_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [armorblox](https://www.armorblox.com/contact/) | ++## Query samples ++**Armorblox Incidents** + ```kusto +Armorblox_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Armorblox (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Armorblox Instance Details**: **ArmorbloxInstanceName** OR **ArmorbloxInstanceURL** is required +- **Armorblox API Credentials**: **ArmorbloxAPIToken** is required +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Armorblox API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Armorblox API** ++ Follow the instructions to obtain the API token. ++1. Log in to the Armorblox portal with your credentials. +2. In the portal, click **Settings**. +3. In the **Settings** view, click **API Keys** +4. Click **Create API Key**. +5. Enter the required information. +6. Click **Create**, and copy the API token displayed in the modal. +7. Save API token for using in the data connector. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Armorblox data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Armorblox data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-armorblox-azuredeploy) +2. 
Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **ArmorbloxAPIToken**, **ArmorbloxInstanceURL** OR **ArmorbloxInstanceName**, and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Armorblox data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-armorblox-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. Armorblox). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + ArmorbloxAPIToken + ArmorbloxInstanceName OR ArmorbloxInstanceURL + WorkspaceID + WorkspaceKey + LogAnalyticsUri (optional) +> - Use LogAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/armorblox1601081599926.armorblox_sentinel_1?tab=Overview) in the Azure Marketplace. |
sentinel | Atlassian Confluence Audit Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-confluence-audit-using-azure-functions.md | + + Title: "Atlassian Confluence Audit (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Atlassian Confluence Audit (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Atlassian Confluence Audit (using Azure Functions) connector for Microsoft Sentinel ++The [Atlassian Confluence](https://www.atlassian.com/software/confluence) Audit data connector provides the capability to ingest [Confluence Audit Records](https://support.atlassian.com/confluence-cloud/docs/view-the-audit-log/) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | ConfluenceUsername<br/>ConfluenceAccessToken<br/>ConfluenceHomeSiteName<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-confluenceauditapi-functionapp | +| **Log Analytics table(s)** | Confluence_Audit_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Confluence Audit Events - All Activities** + ```kusto +ConfluenceAudit + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Atlassian Confluence Audit (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **ConfluenceAccessToken**, **ConfluenceUsername** is required for REST API. [See the documentation to learn more about API](https://developer.atlassian.com/cloud/confluence/rest/api-group-audit/). Check all [requirements and follow the instructions](https://developer.atlassian.com/cloud/confluence/rest/intro/#auth) for obtaining credentials. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Confluence REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Confluence API** ++ [Follow the instructions](https://developer.atlassian.com/cloud/confluence/rest/intro/#auth) to obtain the credentials. ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Workspace data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). 
++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Confluence Audit data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-confluenceauditapi-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **ConfluenceAccessToken**, **ConfluenceUsername**, **ConfluenceHomeSiteName** (short site name part, as example HOMESITENAME from https://HOMESITENAME.atlassian.net) and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Confluence Audit data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-confluenceauditapi-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ConflAuditXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + ConfluenceUsername + ConfluenceAccessToken + ConfluenceHomeSiteName + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-atlassianconfluenceaudit?tab=Overview) in the Azure Marketplace. |
sentinel | Atlassian Jira Audit Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-jira-audit-using-azure-functions.md | + + Title: "Atlassian Jira Audit (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Atlassian Jira Audit (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Atlassian Jira Audit (using Azure Functions) connector for Microsoft Sentinel ++The [Atlassian Jira](https://www.atlassian.com/software/jira) Audit data connector provides the capability to ingest [Jira Audit Records](https://support.atlassian.com/jira-cloud-administration/docs/audit-activities-in-jira-applications/) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | JiraUsername<br/>JiraAccessToken<br/>JiraHomeSiteName<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-jiraauditapi-functionapp | +| **Kusto function alias** | JiraAudit | +| **Kusto function url** | https://aka.ms/sentinel-jiraauditapi-parser | +| **Log Analytics table(s)** | Jira_Audit_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Jira Audit Events - All Activities** + ```kusto +JiraAudit + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Atlassian Jira Audit (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **JiraAccessToken**, **JiraUsername** is required for REST API. [See the documentation to learn more about API](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/). Check all [requirements and follow the instructions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) for obtaining credentials. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Jira REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected. 
[Follow these steps](https://aka.ms/sentinel-jiraauditapi-parser) to create the Kusto functions alias, **JiraAudit** +++**STEP 1 - Configuration steps for the Jira API** ++ [Follow the instructions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) to obtain the credentials. ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Workspace data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Jira Audit data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentineljiraauditazuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **JiraAccessToken**, **JiraUsername**, **JiraHomeSiteName** (short site name part, as example HOMESITENAME from https://HOMESITENAME.atlassian.net) and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Jira Audit data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-jiraauditapi-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. JiraAuditXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. 
In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + JiraUsername + JiraAccessToken + JiraHomeSiteName + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-atlassianjiraaudit?tab=Overview) in the Azure Marketplace. |
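A quick way to confirm that the deployed Function App is actually pulling Jira audit records is to chart ingestion into the Jira_Audit_CL table. This is a minimal sketch that assumes only the built-in TimeGenerated column, which every Log Analytics table carries:

```kusto
// Hourly count of Jira audit records ingested over the last day;
// an empty result usually means the Function App is not yet pulling data.
Jira_Audit_CL
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```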
sentinel | Auth0 Access Management Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/auth0-access-management-using-azure-functions.md | + + Title: "Auth0 Access Management(using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Auth0 Access Management(using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Auth0 Access Management(using Azure Functions) connector for Microsoft Sentinel ++The [Auth0 Access Management](https://auth0.com/access-management) data connector provides the capability to ingest [Auth0 log events](https://auth0.com/docs/api/management/v2/#!/Logs/get_logs) into Microsoft Sentinel ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | DOMAIN<br/>CLIENT_ID<br/>CLIENT_SECRET<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-Auth0AccessManagement-azuredeploy | +| **Log Analytics table(s)** | Auth0AM_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All logs** + ```kusto +Auth0AM_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Auth0 Access Management(using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **API token** is required. [See the documentation to learn more about API token](https://auth0.com/docs/secure/tokens/access-tokens/get-management-api-access-tokens-for-production) +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Auth0 Management APIs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Auth0 Management API** ++ Follow the instructions to obtain the credentials. ++1. In Auth0 Dashboard, go to **Applications > Applications**. +2. Select your Application. This should be a "Machine-to-Machine" Application configured with at least **read:logs** and **read:logs_users** permissions. +3. Copy **Domain, ClientID, Client Secret** +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Auth0 Access Management data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Auth0 Access Management data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. 
++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-Auth0AccessManagement-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the ****Domain, ClientID, Client Secret****, **AzureSentinelWorkspaceId**, **AzureSentinelSharedKey**. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Auth0 Access Management data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-Auth0AccessManagement-azuredeploy) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. Auth0AMXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + DOMAIN + CLIENT_ID + CLIENT_SECRET + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-auth0?tab=Overview) in the Azure Marketplace. |
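Because this connector polls the Auth0 Management API on a schedule, some ingestion lag is normal. A hedged freshness check against the Auth0AM_CL table, again using only standard columns:

```kusto
// Show when the most recent Auth0 record arrived and how far it lags behind now.
Auth0AM_CL
| summarize LastRecord = max(TimeGenerated)
| extend LagMinutes = datetime_diff('minute', now(), LastRecord)
```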
sentinel | Bitglass Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/bitglass-using-azure-functions.md | + + Title: "Bitglass (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Bitglass (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Bitglass (using Azure Functions) connector for Microsoft Sentinel ++The [Bitglass](https://www.bitglass.com/) data connector provides the capability to retrieve security event logs of the Bitglass services and more events into Microsoft Sentinel through the REST API. The connector provides the ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | BitglassToken<br/>BitglassServiceURL<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-bitglass-functionapp | +| **Log Analytics table(s)** | BitglassLogs_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Bitglass Events - All Activities.** + ```kusto +BitglassLogs_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Bitglass (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **BitglassToken** and **BitglassServiceURL** are required for making API calls. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**Bitglass**](https://aka.ms/sentinel-bitglass-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the Bitglass Log Retrieval API** ++ Follow the instructions to obtain the credentials. ++1. Please contact Bitglass [support](https://pages.bitglass.com/Contact.html) and obtain the **BitglassToken** and **BitglassServiceURL**. +2. Save the credentials for use in the data connector. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Bitglass data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). 
++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Bitglass data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-bitglass-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **BitglassToken**, **BitglassServiceURL** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Bitglass data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-bitglass-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. BitglassXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + BitglassToken + BitglassServiceURL + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. 
++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-bitglass?tab=Overview) in the Azure Marketplace. |
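The note above mentions that the solution ships a Kusto parser function named Bitglass. Assuming that parser has finished deploying (it can take several minutes after installation), querying the alias directly is a simple smoke test:

```kusto
// Return the ten most recent parsed Bitglass events; an "unknown function"
// error usually means the parser has not finished deploying yet.
Bitglass
| sort by TimeGenerated desc
| take 10
```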
sentinel | Braodcom Symantec Dlp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/braodcom-symantec-dlp.md | Title: "Broadcom Symantec DLP connector for Microsoft Sentinel" description: "Learn how to install the connector Broadcom Symantec DLP to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 Install the Microsoft Monitoring Agent on your Linux machine and configure the m 2. Forward Symantec DLP logs to a Syslog agent Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.-1. [Follow these instructions](https://techdocs.broadcom.com/content/dam/broadcom/techdocs/symantec-security-software/information-security/data-loss-prevention/generated-pdfs/Symantec_DLP_15.7_Whats_New.pdf) to configure the Symantec DLP to forward syslog +1. [Follow these instructions](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) to configure the Symantec DLP to forward syslog 2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. 3. Validate connection |
sentinel | Cisco Asa Ftd Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa-ftd-via-ama.md | + + Title: "Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cisco ASA/FTD via AMA (Preview) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel ++The Cisco ASA firewall connector allows you to easily connect your Cisco ASA logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | CommonSecurityLog<br/> | +| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**All logs** + ```kusto +CommonSecurityLog ++ | where DeviceVendor == "Cisco" ++ | where DeviceProduct == "ASA" + + | sort by TimeGenerated + ``` ++++## Prerequisites ++To integrate with Cisco ASA/FTD via AMA (Preview) make sure you have: ++- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc) +++## Vendor installation instructions ++Enable data collection rule ++> Cisco ASA/FTD event logs are collected only from **Linux** agents. +++++Run the following command to install and apply the Cisco ASA/FTD collector: +++ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace. |
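Once the AMA-based collection is running, Cisco ASA/FTD events land in the standard CommonSecurityLog table, so the usual CEF columns are available. A sketch that breaks events down by firewall action; DeviceAction is a standard CommonSecurityLog column, but the values it holds depend on which ASA message types you forward:

```kusto
// Hourly breakdown of Cisco ASA events by firewall action over the last day.
CommonSecurityLog
| where DeviceVendor == "Cisco" and DeviceProduct == "ASA"
| where TimeGenerated > ago(1d)
| summarize Events = count() by DeviceAction, bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```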
sentinel | Cisco Duo Security Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-duo-security-using-azure-functions.md | + + Title: "Cisco Duo Security (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cisco Duo Security (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cisco Duo Security (using Azure Functions) connector for Microsoft Sentinel ++The Cisco Duo Security data connector provides the capability to ingest [authentication logs](https://duo.com/docs/adminapi#authentication-logs), [administrator logs](https://duo.com/docs/adminapi#administrator-logs), [telephony logs](https://duo.com/docs/adminapi#telephony-logs), [offline enrollment logs](https://duo.com/docs/adminapi#offline-enrollment-logs) and [Trust Monitor events](https://duo.com/docs/adminapi#trust-monitor) into Microsoft Sentinel using the Cisco Duo Admin API. Refer to [API documentation](https://duo.com/docs/adminapi) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-CiscoDuoSecurity-functionapp | +| **Log Analytics table(s)** | CiscoDuo_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All Cisco Duo logs** + ```kusto +CiscoDuo_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Cisco Duo Security (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Cisco Duo API credentials**: Cisco Duo API credentials with permission *Grant read log* is required for Cisco Duo API. See the [documentation](https://duo.com/docs/adminapi#first-steps) to learn more about creating Cisco Duo API credentials. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Cisco Duo API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoDuo**](https://aka.ms/sentinel-CiscoDuoSecurity-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Obtaining Cisco Duo Admin API credentials** ++1. Follow [the instructions](https://duo.com/docs/adminapi#first-steps) to obtain **integration key**, **secret key**, and **API hostname**. Use **Grant read log** permission in the 4th step of [the instructions](https://duo.com/docs/adminapi#first-steps). 
+++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CiscoDuoSecurity-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Cisco Duo Integration Key**, **Cisco Duo Secret Key**, **Cisco Duo API Hostname**, **Cisco Duo Log Types**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-CiscoDuoSecurity-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + CISCO_DUO_INTEGRATION_KEY + CISCO_DUO_SECRET_KEY + CISCO_DUO_API_HOSTNAME + CISCO_DUO_LOG_TYPES + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoduosecurity?tab=Overview) in the Azure Marketplace. |
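The Function App above can pull several Duo log types into the single CiscoDuo_CL table, so a daily volume check is a reasonable sanity test to compare against the activity visible in the Duo Admin Panel. Only the built-in TimeGenerated column is assumed here:

```kusto
// Daily count of ingested Cisco Duo records for the last week.
CiscoDuo_CL
| where TimeGenerated > ago(7d)
| summarize Events = count() by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```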
sentinel | Cisco Firepower Estreamer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-firepower-estreamer.md | Title: "Cisco Firepower eStreamer connector for Microsoft Sentinel" description: "Learn how to install the connector Cisco Firepower eStreamer to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 Make sure to configure the machine's security according to your organization's s [Learn more >](https://aka.ms/SecureCEF)++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace. |
sentinel | Cisco Secure Endpoint Amp Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-secure-endpoint-amp-using-azure-functions.md | + + Title: "Cisco Secure Endpoint (AMP) (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cisco Secure Endpoint (AMP) (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cisco Secure Endpoint (AMP) (using Azure Functions) connector for Microsoft Sentinel ++The Cisco Secure Endpoint (formerly AMP for Endpoints) data connector provides the capability to ingest Cisco Secure Endpoint [audit logs](https://api-docs.amp.cisco.com/api_resources/AuditLog?api_host=api.amp.cisco.com&api_version=v1) and [events](https://api-docs.amp.cisco.com/api_actions/details?api_action=GET+%2Fv1%2Fevents&api_host=api.amp.cisco.com&api_resource=Event&api_version=v1) into Microsoft Sentinel. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-ciscosecureendpoint-functionapp | +| **Log Analytics table(s)** | CiscoSecureEndpoint_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All Cisco Secure Endpoint logs** + ```kusto +CiscoSecureEndpoint_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Cisco Secure Endpoint (AMP) (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Cisco Secure Endpoint API credentials**: Cisco Secure Endpoint Client ID and API Key are required. [See the documentation to learn more about Cisco Secure Endpoint API](https://api-docs.amp.cisco.com/api_resources?api_host=api.amp.cisco.com&api_version=v1). [API domain](https://api-docs.amp.cisco.com) must be provided as well. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Cisco Secure Endpoint API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoSecureEndpoint**](https://aka.ms/sentinel-ciscosecureendpoint-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Obtaining Cisco Secure Endpoint API credentials** ++1. Follow the instructions in the [documentation](https://api-docs.amp.cisco.com/api_resources?api_host=api.amp.cisco.com&api_version=v1) to generate Client ID and API Key. 
+++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ciscosecureendpoint-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Cisco Secure Endpoint Api Host**, **Cisco Secure Endpoint Client Id**, **Cisco Secure Endpoint Api Key**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-ciscosecureendpoint-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + CISCO_SE_API_API_HOST + CISCO_SE_API_CLIENT_ID + CISCO_SE_API_KEY + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscosecureendpoint?tab=Overview) in the Azure Marketplace. |
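Several connectors on this page (Jira, Auth0, Bitglass, Cisco Duo, Cisco Secure Endpoint) write to their own custom tables through Function Apps, so it can help to watch their freshness in one query. A sketch using union with isfuzzy=true so that tables for connectors you have not deployed are tolerated instead of causing an error; trim the table list to what you actually use:

```kusto
// Last-seen timestamp per custom table fed by a Function App connector.
union withsource=SourceTable isfuzzy=true
    Jira_Audit_CL, Auth0AM_CL, BitglassLogs_CL, CiscoDuo_CL, CiscoSecureEndpoint_CL
| summarize LastSeen = max(TimeGenerated) by SourceTable
| sort by LastSeen desc
```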
sentinel | Citrix Adc Former Netscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-adc-former-netscaler.md | Title: "Citrix ADC (former NetScaler) connector for Microsoft Sentinel" description: "Learn how to install the connector Citrix ADC (former NetScaler) to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 # Citrix ADC (former NetScaler) connector for Microsoft Sentinel -The [Citrix ADC (former NetScaler)](https://www.citrix.com/products/citrix-adc/) data connector provides the capability to ingest Citrix ADC logs into Microsoft Sentinel. +The [Citrix ADC (former NetScaler)](https://www.citrix.com/products/citrix-adc/) data connector provides the capability to ingest Citrix ADC logs into Microsoft Sentinel. If you want to ingest Citrix WAF logs into Microsoft Sentinel, refer to this [documentation](/azure/sentinel/data-connectors/citrix-waf-web-app-firewall). ## Connector attributes | Connector attribute | Description | | | |-| **Log Analytics table(s)** | CommonSecurityLog (Citrix ADC)<br/> | +| **Log Analytics table(s)** | Syslog<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | CitrixADCEvent > [!NOTE]- > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CitrixADCEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt), this function maps Citrix ADC (former NetScaler) events to Advanced Security Information Model [ASIM](/azure/sentinel/normalization). The function usually takes 10-15 minutes to activate after solution installation/update. + > 1. This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CitrixADCEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt), this function maps Citrix ADC (former NetScaler) events to Advanced Security Information Model [ASIM](/azure/sentinel/normalization). The function usually takes 10-15 minutes to activate after solution installation/update. -1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine
To create this watchlist and populate it with SourceType and Source data, follow the instructions [here](/azure/sentinel/normalization-manage-parsers#configure-the-sources-relevant-to-a-source-specific-parser). -Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. +> The SourceType value for CitrixADC is **`CitrixADC`**. -> 1. Make sure that you have Python on your machine using the following command: python -version. +1. Install and onboard the agent for Linux -> 2. You must have elevated permissions (sudo) on your machine. +Typically, you should install the agent on a different computer from the one on which the logs are generated. - Run the following command to install and apply the CEF collector: +> Syslog logs are collected only from **Linux** agents. - `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` -2. Configure Citrix ADC to use CEF logging +2. Configure the logs to be collected -At the command prompt type the following command: +Configure the facilities you want to collect and their severities. + 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. + 2. Select **Apply below configuration to my machines** and select the facilities and severities. + 3. Click **Save**. ->**set appfw settings CEFLogging on** 3. Configure Citrix ADC to forward logs via Syslog At the command prompt type the following command: 3.4 Set **Transport type** as **TCP** or **UDP** depending on your remote Syslog server configuration. -4. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --5. Secure your machine + 3.5 You can refer Citrix ADC (former NetScaler) [documentation](https://docs.netscaler.com/) for more details. -Make sure to configure the machine's security according to your organization's security policy +4. Check logs in Microsoft Sentinel +Open Log Analytics to check if the logs are received using the Syslog schema. -[Learn more >](https://aka.ms/SecureCEF) +>**NOTE:** It may take up to 15 minutes before new logs will appear in Syslog table. |
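Since the updated Citrix ADC connector collects plain Syslog rather than CEF, the facilities and severities selected in the workspace agent configuration determine what actually arrives. A quick check using only standard Syslog table columns:

```kusto
// Count Syslog records by facility and severity for the last day to confirm
// the facilities configured for collection are flowing into the workspace.
Syslog
| where TimeGenerated > ago(1d)
| summarize Events = count() by Facility, SeverityLevel
| sort by Events desc
```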
sentinel | Claroty | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/claroty.md | -The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel. +The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/continuous-threat-detection/) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel. ## Connector attributes |
sentinel | Cloudflare Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cloudflare-using-azure-functions.md | + + Title: "Cloudflare (Preview) (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cloudflare (Preview) (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cloudflare (Preview) (using Azure Functions) connector for Microsoft Sentinel ++The Cloudflare data connector provides the capability to ingest [Cloudflare logs](https://developers.cloudflare.com/logs/) into Microsoft Sentinel using the Cloudflare Logpush and Azure Blob Storage. Refer to [Cloudflare documentation](https://developers.cloudflare.com/logs/logpush) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-CloudflareDataConnector-functionapp | +| **Log Analytics table(s)** | Cloudflare_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Cloudflare](https://support.cloudflare.com) | ++## Query samples ++**All Cloudflare logs** + ```kusto +Cloudflare_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Cloudflare (Preview) (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Azure Blob Storage connection string and container name**: Azure Blob Storage connection string and container name where the logs are pushed to by Cloudflare Logpush. [See the documentation to learn more about creating Azure Blob Storage container.](/azure/storage/blobs/storage-quickstart-blobs-portal) +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**Cloudflare**](https://aka.ms/sentinel-CloudflareDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration of the Cloudflare Logpush** ++See documentation to [setup Cloudflare Logpush to Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-dashboard) +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Cloudflare data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. 
++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Cloudflare data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CloudflareDataConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Azure Blob Storage Container Name**, **Azure Blob Storage Connection String**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Cloudflare data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-CloudflareDataConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CloudflareXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + CONTAINER_NAME + AZURE_STORAGE_CONNECTION_STRING + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. 
++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cloudflare.cloudflare_sentinel?tab=Overview) in the Azure Marketplace. |
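Given the cost note above for Logpush data, it can be worth tracking how much Cloudflare data the workspace is billing for. This sketch relies on the built-in _BilledSize column (in bytes) that Log Analytics exposes on ingested records:

```kusto
// Approximate billed Cloudflare ingestion per day, in GB, over the last week.
Cloudflare_CL
| where TimeGenerated > ago(7d)
| summarize BilledGB = sum(_BilledSize) / 1024.0 / 1024 / 1024 by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```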
sentinel | Cohesity Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cohesity-using-azure-functions.md | + + Title: "Cohesity (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cohesity (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cohesity (using Azure Functions) connector for Microsoft Sentinel ++The Cohesity function apps provide the ability to ingest Cohesity Datahawk ransomware alerts into Microsoft Sentinel. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | Cohesity_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Cohesity](https://support.cohesity.com/) | ++## Query samples ++**All Cohesity logs** + ```kusto +Cohesity_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Cohesity (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Azure Blob Storage connection string and container name**: Azure Blob Storage connection string and container name +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions that connect to the Azure Blob Storage and KeyVault. This might result in additional costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/), [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure KeyVault pricing page](https://azure.microsoft.com/pricing/details/key-vault/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Functions App. +++**STEP 1 - Get a Cohesity DataHawk API key (see troubleshooting [instruction 1](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/CohesitySecurity/Data%20Connectors/Helios2Sentinel/IncidentProducer))** +++**STEP 2 - Register Azure app ([link](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)) and save Application (client) ID, Directory (tenant) ID, and Secret Value ([instructions](/azure/healthcare-apis/register-application)). Grant it Azure Storage (user_impersonation) permission. Also, assign the 'Microsoft Sentinel Contributor' role to the application in the appropriate subscription.** +++**STEP 3 - Deploy the connector and the associated Azure Functions**. ++Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Cohesity data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-Cohesity-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the parameters that you created at the previous steps +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. 
++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cohesitydev1592001764720.cohesity_sentinel_data_connector?tab=Overview) in the Azure Marketplace. |
sentinel | Common Event Format Cef Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/common-event-format-cef-via-ama.md | Title: "Common Event Format (CEF) via AMA connector for Microsoft Sentinel" + Title: "Common Event Format (CEF) via AMA connector for Microsoft Sentinel (preview)" description: "Learn how to install the connector Common Event Format (CEF) via AMA to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 07/27/2023 -# Common Event Format (CEF) via AMA connector for Microsoft Sentinel +# Common Event Format (CEF) via AMA connector for Microsoft Sentinel (preview) Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by many security vendors to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2223547&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci). Common Event Format (CEF) is an industry standard format on top of Syslog messag | Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog<br/> |-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | +| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | |
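Because every CEF source forwarded through the AMA pipeline lands in the same CommonSecurityLog table, an inventory by vendor and product is a useful first check after switching to this connector. Only standard CommonSecurityLog columns are used:

```kusto
// Which CEF sources are reporting through the AMA pipeline, and how recently.
CommonSecurityLog
| where TimeGenerated > ago(1d)
| summarize Events = count(), LastEvent = max(TimeGenerated) by DeviceVendor, DeviceProduct
| sort by Events desc
```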
sentinel | Crowdstrike Falcon Data Replicator Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-data-replicator-using-azure-functions.md | + + Title: "Crowdstrike Falcon Data Replicator (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Crowdstrike Falcon Data Replicator (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Crowdstrike Falcon Data Replicator (using Azure Functions) connector for Microsoft Sentinel ++The [Crowdstrike](https://www.crowdstrike.com/) Falcon Data Replicator connector provides the capability to ingest raw event data from the [Falcon Platform](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/) events into Microsoft Sentinel. The connector provides ability to get events from Falcon Agents which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | AWS_KEY<br/>AWS_SECRET<br/>AWS_REGION_NAME<br/>QUEUE_URL<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-CrowdstrikeReplicator-functionapp | +| **Kusto function alias** | CrowdstrikeReplicator | +| **Kusto function url** | https://aka.ms/sentinel-crowdstrikereplicator-parser | +| **Log Analytics table(s)** | CrowdstrikeReplicatorLogs_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Data Replicator - All Activities** + ```kusto +CrowdstrikeReplicator + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Crowdstrike Falcon Data Replicator (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **SQS and AWS S3 account credentials/permissions**: **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, **QUEUE_URL** is required. [See the documentation to learn more about data pulling](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/). To start, contact CrowdStrike support. At your request they will create a CrowdStrike managed Amazon Web Services (AWS) S3 bucket for short term storage purposes as well as a SQS (simple queue service) account for monitoring changes to the S3 bucket. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the S3 bucket to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected. 
[Follow these steps](https://aka.ms/sentinel-crowdstrikereplicator-parser) to create the Kusto functions alias, **CrowdstrikeReplicator**. +++**STEP 1 - Contact CrowdStrike support to obtain the credentials and Queue URL.** ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Crowdstrike Falcon Data Replicator connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Crowdstrike Falcon Data Replicator connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CrowdstrikeReplicator-azuredeploy) +2. Select the preferred **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, **QUEUE_URL**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, **QUEUE_URL** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Crowdstrike Falcon Data Replicator connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-CrowdstrikeReplicator-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CrowdstrikeReplicatorXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. 
Add each of the following application settings individually, with their respective string values (case-sensitive): + AWS_KEY + AWS_SECRET + AWS_REGION_NAME + QUEUE_URL + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace. |
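After either deployment option for the Crowdstrike Falcon Data Replicator function app above, a quick way to confirm the function is ingesting data is to chart volume through the **CrowdstrikeReplicator** parser alias named in the row. A minimal sketch, with the one-day window and hourly bin chosen arbitrarily:

```kusto
// Hourly event volume returned by the CrowdstrikeReplicator parser over the last day.
CrowdstrikeReplicator
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```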
sentinel | Cyberarkepm Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberarkepm-using-azure-functions.md | + + Title: "CyberArkEPM (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector CyberArkEPM (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# CyberArkEPM (using Azure Functions) connector for Microsoft Sentinel ++The [CyberArk Endpoint Privilege Manager](https://www.cyberark.com/products/endpoint-privilege-manager/) data connector provides the capability to retrieve security event logs of the CyberArk EPM services and more events into Microsoft Sentinel through the REST API. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | CyberArkEPMUsername<br/>CyberArkEPMPassword<br/>CyberArkEPMServerURL<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-CyberArkEPMAPI-functionapp | +| **Log Analytics table(s)** | CyberArkEPM_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [CyberArk Support](https://www.cyberark.com/services-support/technical-support-contact/) | ++## Query samples ++**CyberArk EPM Events - All Activities.** + ```kusto +CyberArkEPM + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with CyberArkEPM (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **CyberArkEPMUsername**, **CyberArkEPMPassword** and **CyberArkEPMServerURL** are required for making API calls. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**CyberArkEPM**](https://aka.ms/sentinel-CyberArkEPM-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the CyberArk EPM API** ++ Follow the instructions to obtain the credentials. ++1. Use Username and Password for your CyberArk EPM account. 
+++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the CyberArk EPM data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the CyberArk EPM data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CyberArkEPMAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **CyberArkEPMUsername**, **CyberArkEPMPassword**, **CyberArkEPMServerURL** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the CyberArk EPM data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-CyberArkEPMAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CyberArkXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + CyberArkEPMUsername + CyberArkEPMPassword + CyberArkEPMServerURL + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cybr_epm_sentinel?tab=Overview) in the Azure Marketplace. |
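With the CyberArk EPM function app configured as above, daily ingestion can be checked through the **CyberArkEPM** parser that the solution deploys. A minimal sketch, assuming the parser is already installed; the seven-day window is arbitrary:

```kusto
// Daily count of CyberArk EPM events returned by the CyberArkEPM parser.
CyberArkEPM
| where TimeGenerated > ago(7d)
| summarize Events = count() by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```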
sentinel | Cybersixgill Actionable Alerts Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cybersixgill-actionable-alerts-using-azure-functions.md | + + Title: "Cybersixgill Actionable Alerts (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cybersixgill Actionable Alerts (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Cybersixgill Actionable Alerts (using Azure Functions) connector for Microsoft Sentinel ++Actionable alerts provide customized alerts based on configured assets ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Cybersixgill-Actionable-Alerts/Data%20Connectors/CybersixgillAlerts.zip?raw=true | +| **Log Analytics table(s)** | CyberSixgill_Alerts_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Cybersixgill](https://www.cybersixgill.com/) | ++## Query samples ++**All Alerts** + ```kusto +CyberSixgill_Alerts_CL + ``` ++++## Prerequisites ++To integrate with Cybersixgill Actionable Alerts (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **Client_ID** and **Client_Secret** are required for making API calls. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Cybersixgill API to pull Alerts into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Cybersixgill Actionable Alerts data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fgithub.com%2FAzure%2FAzure-Sentinel%2Fraw%2Fmaster%2FSolutions%2FCybersixgill-Actionable-Alerts%2FData%20Connectors%2Fazuredeploy_Connector_Cybersixgill_AzureFunction.json) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **Client ID**, **Client Secret**, **TimeInterval** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Cybersixgill Actionable Alerts data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. 
Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Cybersixgill-Actionable-Alerts/Data%20Connectors/CybersixgillAlerts.zip?raw=true) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CybersixgillAlertsXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + ClientID + ClientSecret + Polling + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us` +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cybersixgill1657701397011.azure-sentinel-cybersixgill-actionable-alerts?tab=Overview) in the Azure Marketplace. |
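Once the Cybersixgill function app is running, new alerts land in the `CyberSixgill_Alerts_CL` table listed in the row above. A minimal sketch for spot-checking the latest arrivals; the time window and row limit are arbitrary:

```kusto
// Most recent Cybersixgill actionable alerts from the last 24 hours.
CyberSixgill_Alerts_CL
| where TimeGenerated > ago(24h)
| sort by TimeGenerated desc
| take 50
```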
sentinel | Digital Shadows Searchlight Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-shadows-searchlight-using-azure-functions.md | + + Title: "Digital Shadows Searchlight (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Digital Shadows Searchlight (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Digital Shadows Searchlight (using Azure Functions) connector for Microsoft Sentinel ++The Digital Shadows data connector provides ingestion of the incidents and alerts from Digital Shadows Searchlight into the Microsoft Sentinel using the REST API. The connector will provide the incidents and alerts information such that it helps to examine, diagnose and analyse the potential security risks and threats. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | DigitalShadowsAccountID<br/>WorkspaceID<br/>WorkspaceKey<br/>DigitalShadowsKey<br/>DigitalShadowsSecret<br/>HistoricalDays<br/>DigitalShadowsURL<br/>ClassificationFilterOperation<br/>HighVariabilityClassifications<br/>FUNCTION_NAME<br/>logAnalyticsUri (optional)(add any other settings required by the Function App)Set the <code>DigitalShadowsURL</code> value to: <code>https://api.searchlight.app/v1</code>Set the <code>HighVariabilityClassifications</code> value to: <code>exposed-credential,marked-document</code>Set the <code>ClassificationFilterOperation</code> value to: <code>exclude</code> for exclude function app or <code>include</code> for include function app | +| **Azure function app code** | https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Digital%20Shadows/Data%20Connectors/Digital%20Shadows/digitalshadowsConnector.zip | +| **Log Analytics table(s)** | DigitalShadows_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Digital Shadows](https://www.digitalshadows.com/) | ++## Query samples ++**All Digital Shadows incidents and alerts ordered by time most recently raised** + ```kusto +DigitalShadows_CL + | order by raised_t desc + ``` ++++## Prerequisites ++To integrate with Digital Shadows Searchlight (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **Digital Shadows account ID, secret and key** is required. See the documentation to learn more about API on the `https://portal-digitalshadows.com/learn/searchlight-api/overview/description`. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to a 'Digital Shadows Searchlight' to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. 
+++**STEP 1 - Configuration steps for the 'Digital Shadows Searchlight' API** ++The provider should provide or link to detailed steps to configure the 'Digital Shadows Searchlight' API endpoint so that the Azure Function can authenticate to it successfully, get its authorization key or token, and pull the appliance's logs into Microsoft Sentinel. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the 'Digital Shadows Searchlight' connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the 'Digital Shadows Searchlight' API authorization key(s) or Token, readily available. +++++**Option 1 - Azure Resource Manager (ARM) Template** ++Use this method for automated deployment of the 'Digital Shadows Searchlight' connector. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-Digitalshadows-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, 'and/or Other required fields'. +>Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +++**Option 2 - Manual Deployment of Azure Functions** ++ Use the following step-by-step instructions to deploy the 'Digital Shadows Searchlight' connector manually with Azure Functions. ++1. Create a Function App ++1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp). +2. Click **+ Create** at the top. +3. In the **Basics** tab, ensure Runtime stack is set to **Python 3.8**. +4. In the **Hosting** tab, ensure **Plan type** is set to **'Consumption (Serverless)'**. +5. Select a Storage account. +6. 'Add other required configurations'. +7. 'Make other preferable configuration changes', if needed, then click **Create**. ++2. Import Function App Code (Zip deployment) ++1. Install Azure CLI +2. From terminal type `az functionapp deployment source config-zip -g ResourceGroup -n FunctionApp --src Zip File` and press Enter. Set the `ResourceGroup` value to: your resource group name. Set the `FunctionApp` value to: your newly created function app name. Set the `Zip File` value to: `digitalshadowsConnector.zip` (path to your zip file). Note: Download the zip file from the link - [Function App Code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Digital%20Shadows/Data%20Connectors/Digital%20Shadows/digitalshadowsConnector.zip) ++3. Configure the Function App ++1. In the Function App screen, click the Function App name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. 
Add each of the following application settings individually, under Name, with their respective string values (case-sensitive) under Value: + DigitalShadowsAccountID + WorkspaceID + WorkspaceKey + DigitalShadowsKey + DigitalShadowsSecret + HistoricalDays + DigitalShadowsURL + ClassificationFilterOperation + HighVariabilityClassifications + FUNCTION_NAME + logAnalyticsUri (optional) +(add any other settings required by the Function App) +Set the `DigitalShadowsURL` value to: `https://api.searchlight.app/v1` +Set the `HighVariabilityClassifications` value to: `exposed-credential,marked-document` +Set the `ClassificationFilterOperation` value to: `exclude` for exclude function app or `include` for include function app +>Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/digitalshadows1662022995707.digitalshadows_searchlight_for_sentinel?tab=Overview) in the Azure Marketplace. |
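After the Digital Shadows Searchlight function app is deployed and its application settings saved, recent incidents and alerts can be reviewed in `DigitalShadows_CL`, ordered by the `raised_t` field used in the sample query above. A minimal sketch; the window and row limit are arbitrary:

```kusto
// Latest Digital Shadows incidents and alerts raised in the past week.
DigitalShadows_CL
| where TimeGenerated > ago(7d)
| order by raised_t desc
| take 100
```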
sentinel | Exchange Security Insights Online Collector Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exchange-security-insights-online-collector-using-azure-functions.md | + + Title: "Exchange Security Insights Online Collector (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Exchange Security Insights Online Collector (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Exchange Security Insights Online Collector (using Azure Functions) connector for Microsoft Sentinel ++Connector used to push Exchange Online Security configuration for Microsoft Sentinel Analysis ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | ESIExchangeOnlineConfig_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | ++## Query samples ++**View how many Configuration entries exist on the table** + ```kusto +ESIExchangeOnlineConfig_CL + | summarize by GenerationInstanceID_g, EntryDate_s, ESIEnvironment_s + ``` ++++## Prerequisites ++To integrate with Exchange Security Insights Online Collector (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **microsoft.automation/automationaccounts permissions**: Read and write permissions to Azure Automation to create an Automation Account with a Runbook is required. [See the documentation to learn more about Automation Account](/azure/automation/overview). +++## Vendor installation instructions +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected. Follow the steps for each Parser to create the Kusto Functions alias: [**ExchangeConfiguration**](https://aka.ms/sentinel-ESI-ExchangeConfiguration-Online-parser) and [**ESI_ExchConfigAvailableEnvironments**](https://aka.ms/sentinel-ESI-ExchangeEnvironmentList-Online-parser) ++**STEP 1 - Parsers deployment** ++++> [!NOTE] + > This connector uses Azure Automation to connect to 'Exchange Online' to pull its Security analysis into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Automation pricing page](https://azure.microsoft.com/pricing/details/automation/) for details. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Automation** ++>**IMPORTANT:** Before deploying the 'ESI Exchange Online Security Configuration' connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Exchange Online tenant name (contoso.onmicrosoft.com), readily available. +++++**Option 1 - Azure Resource Manager (ARM) Template** ++Use this method for automated deployment of the 'ESI Exchange Online Security Configuration' connector. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ESI-ExchangeCollector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **Tenant Name**, 'and/or Other required fields'. +4. 
Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +++**Option 2 - Manual Deployment of Azure Automation** ++ Use the following step-by-step instructions to deploy the 'ESI Exchange Online Security Configuration' connector manually with Azure Automation. ++++**STEP 3 - Assign Microsoft Graph Permission and Exchange Online Permission to Managed Identity Account** ++To collect Exchange Online information and to retrieve user information and the member list of admin groups, the Automation account needs multiple permissions. +++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-esionline?tab=Overview) in the Azure Marketplace. |
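When the ESI Exchange Online collector runs on schedule, each run writes a configuration snapshot to `ESIExchangeOnlineConfig_CL`. A minimal sketch for checking when each environment was last collected, reusing the column names from the sample query in the row above:

```kusto
// Most recent collection time per Exchange Online environment.
ESIExchangeOnlineConfig_CL
| summarize LastCollection = max(TimeGenerated) by ESIEnvironment_s
| sort by LastCollection desc
```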
sentinel | Fortinet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet.md | Title: "Fortinet connector for Microsoft Sentinel" description: "Learn how to install the connector Fortinet to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 CommonSecurityLog | where DeviceProduct startswith "Fortigate" - | summarize count() by DestinationIP, DestinationPort + | summarize count() by DestinationIP, DestinationPort, TimeGenerated | sort by TimeGenerated |
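The Fortinet sample query was changed because `summarize count() by DestinationIP, DestinationPort` drops every column not named in the `by` clause, so the following `sort by TimeGenerated` had nothing to sort on; adding `TimeGenerated` to the `by` clause fixes that. An alternative sketch that keeps one row per destination while remaining sortable by time (an illustration of the same idea, not the documented sample):

```kusto
// Keep a single row per destination and carry the latest event time for sorting.
CommonSecurityLog
| where DeviceProduct startswith "Fortigate"
| summarize Events = count(), LastSeen = max(TimeGenerated) by DestinationIP, DestinationPort
| sort by LastSeen desc
```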
sentinel | Google Apigeex Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-apigeex-using-azure-functions.md | + + Title: "Google ApigeeX (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Google ApigeeX (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Google ApigeeX (using Azure Functions) connector for Microsoft Sentinel ++The [Google ApigeeX](https://cloud.google.com/apigee/docs) data connector provides the capability to ingest ApigeeX audit logs into Microsoft Sentinel using the GCP Logging API. Refer to [GCP Logging API documentation](https://cloud.google.com/logging/docs/reference/v2/rest) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-ApigeeXDataConnector-functionapp | +| **Log Analytics table(s)** | ApigeeX_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All ApigeeX logs** + ```kusto +ApigeeX_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Google ApigeeX (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **GCP service account**: GCP service account with permissions to read logs is required for GCP Logging API. Also json file with service account key is required. See the documentation to learn more about [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**ApigeeX**](https://aka.ms/sentinel-ApigeeXDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuring GCP and obtaining credentials** ++1. Make sure that Logging API is [enabled](https://cloud.google.com/apis/docs/getting-started#enabling_apis). ++2. [Create service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) with [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions) and [get service account key json file](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). ++3. Prepare GCP project ID where ApigeeX is located. 
+++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ApigeeXDataConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Google Cloud Platform Project Id**, **Google Cloud Platform Credentials File Content**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-ApigeeXDataConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + RESOURCE_NAMES + CREDENTIALS_FILE_CONTENT + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-googleapigeex?tab=Overview) in the Azure Marketplace. |
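With the ApigeeX function app deployed and its application settings saved, ingestion can be verified against the `ApigeeX_CL` table before relying on the **ApigeeX** parser. A minimal sketch; the one-day window and hourly bins are arbitrary:

```kusto
// Hourly volume of Google ApigeeX audit log records over the last day.
ApigeeX_CL
| where TimeGenerated > ago(1d)
| summarize Records = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```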
sentinel | Google Cloud Platform Cloud Monitoring Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-cloud-monitoring-using-azure-functions.md | + + Title: "Google Cloud Platform Cloud Monitoring (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Google Cloud Platform Cloud Monitoring (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Google Cloud Platform Cloud Monitoring (using Azure Functions) connector for Microsoft Sentinel ++The Google Cloud Platform Cloud Monitoring data connector provides the capability to ingest [GCP Monitoring metrics](https://cloud.google.com/monitoring/api/metrics_gcp) into Microsoft Sentinel using the GCP Monitoring API. Refer to [GCP Monitoring API documentation](https://cloud.google.com/monitoring/api/v3) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-GCPMonitorDataConnector-functionapp | +| **Log Analytics table(s)** | GCP_MONITORING_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All GCP Monitoring logs** + ```kusto +GCP_MONITORING_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Google Cloud Platform Cloud Monitoring (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **GCP service account**: GCP service account with permissions to read Cloud Monitoring metrics is required for GCP Monitoring API (required *Monitoring Viewer* role). Also json file with service account key is required. See the documentation to learn more about [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**GCP_MONITORING**](https://aka.ms/sentinel-GCPMonitorDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuring GCP and obtaining credentials** ++1. [Create service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) with Monitoring Viewer role and [get service account key json file](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). ++2. Prepare the list of GCP projects to get metrics from. 
[Learn more about GCP projects](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy). ++3. Prepare the list of [GCP metric types](https://cloud.google.com/monitoring/api/metrics_gcp) +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-GCPMonitorDataConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Google Cloud Platform Project Id List**, **Google Cloud Platform Metric Types List**, **Google Cloud Platform Credentials File Content**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-GCPMonitorDataConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. 
Add each of the following application settings individually, with their respective string values (case-sensitive): + GCP_PROJECT_ID + GCP_METRICS + GCP_CREDENTIALS_FILE_CONTENT + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-gcpmonitoring?tab=Overview) in the Azure Marketplace. |
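Because the GCP Cloud Monitoring connector polls metrics on an interval, a freshness check against `GCP_MONITORING_CL` is a simple way to confirm the function app keeps running after the settings above are saved. A minimal sketch:

```kusto
// How long ago the last GCP Monitoring record arrived.
GCP_MONITORING_CL
| summarize LastRecord = max(TimeGenerated)
| extend MinutesSinceLastRecord = datetime_diff('minute', now(), LastRecord)
```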
sentinel | Google Cloud Platform Dns Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-dns-using-azure-functions.md | + + Title: "Google Cloud Platform DNS (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Google Cloud Platform DNS (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Google Cloud Platform DNS (using Azure Functions) connector for Microsoft Sentinel ++The Google Cloud Platform DNS data connector provides the capability to ingest [Cloud DNS query logs](https://cloud.google.com/dns/docs/monitoring#using_logging) and [Cloud DNS audit logs](https://cloud.google.com/dns/docs/audit-logging) into Microsoft Sentinel using the GCP Logging API. Refer to [GCP Logging API documentation](https://cloud.google.com/logging/docs/api) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-GCPDNSDataConnector-functionapp | +| **Log Analytics table(s)** | GCP_DNS_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**All GCP DNS logs** + ```kusto +GCP_DNS_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Google Cloud Platform DNS (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **GCP service account**: GCP service account with permissions to read logs (with "logging.logEntries.list" permission) is required for GCP Logging API. Also json file with service account key is required. See the documentation to learn more about [permissions](https://cloud.google.com/logging/docs/access-control#permissions_and_roles), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**GCPCloudDNS**](https://aka.ms/sentinel-GCPDNSDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuring GCP and obtaining credentials** ++1. Make sure that Logging API is [enabled](https://cloud.google.com/apis/docs/getting-started#enabling_apis). ++2. 
[Create service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) with Logs Viewer role (or at least with "logging.logEntries.list" permission) and [get service account key json file](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). ++3. Prepare the list of GCP resources (organizations, folders, projects) to get logs from. [Learn more about GCP resources](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy). +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-GCPDNSDataConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Google Cloud Platform Resource Names**, **Google Cloud Platform Credentials File Content**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-GCPDNSDataConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. 
In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + RESOURCE_NAMES + CREDENTIALS_FILE_CONTENT + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-gcpdns?tab=Overview) in the Azure Marketplace. |
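Once the GCP DNS function app is configured, daily volume in `GCP_DNS_CL` gives a quick signal that query and audit logs are being pulled. A minimal sketch; the seven-day window is arbitrary:

```kusto
// Daily count of GCP Cloud DNS records ingested over the past week.
GCP_DNS_CL
| where TimeGenerated > ago(7d)
| summarize Records = count() by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```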
sentinel | Google Cloud Platform Iam Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-iam-using-azure-functions.md | + + Title: "Google Cloud Platform IAM (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Google Cloud Platform IAM (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Google Cloud Platform IAM (using Azure Functions) connector for Microsoft Sentinel ++The Google Cloud Platform Identity and Access Management (IAM) data connector provides the capability to ingest [GCP IAM logs](https://cloud.google.com/iam/docs/audit-logging) into Microsoft Sentinel using the GCP Logging API. Refer to [GCP Logging API documentation](https://cloud.google.com/logging/docs/api) for more information. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-GCPIAMDataConnector-functionapp | +| **Log Analytics table(s)** | GCP_IAM_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All GCP IAM logs** + ```kusto +GCP_IAM_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Google Cloud Platform IAM (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **GCP service account**: GCP service account with permissions to read logs is required for GCP Logging API. Also json file with service account key is required. See the documentation to learn more about [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**GCP_IAM**](https://aka.ms/sentinel-GCPIAMDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuring GCP and obtaining credentials** ++1. Make sure that Logging API is [enabled](https://cloud.google.com/apis/docs/getting-started#enabling_apis). ++2. (Optional) [Enable Data Access Audit logs](https://cloud.google.com/logging/docs/audit/configure-data-access#config-console-enable). ++3. 
[Create service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) with [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions) and [get service account key json file](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). ++4. Prepare the list of GCP resources (organizations, folders, projects) to get logs from. [Learn more about GCP resources](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy). +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Azure Blob Storage connection string and container name, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-GCPIAMDataConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Google Cloud Platform Resource Names**, **Google Cloud Platform Credentials File Content**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-GCPIAMDataConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. 
In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + RESOURCE_NAMES + CREDENTIALS_FILE_CONTENT + WORKSPACE_ID + SHARED_KEY + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-gcpiam?tab=Overview) in the Azure Marketplace. |
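To confirm the GCP IAM connector is ingesting data after deployment, here is a minimal sketch against the `GCP_IAM_CL` table documented above (no columns other than `TimeGenerated` are assumed):

```kusto
// Chart hourly GCP IAM log volume for the last day to verify ingestion.
GCP_IAM_CL
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| render timechart
```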
sentinel | Google Workspace G Suite Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-functions.md | + + Title: "Google Workspace (G Suite) (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Google Workspace (G Suite) (using Azure Functions) connector for Microsoft Sentinel ++The [Google Workspace](https://workspace.google.com/) data connector provides the capability to ingest Google Workspace Activity events into Microsoft Sentinel through the REST API. The connector provides ability to get [events](https://developers.google.com/admin-sdk/reports/v1/reference/activities) which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, track who signs in and when, analyze administrator activity, understand how users create and share content, and more review events in your org. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-GWorkspaceReportsAPI-functionapp | +| **Log Analytics table(s)** | GWorkspace_ReportsAPI_admin_CL<br/> GWorkspace_ReportsAPI_calendar_CL<br/> GWorkspace_ReportsAPI_drive_CL<br/> GWorkspace_ReportsAPI_login_CL<br/> GWorkspace_ReportsAPI_mobile_CL<br/> GWorkspace_ReportsAPI_token_CL<br/> GWorkspace_ReportsAPI_user_accounts_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**Google Workspace Events - All Activities** + ```kusto +GWorkspaceActivityReports + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Admin Activity** + ```kusto +GWorkspace_ReportsAPI_admin_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Calendar Activity** + ```kusto +GWorkspace_ReportsAPI_calendar_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Drive Activity** + ```kusto +GWorkspace_ReportsAPI_drive_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Login Activity** + ```kusto +GWorkspace_ReportsAPI_login_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Mobile Activity** + ```kusto +GWorkspace_ReportsAPI_mobile_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - Token Activity** + ```kusto +GWorkspace_ReportsAPI_token_CL + + | sort by TimeGenerated desc + ``` ++**Google Workspace Events - User Accounts Activity** + ```kusto +GWorkspace_ReportsAPI_user_accounts_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Google Workspace (G Suite) (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **GooglePickleString** is required for REST API. [See the documentation to learn more about API](https://developers.google.com/admin-sdk/reports/v1/reference/activities). Please find the instructions to obtain the credentials in the configuration section below. 
You can check all [requirements and follow the instructions](https://developers.google.com/admin-sdk/reports/v1/quickstart/python) from here as well. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Google Reports API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports), on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. +++**STEP 1 - Ensure the prerequisites to obtain the Google Pickel String** +++++1. [Python 3 or above](https://www.python.org/downloads/) is installed. +2. The [pip package management tool](https://www.geeksforgeeks.org/download-and-install-pip-latest-version/) is available. +3. A Google Workspace domain with [API access enabled](https://support.google.com/a/answer/7281227?visit_id=637889155425319296-3895555646&rd=1). +4. A Google account in that domain with administrator privileges. +++**STEP 2 - Configuration steps for the Google Reports API** ++1. Login to Google cloud console with your Workspace Admin credentials https://console.cloud.google.com. +2. Using the search option (available at the top middle), Search for ***APIs & Services*** +3. From ***APIs & Services*** -> ***Enabled APIs & Services***, enable **Admin SDK API** for this project. + 4. Go to ***APIs & Services*** -> ***OAuth Consent Screen***. If not already configured, create a OAuth Consent Screen with the following steps: + 1. Provide App Name and other mandatory information. + 2. Add authorized domains with API Access Enabled. + 3. In Scopes section, add **Admin SDK API** scope. + 4. In Test Users section, make sure the domain admin account is added. + 5. Go to ***APIs & Services*** -> ***Credentials*** and create OAuth 2.0 Client ID + 1. Click on Create Credentials on the top and select Oauth client Id. + 2. Select Web Application from the Application Type drop down. + 3. Provide a suitable name to the Web App and add http://localhost:8081/ as one of the Authorized redirect URIs. + 4. Once you click Create, download the JSON from the pop-up that appears. Rename this file to "**credentials.json**". + 6. To fetch Google Pickel String, run the [python script](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/GoogleWorkspaceReports/Data%20Connectors/get_google_pickle_string.py) from the same folder where credentials.json is saved. + 1. When popped up for sign-in, use the domain admin account credentials to login. 
+>**Note:** This script is supported only on Windows operating system. + 7. From the output of the previous step, copy Google Pickle String (contained within single quotation marks) and keep it handy. It will be needed on Function App deployment step. +++**STEP 3 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Workspace data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Workspace GooglePickleString readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Google Workspace data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinelgworkspaceazuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **GooglePickleString** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Google Workspace data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-GWorkspaceReportsAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. GWorkspaceXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + GooglePickleString + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +4. 
(Optional) Change the default delays if required. ++ > **NOTE:** The following default values for ingestion delays have been added for different set of logs from Google Workspace based on Google [documentation](https://support.google.com/a/answer/7061566). These can be modified based on environmental requirements. + Fetch Delay - 10 minutes + Calendar Fetch Delay - 6 hours + Chat Fetch Delay - 1 day + User Accounts Fetch Delay - 3 hours + Login Fetch Delay - 6 hours ++5. Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +6. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-googleworkspacereports?tab=Overview) in the Azure Marketplace. |
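To verify that each Google Workspace log stream is arriving once the application settings are saved, here is a minimal sketch that unions a subset of the custom tables listed in the connector attributes (add or remove tables to match your deployment):

```kusto
// Compare ingestion volume across selected Google Workspace report tables for the last day.
union withsource=SourceTable
    GWorkspace_ReportsAPI_admin_CL,
    GWorkspace_ReportsAPI_login_CL,
    GWorkspace_ReportsAPI_drive_CL,
    GWorkspace_ReportsAPI_token_CL
| where TimeGenerated > ago(1d)
| summarize Events = count() by SourceTable
| sort by Events desc
```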
sentinel | Holm Security Asset Data Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/holm-security-asset-data-using-azure-functions.md | + + Title: "Holm Security Asset Data (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Holm Security Asset Data (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Holm Security Asset Data (using Azure Functions) connector for Microsoft Sentinel ++The connector provides the capability to poll data from Holm Security Center into Microsoft Sentinel. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | net_assets_CL<br/> web_assets_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Holm Security](https://support.holmsecurity.com/hc/en-us) | ++## Query samples ++**All low net assets** + ```kusto +net_assets_Cl + + | where severity_s == 'low' + ``` ++**All low web assets** + ```kusto +web_assets_Cl + + | where severity_s == 'low' + ``` ++++## Prerequisites ++To integrate with Holm Security Asset Data (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Holm Security API Token**: Holm Security API Token is required. [Holm Security API Token](https://support.holmsecurity.com/hc/en-us) +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to a Holm Security Assets to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Holm Security API** ++ [Follow these instructions](https://support.holmsecurity.com/hc/en-us/articles/360027651591-How-do-I-set-up-an-API-token-) to create an API authentication token. +++**STEP 2 - Use the below deployment option to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Holm Security connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Holm Security API authorization Token, readily available. ++++Azure Resource Manager (ARM) Template Deployment ++**Option 1 - Azure Resource Manager (ARM) Template** ++Use this method for automated deployment of the Holm Security connector. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-holmsecurityassets-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, 'and/or Other required fields'. +>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. 
Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/holmsecurityswedenab1639511288603.holmsecurity_sc_sentinel?tab=Overview) in the Azure Marketplace. |
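After deployment, the following minimal sketch can confirm that asset data is arriving; it uses the `net_assets_CL` and `web_assets_CL` table names listed in the connector attributes and the `severity_s` column shown in the query samples:

```kusto
// Count net and web assets by severity to confirm both tables are populated.
union withsource=AssetTable net_assets_CL, web_assets_CL
| summarize Assets = count() by AssetTable, severity_s
| sort by AssetTable asc, Assets desc
```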
sentinel | Imperva Cloud Waf Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/imperva-cloud-waf-using-azure-functions.md | + + Title: "Imperva Cloud WAF (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Imperva Cloud WAF (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Imperva Cloud WAF (using Azure Functions) connector for Microsoft Sentinel ++The [Imperva Cloud WAF](https://www.imperva.com/resources/resource-library/datasheets/imperva-cloud-waf/) data connector provides the capability to integrate and ingest Web Application Firewall events into Microsoft Sentinel through the REST API. Refer to Log integration [documentation](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Download) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | ImpervaAPIID<br/>ImpervaAPIKey<br/>ImpervaLogServerURI<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Log Analytics table(s)** | ImpervaWAFCloud_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Imperva Cloud WAF Events - All Activities** + ```kusto +ImpervaWAFCloud + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Imperva Cloud WAF (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **ImpervaAPIID**, **ImpervaAPIKey**, **ImpervaLogServerURI** are required for the API. [See the documentation to learn more about Setup Log Integration process](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Setuplogintegration). Check all [requirements and follow the instructions](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Setuplogintegration) for obtaining credentials. Please note that this connector uses CEF log event format. [More information](https://docs.imperva.com/bundle/cloud-application-security/page/more/log-file-structure.htm#Logfilestructure) about log format. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Imperva Cloud API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Functions App. 
+++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**ImpervaWAFCloud**](https://aka.ms/sentinel-impervawafcloud-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the Log Integration** ++ [Follow the instructions](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Setuplogintegration) to obtain the credentials. ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Functions** ++>**IMPORTANT:** Before deploying the Workspace data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Imperva Cloud WAF data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-impervawafcloud-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **ImpervaAPIID**, **ImpervaAPIKey**, **ImpervaLogServerURI** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Imperva Cloud WAF data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure functions development. ++1. Download the [Azure Functions App](https://aka.ms/sentinel-impervawafcloud-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ImpervaCloudXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. 
In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + ImpervaAPIID + ImpervaAPIKey + ImpervaLogServerURI + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the Log Analytics API endpoint for dedicated clouds. For example, for the public cloud, leave the value empty; for the Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-impervawafcloud?tab=Overview) in the Azure Marketplace. |
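Once data is flowing, here is a minimal sketch for reviewing event volume through the `ImpervaWAFCloud` parser referenced above (the parser must already be deployed with the solution):

```kusto
// Hourly Imperva Cloud WAF event volume for the last 24 hours, via the solution's parser.
ImpervaWAFCloud
| where TimeGenerated > ago(24h)
| summarize Events = count() by bin(TimeGenerated, 1h)
| render timechart
```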
sentinel | Infoblox Cloud Data Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-cloud-data-connector.md | Title: "Infoblox Cloud Data connector for Microsoft Sentinel" description: "Learn how to install the connector Infoblox Cloud Data to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 InfobloxCDC ## Vendor installation instructions ->**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://aka.ms/sentinel-InfobloxCloudDataConnector-parser) which is deployed with the Microsoft Sentinel Solution. +>**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://aka.ms/sentinel-InfobloxCloudDataConnector-parser) which is deployed with the solution. ->**IMPORTANT:** This Sentinel data connector assumes an Infoblox Data Connector host has already been created and configured in the Infoblox Cloud Services Portal (CSP). As the [**Infoblox Data Connector**](https://docs.infoblox.com/display/BloxOneThreatDefense/Deploying+the+Data+Connector+Solution) is a feature of BloxOne Threat Defense, access to an appropriate BloxOne Threat Defense subscription is required. See this [**quick-start guide**](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-data-connector.pdf) for more information and licensing requirements. +>**IMPORTANT:** This Microsoft Sentinel data connector assumes an Infoblox Cloud Data Connector host has already been created and configured in the Infoblox Cloud Services Portal (CSP). As the [**Infoblox Cloud Data Connector**](https://docs.infoblox.com/display/BloxOneThreatDefense/Deploying+the+Data+Connector+Solution) is a feature of BloxOne Threat Defense, access to an appropriate BloxOne Threat Defense subscription is required. See this [**quick-start guide**](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-data-connector.pdf) for more information and licensing requirements. 1. Linux Syslog agent configuration Install and configure the Linux agent to collect your Common Event Format (CEF) 1.1 Select or create a Linux machine -Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. +Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Microsoft Sentinel or other clouds. 1.2 Install the CEF collector on the Linux machine Follow the steps below to configure the Infoblox CDC to send BloxOne data to Mic 2. Navigate to **Manage > Data Connector**. 3. Click the **Destination Configuration** tab at the top. 4. Click **Create > Syslog**. + - **Name**: Give the new Destination a meaningful **name**, such as **Microsoft-Sentinel-Destination**. - **Description**: Optionally give it a meaningful **description**. - **State**: Set the state to **Enabled**. - **Format**: Set the format to **CEF**. Follow the steps below to configure the Infoblox CDC to send BloxOne data to Mic - Click **Save & Close**. 5. Click the **Traffic Flow Configuration** tab at the top. 6. Click **Create**.+ - **Name**: Give the new Traffic Flow a meaningful **name**, such as **Microsoft-Sentinel-Flow**. 
- **Description**: Optionally give it a meaningful **description**. - **State**: Set the state to **Enabled**. - Expand the **CDC Enabled Host** section. |
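After the CEF collector and the Infoblox destination and traffic flow are configured, here is a minimal sketch to confirm BloxOne events are reaching the workspace through the `InfobloxCDC` parser named above (the parser is deployed with the solution):

```kusto
// Show the most recent Infoblox BloxOne events to verify the CEF pipeline end to end.
InfobloxCDC
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
| take 20
```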
sentinel | Infoblox Nios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-nios.md | Title: "Infoblox NIOS connector for Microsoft Sentinel" description: "Learn how to install the connector Infoblox NIOS to connect your data source to Microsoft Sentinel." Previously updated : 04/18/2023 Last updated : 07/26/2023 The [Infoblox Network Identity Operating System (NIOS)](https://www.infoblox.com | | | | **Log Analytics table(s)** | Syslog (InfobloxNIOS)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Infoblox](https://www.infoblox.com/support/) | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples |
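A minimal query sketch for this connector; the `InfobloxNIOS` alias is an assumption based on the Log Analytics table attribute above (Syslog with an InfobloxNIOS parser), so verify the function alias in your workspace:

```kusto
// Most recent Infoblox NIOS records surfaced through the assumed InfobloxNIOS parser alias.
InfobloxNIOS
| sort by TimeGenerated desc
| take 20
```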
sentinel | Mailrisk By Secure Practice Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mailrisk-by-secure-practice-using-azure-functions.md | + + Title: "MailRisk by Secure Practice (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector MailRisk by Secure Practice (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# MailRisk by Secure Practice (using Azure Functions) connector for Microsoft Sentinel ++Data connector to push emails from MailRisk into Microsoft Sentinel Log Analytics. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | MailRiskEmails_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Secure Practice](https://securepractice.co/support) | ++## Query samples ++**All emails** + ```kusto +MailRiskEmails_CL ++ | sort by TimeGenerated desc + ``` ++**Emails with SPF pass** + ```kusto +MailRiskEmails_CL ++ | where spf_s == 'pass' ++ | sort by TimeGenerated desc + ``` ++**Emails with specific category** + ```kusto +MailRiskEmails_CL ++ | where Category == 'scam' ++ | sort by TimeGenerated desc + ``` ++**Emails with link urls that contain the string "microsoft"** + ```kusto +MailRiskEmails_CL ++ | sort by TimeGenerated desc ++ | mv-expand link = parse_json(links_s) ++ | where link.url contains "microsoft" + ``` ++++## Prerequisites ++To integrate with MailRisk by Secure Practice (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **API credentials**: Your Secure Practice API key pair is also needed, which are created in the [settings in the admin portal](https://manage.securepractice.co/settings/security). If you have lost your API secret, you can generate a new key pair (WARNING: Any other integrations using the old key pair will stop working). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Secure Practice API to push logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++Please have these the Workspace ID and Workspace Primary Key (can be copied from the following), readily available. ++++Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the MailRisk data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-mailrisk-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **Secure Practice API Key**, **Secure Practice API Secret** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Manual deployment ++In the open source repository on [GitHub](https://github.com/securepractice/mailrisk-sentinel-connector) you can find instructions for how to manually deploy the data connector. 
++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/securepracticeas1650887373770.microsoft-sentinel-solution-mailrisk?tab=Overview) in the Azure Marketplace. |
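Building on the query samples above, here is a minimal sketch that summarizes ingested emails per day and category, using only the `MailRiskEmails_CL` table and the `Category` column already shown:

```kusto
// Daily count of ingested MailRisk emails by category for the last 7 days.
MailRiskEmails_CL
| where TimeGenerated > ago(7d)
| summarize Emails = count() by bin(TimeGenerated, 1d), Category
| sort by TimeGenerated desc
```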
sentinel | Morphisec Utpp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/morphisec-utpp.md | Title: "Morphisec UTPP connector for Microsoft Sentinel" description: "Learn how to install the connector Morphisec UTPP to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 Integrate vital insights from your security products with the Morphisec Data Con | **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser | | **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Morphisec](https://support.morphisec.com/hc) | +| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) | ## Query samples |
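A minimal query sketch for this connector; it assumes the Kusto function deployed from the parser URL above is exposed under the alias `Morphisec`, which may differ in your workspace:

```kusto
// Most recent Morphisec UTPP events surfaced through the assumed Morphisec parser alias.
Morphisec
| sort by TimeGenerated desc
| take 20
```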
sentinel | Mulesoft Cloudhub Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mulesoft-cloudhub-using-azure-functions.md | + + Title: "MuleSoft Cloudhub (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector MuleSoft Cloudhub (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# MuleSoft Cloudhub (using Azure Functions) connector for Microsoft Sentinel ++The [MuleSoft Cloudhub](https://www.mulesoft.com/platform/saas/cloudhub-ipaas-cloud-based-integration) data connector provides the capability to retrieve logs from Cloudhub applications using the Cloudhub API and more events into Microsoft Sentinel through the REST API. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | MuleSoftEnvId<br/>MuleSoftAppName<br/>MuleSoftUsername<br/>MuleSoftPassword<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-MuleSoftCloudhubAPI-functionapp | +| **Log Analytics table(s)** | MuleSoft_Cloudhub_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**MuleSoft Cloudhub Events - All Activities.** + ```kusto +MuleSoft_Cloudhub_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with MuleSoft Cloudhub (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **MuleSoftEnvId**, **MuleSoftAppName**, **MuleSoftUsername** and **MuleSoftPassword** are required for making API calls. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**MuleSoftCloudhub**](https://aka.ms/sentinel-MuleSoftCloudhub-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the MuleSoft Cloudhub API** ++ Follow the instructions to obtain the credentials. ++1. Obtain the **MuleSoftEnvId**, **MuleSoftAppName**, **MuleSoftUsername** and **MuleSoftPassword** using the [documentation](https://help.mulesoft.com/s/article/How-to-get-Cloudhub-application-information-using-Anypoint-Platform-API). +2. 
Save credentials for using in the data connector. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the MuleSoft Cloudhub data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). +++++**Option 1 - Azure Resource Manager (ARM) Template** ++Use this method for automated deployment of the MuleSoft Cloudhub data connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-MuleSoftCloudhubAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **MuleSoftEnvId**, **MuleSoftAppName**, **MuleSoftUsername** and **MuleSoftPassword** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +++**Option 2 - Manual Deployment of Azure Functions** ++ Use the following step-by-step instructions to deploy the MuleSoft Cloudhub data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-MuleSoftCloudhubAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. MuleSoftXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select ** New application setting**. +3. 
Add each of the following application settings individually, with their respective string values (case-sensitive): + MuleSoftEnvId + MuleSoftAppName + MuleSoftUsername + MuleSoftPassword + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mulesoft?tab=Overview) in the Azure Marketplace. |
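After the application settings are saved, here is a minimal sketch to confirm CloudHub logs are arriving, using the `MuleSoft_Cloudhub_CL` table documented above:

```kusto
// Hourly MuleSoft CloudHub log volume for the last day.
MuleSoft_Cloudhub_CL
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated desc
```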
sentinel | Netclean Proactive Incidents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netclean-proactive-incidents.md | + + Title: "Netclean ProActive Incidents connector for Microsoft Sentinel" +description: "Learn how to install the connector Netclean ProActive Incidents to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Netclean ProActive Incidents connector for Microsoft Sentinel ++This connector uses the Netclean Webhook (required) and Logic Apps to push data into Microsoft Sentinel Log Analytics. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | Netclean_Incidents_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [NetClean](https://www.netclean.com/contact) | ++## Query samples ++**Netclean - All Activities.** + ```kusto +Netclean_Incidents_CL + | sort by TimeGenerated desc + ``` ++++## Vendor installation instructions +++> [!NOTE] + > The data connector relies on Azure Logic Apps to receive and push data to Log Analytics. This might result in additional data ingestion costs. + It's possible to test this without Logic Apps or NetClean ProActive; see option 2. ++++ Option 1: deploy the Logic App (requires NetClean ProActive) ++1. Download and install the Logic App here: + https://portal.azure.com/#create/netcleantechnologiesab1651557549734.netcleanlogicappnetcleanproactivelogicapp) +2. Go to your newly created Logic App. + In the Logic App designer, click +New Step, search for "Azure Log Analytics Data Collector", click it, and select "Send Data". + Enter the Custom Log Name: Netclean_Incidents and a dummy value in the JSON request body, and click Save. + Go to code view on the top ribbon and scroll down to line ~100; it should start with "Body". + Replace the line entirely with: + + "body": 
"{\n\"Hostname\":\"@{variables('machineName')}\",\n\"agentType\":\"@{triggerBody()['value']['agent']['type']}\",\n\"Identifier\":\"@{triggerBody()?['key']?['identifier']}\",\n\"type\":\"@{triggerBody()?['key']?['type']}\",\n\"version\":\"@{triggerBody()?['value']?['incidentVersion']}\",\n\"foundTime\":\"@{triggerBody()?['value']?['foundTime']}\",\n\"detectionMethod\":\"@{triggerBody()?['value']?['detectionHashType']}\",\n\"agentInformatonIdentifier\":\"@{triggerBody()?['value']?['device']?['identifier']}\",\n\"osVersion\":\"@{triggerBody()?['value']?['device']?['operatingSystemVersion']}\",\n\"machineName\":\"@{variables('machineName')}\",\n\"microsoftCultureId\":\"@{triggerBody()?['value']?['device']?['microsoftCultureId']}\",\n\"timeZoneId\":\"@{triggerBody()?['value']?['device']?['timeZoneName']}\",\n\"microsoftGeoId\":\"@{triggerBody()?['value']?['device']?['microsoftGeoId']}\",\n\"domainname\":\"@{variables('domain')}\",\n\"Agentversion\":\"@{triggerBody()['value']['agent']['version']}\",\n\"Agentidentifier\":\"@{triggerBody()['value']['identifier']}\",\n\"loggedOnUsers\":\"@{variables('Usernames')}\",\n\"size\":\"@{triggerBody()?['value']?['file']?['size']}\",\n\"creationTime\":\"@{triggerBody()?['value']?['file']?['creationTime']}\",\n\"lastAccessTime\":\"@{triggerBody()?['value']?['file']?['lastAccessTime']}\",\n\"lastWriteTime\":\"@{triggerBody()?['value']?['file']?['lastModifiedTime']}\",\n\"sha1\":\"@{triggerBody()?['value']?['file']?['calculatedHashes']?['sha1']}\",\n\"nearbyFiles_sha1\":\"@{variables('nearbyFiles_sha1s')}\",\n\"externalIP\":\"@{triggerBody()?['value']?['device']?['resolvedExternalIp']}\",\n\"domain\":\"@{variables('domain')}\",\n\"hasCollectedNearbyFiles\":\"@{variables('hasCollectedNearbyFiles')}\",\n\"filePath\":\"@{replace(triggerBody()['value']['file']['path'], '\\', '\\\\')}\",\n\"m365WebUrl\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['webUrl']}\",\n\"m365CreatedBymail\":\"@{triggerBody()?['value']?['file']?['createdBy']?['graphIdentity']?['user']?['mail']}\",\n\"m365LastModifiedByMail\":\"@{triggerBody()?['value']?['file']?['lastModifiedBy']?['graphIdentity']?['user']?['mail']}\",\n\"m365LibraryId\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['library']?['id']}\",\n\"m365LibraryDisplayName\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['library']?['displayName']}\",\n\"m365Librarytype\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['library']?['type']}\",\n\"m365siteid\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['site']?['id']}\",\n\"m365sitedisplayName\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['site']?['displayName']}\",\n\"m365sitename\":\"@{triggerBody()?['value']?['file']?['microsoft365']?['parent']?['name']}\",\n\"countOfAllNearByFiles\":\"@{variables('countOfAllNearByFiles')}\",\n\n}", + click save +3. Copy the HTTP POST URL +4. Go to your NetClean ProActive web console, and go to settings, Under Webhook configure a new webhook using the URL copied from step 3 + 5. Verify functionality by triggering a Demo Incident. ++ Option 2 (Testing only) ++Ingest data using a api function. please use the script found on [Send log data to Azure Monitor by using the HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api?tabs=powershell) +Replace the CustomerId and SharedKey values with your values +Replace the content in $json variable to the sample data. 
+Set the LogType variable to **Netclean_Incidents_CL** +Run the script. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netcleantechnologiesab1651557549734.azure-sentinel-solution-netclean-proactive?tab=Overview) in the Azure Marketplace. |
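Whether the data arrives through the Logic App (option 1) or the test script (option 2), here is a minimal sketch to confirm incidents landed in the `Netclean_Incidents_CL` table referenced above:

```kusto
// Show the most recent NetClean ProActive incidents to verify ingestion.
Netclean_Incidents_CL
| where TimeGenerated > ago(1d)
| sort by TimeGenerated desc
| take 10
```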
sentinel | Netskope Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-functions.md | + + Title: "Netskope (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Netskope (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Netskope (using Azure Functions) connector for Microsoft Sentinel ++The [Netskope Cloud Security Platform](https://www.netskope.com/platform) connector provides the capability to ingest Netskope logs and events into Microsoft Sentinel. The connector provides visibility into Netskope Platform Events and Alerts in Microsoft Sentinel to improve monitoring and investigation capabilities. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | apikey<br/>workspaceID<br/>workspaceKey<br/>uri<br/>timeInterval<br/>logTypes<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Netskope/Data%20Connectors/Netskope/AzureFunctionNetskope/run.ps1 | +| **Log Analytics table(s)** | Netskope_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Netskope](https://www.netskope.com/services#support) | ++## Query samples ++**Top 10 Users** + ```kusto +Netskope + + | summarize count() by SrcUserName + + | top 10 by count_ + ``` ++**Top 10 Alerts** + ```kusto +Netskope + + | where isnotempty(AlertName) + + | summarize count() by AlertName ++ | top 10 by count_ + ``` ++++## Prerequisites ++To integrate with Netskope (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to Netskope to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Netskope and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Netskope/Parsers/Netskope.txt), on the second line of the query, enter the hostname(s) of your Netskope device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. 
+++**STEP 1 - Configuration steps for the Netskope API** ++ [Follow these instructions](https://docs.netskope.com/en/rest-api-v1-overview.html) provided by Netskope to obtain an API Token. **Note:** A Netskope account is required +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Netskope connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Netskope API Authorization Token, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++This method provides an automated deployment of the Netskope connector using an ARM Tempate. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-netskope-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **API Key**, and **URI**. + - Use the following schema for the `uri` value: `https://<Tenant Name>.goskope.com` Replace `<Tenant Name>` with your domain. + - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. + - The default **Log Types** is set to pull all 6 available log types (`alert, page, application, audit, infrastructure, network`), remove any are not required. + - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +6. After successfully deploying the connector, download the Kusto Function to normalize the data fields. [Follow the steps](https://aka.ms/sentinelgithubparsersnetskope) to use the Kusto function alias, **Netskope**. ++Option 2 - Manual Deployment of Azure Functions ++This method provides the step-by-step instructions to deploy the Netskope connector manually with Azure Function. +++**1. Create a Function App** ++1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**. +2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**. +3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected. +4. Make other preferrable configuration changes, if needed, then click **Create**. +++**2. Import Function App Code** ++1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**. +2. Select **Timer Trigger**. +3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data), click **Create**. +4. Click on **Code + Test** on the left pane. +5. 
Copy the [Function App Code](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Netskope/Data%20Connectors/Netskope/AzureFunctionNetskope/run.ps1) and paste into the Function App `run.ps1` editor. +6. Click **Save**. +++**3. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following seven (7) application settings individually, with their respective string values (case-sensitive): + apikey + workspaceID + workspaceKey + uri + timeInterval + logTypes + logAnalyticsUri (optional) +> - Enter the URI that corresponds to your region. The `uri` value must follow the following schema: `https://<Tenant Name>.goskope.com` - There is no need to add subsequent parameters to the URI, the Function App will dynamically append the parameters in the proper format. +> - Set the `timeInterval` (in minutes) to the default value of `5` to correspond to the default Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion. +> - Set the `logTypes` to `alert, page, application, audit, infrastructure, network` - This list represents all the available log types. Select the log types based on logging requirements, separating each by a single comma. +> - Note: If using Azure Key Vault, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. +5. After successfully deploying the connector, download the Kusto Function to normalize the data fields. [Follow the steps](https://aka.ms/sentinelgithubparsersnetskope) to use the Kusto function alias, **Netskope**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netskope.netskope_mss?tab=Overview) in the Azure Marketplace. |
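Building on the Netskope query samples above, the following KQL is a minimal sketch for trending alert volume over time. It assumes the **Netskope** parser alias deployed with the solution and the standard Log Analytics `TimeGenerated` column; `AlertName` is taken from the sample queries, and other column names may differ in your workspace.

```kusto
// Daily alert volume per alert name, assuming the Netskope parser alias is deployed
Netskope
| where isnotempty(AlertName)
| summarize AlertCount = count() by AlertName, bin(TimeGenerated, 1d)
| sort by TimeGenerated desc, AlertCount desc
```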
sentinel | Nxlog Dns Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md | Title: "NXLog DNS Logs connector for Microsoft Sentinel" description: "Learn how to install the connector NXLog DNS Logs to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows | | | | **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) | +| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) | ## Query samples |
sentinel | Onelogin Iam Platform Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/onelogin-iam-platform-using-azure-functions.md | + + Title: "OneLogin IAM Platform (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector OneLogin IAM Platform (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# OneLogin IAM Platform (using Azure Functions) connector for Microsoft Sentinel ++The [OneLogin](https://www.onelogin.com/) data connector provides the capability to ingest common OneLogin IAM Platform events into Microsoft Sentinel through Webhooks. The OneLogin Event Webhook API, also known as the Event Broadcaster, sends batches of events in near real-time to an endpoint that you specify. When a change occurs in OneLogin, an HTTPS POST request with event information is sent to a callback data connector URL. Refer to [Webhooks documentation](https://developers.onelogin.com/api-docs/1/events/webhooks) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | OneLoginBearerToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-OneLogin-functionapp | +| **Log Analytics table(s)** | OneLogin_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**OneLogin Events - All Activities.** + ```kusto +OneLogin + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with OneLogin IAM Platform (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Webhooks Credentials/permissions**: **OneLoginBearerToken**, **Callback URL** are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010469). You need to generate **OneLoginBearerToken** according to your security requirements and use it in the **Custom Headers** section in the format: Authorization: Bearer **OneLoginBearerToken**. Logs Format: JSON Array. +++## Vendor installation instructions +++> [!NOTE] + > This data connector uses Azure Functions with an HTTP trigger that waits for POST requests with logs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. 
+++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**OneLogin**](https://aka.ms/sentinel-OneLogin-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the OneLogin** ++ Follow the [instructions](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010469) to configure Webhooks. ++1. Generate the **OneLoginBearerToken** according to your password policy. +2. Set Custom Header in the format: Authorization: Bearer OneLoginBearerToken. +3. Use JSON Array Logs Format. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the OneLogin data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the OneLogin data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-OneLogin-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **OneLoginBearerToken** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. +6. After deploying open Function App page, select your app, go to the **Functions** and click **Get Function Url** copy it and follow p.7 from STEP 1. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the OneLogin data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-OneLogin-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. OneLoginXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. 
A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + OneLoginBearerToken + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-oneloginiam?tab=Overview) in the Azure Marketplace. |
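As a quick post-deployment check for the OneLogin connector above, a sketch like the following can confirm that webhook events are arriving. It assumes the **OneLogin** parser alias shipped with the solution and relies only on the standard `TimeGenerated` column; adjust if your parser exposes different fields.

```kusto
// Hourly count of OneLogin events received in the last day
OneLogin
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated desc
```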
sentinel | Oracle Cloud Infrastructure Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-cloud-infrastructure-using-azure-functions.md | + + Title: "Oracle Cloud Infrastructure (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Oracle Cloud Infrastructure (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Oracle Cloud Infrastructure (using Azure Functions) connector for Microsoft Sentinel ++The Oracle Cloud Infrastructure (OCI) data connector provides the capability to ingest OCI Logs from [OCI Stream](https://docs.oracle.com/iaas/Content/Streaming/Concepts/streamingoverview.htm) into Microsoft Sentinel using the [OCI Streaming REST API](https://docs.oracle.com/iaas/api/#/streaming/streaming/20180418). ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-functionapp | +| **Log Analytics table(s)** | OCI_Logs_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**All OCI Events** + ```kusto +OCI_Logs_CL ++ | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Oracle Cloud Infrastructure (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **OCI API Credentials**: **API Key Configuration File** and **Private Key** are required for OCI API connection. See the documentation to learn more about [creating keys for API access](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm) +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected [**OCILogs**](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-parser) which is deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Creating Stream** ++1. Log in to OCI console and go to *navigation menu* -> *Analytics & AI* -> *Streaming* +2. Click *Create Stream* +3. Select Stream Pool or create a new one +4. Provide the *Stream Name*, *Retention*, *Number of Partitions*, *Total Write Rate*, *Total Read Rate* based on your data amount. +5. Go to *navigation menu* -> *Logging* -> *Service Connectors* +6. Click *Create Service Connector* +7. Provide *Connector Name*, *Description*, *Resource Compartment* +8. Select Source: Logging +9. 
Select Target: Streaming +10. (Optional) Configure *Log Group*, *Filters* or use custom search query to stream only logs that you need. +11. Configure Target - select the stream created before. +12. Click *Create*. ++Check the documentation to get more information about [Streaming](https://docs.oracle.com/en-us/iaas/Content/Streaming/home.htm) and [Service Connectors](https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/home.htm). +++**STEP 2 - Creating credentials for OCI REST API** ++Follow the documentation to [create Private Key and API Key Configuration File](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm). ++>**IMPORTANT:** Save the Private Key and API Key Configuration File created during this step as they will be used during the deployment step. +++**STEP 3 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the OCI data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as OCI API credentials, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the OCI data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key**, **User**, **Key_content**, **Pass_phrase**, **Fingerprint**, **Tenancy**, **Region**, **Message Endpoint**, **Stream Ocid** +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the OCI data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. OciAuditXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. 
For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + AzureSentinelWorkspaceId + AzureSentinelSharedKey + user + key_content + pass_phrase (Optional) + fingerprint + tenancy + region + Message Endpoint + StreamOcid + logAnalyticsUri (Optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://WORKSPACE_ID.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ocilogs?tab=Overview) in the Azure Marketplace. |
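To verify that the OCI Function App above is writing to the workspace, a minimal freshness check against the raw `OCI_Logs_CL` table (named in the connector attributes) could look like the sketch below; it relies only on the standard `TimeGenerated` column.

```kusto
// Latest OCI record and total volume over the last day
OCI_Logs_CL
| where TimeGenerated > ago(1d)
| summarize LastRecord = max(TimeGenerated), EventCount = count()
```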
sentinel | Proofpoint On Demand Email Security Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-on-demand-email-security-using-azure-functions.md | + + Title: "Proofpoint On Demand Email Security (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Proofpoint On Demand Email Security (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Proofpoint On Demand Email Security (using Azure Functions) connector for Microsoft Sentinel ++The Proofpoint On Demand Email Security data connector provides the capability to get Proofpoint on Demand Email Protection data. It allows users to check message traceability and to monitor email activity, threats, and data exfiltration by attackers and malicious insiders. The connector provides the ability to review events in your org on an accelerated basis and to get event log files in hourly increments for recent activity. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-proofpointpod-functionapp | +| **Kusto function alias** | ProofpointPOD | +| **Kusto function url** | https://aka.ms/sentinel-proofpointpod-parser | +| **Log Analytics table(s)** | ProofpointPOD_message_CL<br/> ProofpointPOD_maillog_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**Last ProofpointPOD message Events** + ```kusto +ProofpointPOD + + | where EventType == 'message' + + | sort by TimeGenerated desc + ``` ++**Last ProofpointPOD maillog Events** + ```kusto +ProofpointPOD + + | where EventType == 'maillog' + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Proofpoint On Demand Email Security (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Websocket API Credentials/permissions**: **ProofpointClusterID**, **ProofpointToken** are required. [See the documentation to learn more about API](https://proofpointcommunities.force.com/community/s/article/Proofpoint-on-Demand-Pod-Log-API). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Proofpoint Websocket API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-proofpointpod-parser) to create the Kusto function alias, **ProofpointPOD**. +++**STEP 1 - Configuration steps for the Proofpoint Websocket API** ++1. The Proofpoint Websocket API service requires a Remote Syslog Forwarding license. 
Please refer to the [documentation](https://proofpointcommunities.force.com/community/s/article/Proofpoint-on-Demand-Pod-Log-API) on how to enable and check the PoD Log API. +2. You must provide your cluster ID and security token. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Proofpoint On Demand Email Security data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Proofpoint POD Log API credentials, readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Proofpoint On Demand Email Security data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-proofpointpod-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **ProofpointClusterID**, **ProofpointToken** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Proofpoint On Demand Email Security data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-proofpointpod-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ProofpointXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. 
Add each of the following application settings individually, with their respective string values (case-sensitive): + ProofpointClusterID + ProofpointToken + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) + - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-proofpointpod?tab=Overview) in the Azure Marketplace. |
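For the Proofpoint POD connector above, the sketch below splits ingested volume by the `EventType` values ('message' and 'maillog') used in the query samples; it assumes the **ProofpointPOD** parser alias is in place.

```kusto
// Hourly Proofpoint POD volume split by event type
ProofpointPOD
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by EventType, bin(TimeGenerated, 1h)
| sort by TimeGenerated desc
```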
sentinel | Qualys Vulnerability Management Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vulnerability-management-using-azure-functions.md | + + Title: "Qualys Vulnerability Management (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Qualys Vulnerability Management (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Qualys Vulnerability Management (using Azure Functions) connector for Microsoft Sentinel ++The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) data connector provides the capability to ingest vulnerability host detection data into Microsoft Sentinel through the Qualys API. The connector provides visibility into host detection data from vulnerability scans. This connector provides Microsoft Sentinel the capability to view dashboards, create custom alerts, and improve investigation. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | apiUsername<br/>apiPassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>filterParameters<br/>timeInterval<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-QualysVM-functioncodeV2 | +| **Log Analytics table(s)** | QualysHostDetectionV2_CL<br/> QualysHostDetection_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Top 10 Qualys V2 Vulnerabilities detected** + ```kusto +QualysHostDetectionV2_CL + + | extend Vulnerability = tostring(QID_s) + + | summarize count() by Vulnerability + + | top 10 by count_ + ``` ++**Top 10 Vulnerabilities detected** + ```kusto +QualysHostDetection_CL + + | mv-expand todynamic(Detections_s) + + | extend Vulnerability = tostring(Detections_s.Results) + + | summarize count() by Vulnerability + + | top 10 by count_ + ``` ++++## Prerequisites ++To integrate with Qualys Vulnerability Management (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Qualys API Key**: A Qualys VM API username and password are required. [See the documentation to learn more about Qualys VM API](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to Qualys VM to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Qualys VM API** ++1. Log into the Qualys Vulnerability Management console with an administrator account, select the **Users** tab and the **Users** subtab. +2. Click on the **New** drop-down menu and select **Users..** +3. Create a username and password for the API account. +4. 
In the **User Roles** tab, ensure the account role is set to **Manager** and access is allowed to **GUI** and **API**. +5. Log out of the administrator account and log into the console with the new API credentials for validation, then log out of the API account. +6. Log back into the console using an administrator account and modify the API account's User Roles, removing access to **GUI**. +7. Save all changes. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Qualys VM connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Qualys VM API Authorization Key(s), readily available. +++++> [!NOTE] + > This connector has been updated. If you have previously deployed an earlier version and want to update, please delete the existing Qualys VM Azure Function before redeploying this version, and use the Qualys V2 version Workbook and detections. ++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Qualys VM connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-QualysVM-azuredeployV2) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, update the **URI**, and any additional URI **Filter Parameters** (each filter should be separated by an "&" symbol, no spaces.) +> - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348) -- There is no need to add a time suffix to the URI, the Function App will dynamically append the Time Value to the URI in the proper format. + - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. +> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Qualys VM connector manually with Azure Functions. +++**1. Create a Function App** ++1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**. +2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**. +3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected. +4. Make other preferable configuration changes, if needed, then click **Create**. +++**2. Import Function App Code** ++1. In the newly created Function App, select **Functions** on the left pane and click **+ New Function**. +2. Select **Timer Trigger**. +3. Enter a unique Function **Name** and leave the default cron schedule of every 5 minutes, then click **Create**. +4. 
Click on **Code + Test** on the left pane. +5. Copy the [Function App Code](https://aka.ms/sentinel-QualysVM-functioncodeV2) and paste into the Function App `run.ps1` editor. +6. Click **Save**. +++**3. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following eight (8) application settings individually, with their respective string values (case-sensitive): + apiUsername + apiPassword + workspaceID + workspaceKey + uri + filterParameters + timeInterval + logAnalyticsUri (optional) +> - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). The `uri` value must follow the following schema: `https://<API Server>/api/2.0/fo/asset/host/vm/detection/?action=list&vm_processed_after=` -- There is no need to add a time suffix to the URI, the Function App will dynamically append the Time Value to the URI in the proper format. +> - Add any additional filter parameters, for the `filterParameters` variable, that need to be appended to the URI. Each parameter should be separated by an "&" symbol and should not include any spaces. +> - Set the `timeInterval` (in minutes) to the value of `5` to correspond to the Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion. +> - Note: If using Azure Key Vault, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. +++**4. Configure the host.json**. ++The potentially large amount of Qualys host detection data being ingested can cause the execution time to surpass the default Function App timeout of five (5) minutes. Increase the default timeout duration to the maximum of ten (10) minutes, under the Consumption Plan, to allow more time for the Function App to execute. ++1. In the Function App, select the Function App Name and select the **App Service Editor** blade. +2. Click **Go** to open the editor, then select the **host.json** file under the **wwwroot** directory. +3. Add the line `"functionTimeout": "00:10:00",` above the `managedDependency` line. +4. Ensure **SAVED** appears on the top right corner of the editor, then exit the editor. ++> NOTE: If a longer timeout duration is required, consider upgrading to an [App Service Plan](/azure/azure-functions/functions-scale#timeout) ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-qualysvm?tab=Overview) in the Azure Marketplace. |
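Once the Qualys VM connector above is ingesting, a rough detection summary can be built directly on `QualysHostDetectionV2_CL`; this sketch reuses the `QID_s` field from the query samples and assumes a seven-day lookback.

```kusto
// Most frequently detected QIDs over the last seven days
QualysHostDetectionV2_CL
| where TimeGenerated > ago(7d)
| summarize Detections = count() by Vulnerability = tostring(QID_s)
| top 20 by Detections
```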
sentinel | Rapid7 Insight Platform Vulnerability Management Reports Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rapid7-insight-platform-vulnerability-management-reports-using-azure-functions.md | + + Title: "Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions) connector for Microsoft Sentinel ++The [Rapid7 Insight VM](https://www.rapid7.com/products/insightvm/) Report data connector provides the capability to ingest Scan reports and vulnerability data into Microsoft Sentinel through the REST API from the Rapid7 Insight platform (Managed in the cloud). Refer to [API documentation](https://docs.rapid7.com/insight/api-overview/) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | InsightVMAPIKey<br/>InsightVMCloudRegion<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Log Analytics table(s)** | NexposeInsightVMCloud_assets_CL<br/> NexposeInsightVMCloud_vulnerabilities_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Insight VM Report Events - Assets information** + ```kusto +NexposeInsightVMCloud_assets_CL + + | sort by TimeGenerated desc + ``` ++**Insight VM Report Events - Vulnerabilities information** + ```kusto +NexposeInsightVMCloud_vulnerabilities_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials**: **InsightVMAPIKey** is required for REST API. [See the documentation to learn more about API](https://docs.rapid7.com/insight/api-overview/). Check all [requirements and follow the instructions](https://docs.rapid7.com/insight/managing-platform-api-keys/) for obtaining credentials +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Insight VM API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. 
+++> [!NOTE] + > This data connector depends on parsers based on a Kusto Function to work as expected, [**InsightVMAssets**](https://aka.ms/sentinel-InsightVMAssets-parser) and [**InsightVMVulnerabilities**](https://aka.ms/sentinel-InsightVMVulnerabilities-parser), which are deployed with the Microsoft Sentinel Solution. +++**STEP 1 - Configuration steps for the Insight VM Cloud** ++ [Follow the instructions](https://docs.rapid7.com/insight/managing-platform-api-keys/) to obtain the credentials. ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Rapid7 Insight Vulnerability Management Report data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Rapid7 Insight Vulnerability Management Report data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-InsightVMCloudAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **InsightVMAPIKey**, choose **InsightVMCloudRegion** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Rapid7 Insight Vulnerability Management Report data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-InsightVMCloudAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ 1. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ 1. **Select Subscription:** Choose the subscription to use. ++ 1. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ 1. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. InsightVMXXXXX). ++ 1. **Select a runtime:** Choose Python 3.8. ++ 1. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. 
In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + + `InsightVMAPIKey` ++ `InsightVMCloudRegion` ++ `WorkspaceID` ++ `WorkspaceKey` ++ `logAnalyticsUri` (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-rapid7insightvm?tab=Overview) in the Azure Marketplace. |
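To confirm that both Rapid7 Insight VM report tables listed above are populated, a union over the raw tables can be used; this is a sketch that assumes only the standard `Type` and `TimeGenerated` Log Analytics columns.

```kusto
// Daily record counts per Insight VM report table
union NexposeInsightVMCloud_assets_CL, NexposeInsightVMCloud_vulnerabilities_CL
| where TimeGenerated > ago(7d)
| summarize Records = count() by Type, bin(TimeGenerated, 1d)
| sort by TimeGenerated desc
```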
sentinel | Sentinelone Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sentinelone-using-azure-functions.md | + + Title: "SentinelOne (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector SentinelOne (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# SentinelOne (using Azure Functions) connector for Microsoft Sentinel ++The [SentinelOne](https://www.sentinelone.com/) data connector provides the capability to ingest common SentinelOne server objects such as Threats, Agents, Applications, Activities, Policies, Groups, and more events into Microsoft Sentinel through the REST API. Refer to API documentation: `https://<SOneInstanceDomain>.sentinelone.net/api-doc/overview` for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | SentinelOneAPIToken<br/>SentinelOneUrl<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-SentinelOneAPI-functionapp | +| **Log Analytics table(s)** | SentinelOne_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**SentinelOne Events - All Activities.** + ```kusto +SentinelOne + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with SentinelOne (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **SentinelOneAPIToken** is required. See the documentation to learn more about API on the `https://<SOneInstanceDomain>.sentinelone.net/api-doc/overview`. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the SentinelOne API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias SentinelOne and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SentinelOne/Parsers/SentinelOne.txt). The function usually takes 10-15 minutes to activate after solution installation/update. +++**STEP 1 - Configuration steps for the SentinelOne API** ++ Follow the instructions to obtain the credentials. ++1. Log in to the SentinelOne Management Console with Admin user credentials. +2. 
In the Management Console, click **Settings**. +3. In the **SETTINGS** view, click **USERS**. +4. Click **New User**. +5. Enter the information for the new console user. +6. In Role, select **Admin**. +7. Click **SAVE**. +8. Save credentials of the new user for use in the data connector. +++**NOTE:** Admin access can be delegated using custom roles. Please review SentinelOne [documentation](https://www.sentinelone.com/blog/feature-spotlight-fully-custom-role-based-access-control/) to learn more about custom RBAC. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the SentinelOne data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the SentinelOne data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SentinelOneAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **SentinelOneAPIToken**, **SentinelOneUrl** `(https://<SOneInstanceDomain>.sentinelone.net)` and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the SentinelOne data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-SentinelOneAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. SOneXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. 
Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++ 1. In the Function App, select the Function App Name and select **Configuration**. ++ 2. In the **Application settings** tab, select **+ New application setting**. ++ 3. Add each of the following application settings individually, with their respective string values (case-sensitive): + SentinelOneAPIToken + SentinelOneUrl + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) ++> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. ++ 4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelone?tab=Overview) in the Azure Marketplace. |
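After the SentinelOne connector above is configured, an ingestion-freshness sketch such as the following can flag a stalled Function App; it assumes the **SentinelOne** parser alias and uses only the standard `TimeGenerated` column.

```kusto
// Check whether any SentinelOne event has arrived in the last two hours
SentinelOne
| summarize LastEvent = max(TimeGenerated)
| extend IngestionStalled = LastEvent < ago(2h)
```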
sentinel | Slack Audit Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/slack-audit-using-azure-functions.md | + + Title: "Slack Audit (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Slack Audit (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Slack Audit (using Azure Functions) connector for Microsoft Sentinel ++The [Slack](https://slack.com) Audit data connector provides the capability to ingest [Slack Audit Records](https://api.slack.com/admins/audit-logs) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://api.slack.com/admins/audit-logs#the_audit_event) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | SlackAPIBearerToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-SlackAuditAPI-functionapp | +| **Kusto function alias** | SlackAudit | +| **Kusto function url** | https://aka.ms/sentinel-SlackAuditAPI-parser | +| **Log Analytics table(s)** | SlackAudit_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Slack Audit Events - All Activities.** + ```kusto +SlackAudit + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Slack Audit (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **SlackAPIBearerToken** is required for REST API. [See the documentation to learn more about API](https://api.slack.com/web#authentication). Check all [requirements and follow the instructions](https://api.slack.com/web#authentication) for obtaining credentials. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Slack REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-SlackAuditAPI-parser) to create the Kusto functions alias, **SlackAudit** +++**STEP 1 - Configuration steps for the Slack API** ++ [Follow the instructions](https://api.slack.com/web#authentication) to obtain the credentials. 
++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Slack Audit data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Slack Audit data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SlackAuditAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group. +3. Enter the **SlackAPIBearerToken** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Slack Audit data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-SlackAuditAPI-functionapp) file. Extract archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure** +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option) ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. SlackAuditXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to Azure Portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + SlackAPIBearerToken + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-slackaudit?tab=Overview) in the Azure Marketplace. |
sentinel | Symantec Proxysg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-proxysg.md | Title: "Symantec ProxySG connector for Microsoft Sentinel" description: "Learn how to install the connector Symantec ProxySG to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 07/26/2023 Configure the facilities you want to collect and their severities. 3. Configure and connect the Symantec ProxySG -[Follow these instructions](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) to enable syslog streaming of **Access** Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. + + 1. Log in to the Blue Coat Management Console. + 2. Select Configuration > Access Logging > Formats. + 3. Select New. + 4. Enter a unique name in the Format Name field. + 5. Click the radio button for **Custom format string** and paste the following string into the field. + <p><code>date time time-taken c-ip cs-userdn cs-auth-groups x-exception-id sc-filter-result cs-categories cs(Referer) sc-status s-action cs-method rs(Content-Type) cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-uri-extension cs(User-Agent) s-ip sr-bytes rs-bytes x-virus-id x-bluecoat-application-name x-bluecoat-application-operation cs-uri-port x-cs-client-ip-country cs-threat-risk</code></p> + 6. Click the **OK** button. + 7. Click the **Apply** button. + 8. [Follow these instructions](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) to enable syslog streaming of **Access** Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. |
sentinel | Tenable Io Vulnerability Management Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md | Title: "Tenable.io Vulnerability Management (using Azure Functions) connector fo description: "Learn how to install the connector Tenable.io Vulnerability Management (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 07/26/2023 Tenable_IO_Assets_CL To integrate with Tenable.io Vulnerability Management (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) for obtaining credentials.+- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials. ## Vendor installation instructions To integrate with Tenable.io Vulnerability Management (using Azure Function) mak **STEP 1 - Configuration steps for Tenable.io** - [Follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) to obtain the required API credentials. + [Follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) to obtain the required API credentials. |
sentinel | Trend Vision One Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-vision-one-using-azure-functions.md | + + Title: "Trend Vision One (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Trend Vision One (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Trend Vision One (using Azure Functions) connector for Microsoft Sentinel ++The [Trend Vision One](https://www.trendmicro.com/en_us/business/products/detection-response/xdr.html) connector allows you to easily connect your Workbench alert data with Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities. This gives you more insight into your organization's networks/systems and improves your security operation capabilities. ++The Trend Vision One connector is supported in Microsoft Sentinel in the following regions: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, East Asia, East US, East US 2, France Central, Japan East, Korea Central, North Central US, North Europe, Norway East, South Africa North, South Central US, Southeast Asia, Sweden Central, Switzerland North, UAE North, UK South, UK West, West Europe, West US, West US 2, West US 3. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | TrendMicro_XDR_WORKBENCH_CL<br/> TrendMicro_XDR_RCA_Task_CL<br/> TrendMicro_XDR_RCA_Result_CL<br/> TrendMicro_XDR_OAT_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/?language=en_US) | ++## Query samples ++**Critical & High Severity Workbench Alerts** + ```kusto +TrendMicro_XDR_WORKBENCH_CL + + | where severity_s == 'critical' or severity_s == 'high' + ``` ++**Medium & Low Severity Workbench Alerts** + ```kusto +TrendMicro_XDR_WORKBENCH_CL + + | where severity_s == 'medium' or severity_s == 'low' + ``` ++++## Prerequisites ++To integrate with Trend Vision One (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **Trend Vision One API Token**: A Trend Vision One API Token is required. See the documentation to learn more about the [Trend Vision One API](https://automation.trendmicro.com/xdr/home). +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Trend Vision One API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Trend Vision One API** ++ [Follow these instructions](https://docs.trendmicro.com/en-us/enterprise/trend-micro-xdr-help/ObtainingAPIKeys) to create an account and an API authentication token. 
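If you follow the optional Key Vault step above, the API token can be stored as a secret before deployment; the `@Microsoft.KeyVault(SecretUri={Security Identifier})` reference mentioned below then points at that secret. This is only a hedged sketch: the vault name and secret name are placeholders, not values defined by this connector.

```azurecli
# Hedged sketch: store the Trend Vision One API token as a Key Vault secret.
# "MyKeyVault" and "TrendVisionOneApiToken" are placeholder names; supply your own token value.
az keyvault secret set \
    --vault-name MyKeyVault \
    --name TrendVisionOneApiToken \
    --value "<your-api-token>"
```

The `id` field in the command's output is the secret URI to substitute for `{Security Identifier}` in the Key Vault reference.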
+++**STEP 2 - Use the deployment option below to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Trend Vision One connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Trend Vision One API Authorization Token, readily available. ++++Azure Resource Manager (ARM) Template Deployment ++This method provides an automated deployment of the Trend Vision One connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-trendmicroxdr-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter a unique **Function Name**, **Workspace ID**, **Workspace Key**, **API Token** and **Region Code**. + - Note: Provide the appropriate region code based on where your Trend Vision One instance is deployed: us, eu, au, in, sg, jp. + - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_vision_one_xdr_mss?tab=Overview) in the Azure Marketplace. |
sentinel | Vmware Carbon Black Cloud Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-carbon-black-cloud-using-azure-functions.md | + + Title: "VMware Carbon Black Cloud (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector VMware Carbon Black Cloud (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# VMware Carbon Black Cloud (using Azure Functions) connector for Microsoft Sentinel ++The [VMware Carbon Black Cloud](https://www.vmware.com/products/carbon-black-cloud.html) connector provides the capability to ingest Carbon Black data into Microsoft Sentinel. The connector provides visibility into Audit, Notification and Event logs in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | apiId<br/>apiKey<br/>workspaceID<br/>workspaceKey<br/>uri<br/>timeInterval<br/>CarbonBlackOrgKey<br/>CarbonBlackLogTypes<br/>s3BucketName<br/>EventPrefixFolderName<br/>AlertPrefixFolderName<br/>AWSAccessKeyId<br/>AWSSecretAccessKey<br/>SIEMapiId (Optional)<br/>SIEMapiKey (Optional)<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinelcarbonblackazurefunctioncode | +| **Log Analytics table(s)** | CarbonBlackEvents_CL<br/> CarbonBlackAuditLogs_CL<br/> CarbonBlackNotifications_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft](https://support.microsoft.com/) | ++## Query samples ++**Top 10 Event Generating Endpoints** + ```kusto +CarbonBlackEvents_CL + + | summarize count() by deviceDetails_deviceName_s ++ | top 10 by count_ + ``` ++**Top 10 User Console Logins** + ```kusto +CarbonBlackAuditLogs_CL + + | summarize count() by loginName_s ++ | top 10 by count_ + ``` ++**Top 10 Threats** + ```kusto +CarbonBlackNotifications_CL + + | summarize count() by threatHunterInfo_reportName_s ++ | top 10 by count_ + ``` ++++## Prerequisites ++To integrate with VMware Carbon Black Cloud (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **VMware Carbon Black API Key(s)**: Carbon Black API and/or SIEM Level API Key(s) are required. See the documentation to learn more about the [Carbon Black API](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/). + - A Carbon Black **API** access level API ID and Key is required for [Audit](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/#audit-log-events) and [Event](https://developer.carbonblack.com/reference/carbon-black-cloud/platform/latest/data-forwarder-config-api/) logs. + - A Carbon Black **SIEM** access level API ID and Key is required for [Notification](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/#notifications) alerts. +- **Amazon S3 REST API Credentials/permissions**: **AWS Access Key Id**, **AWS Secret Access Key**, **AWS S3 Bucket Name**, **Folder Name in AWS S3 Bucket** are required for Amazon S3 REST API. 
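The first prerequisite covers the permissions needed to create the Function App used later in this article (Option 2 creates it in the Azure portal with the PowerShell Core runtime on a Consumption plan). Purely as a hedged alternative sketch, the same kind of app could be created from the Azure CLI; every name below is a placeholder, and the Functions version shown is an assumption.

```azurecli
# Hedged sketch: create a PowerShell Function App on a Consumption plan.
# Resource group, app name, storage account, and location are placeholders.
az functionapp create \
    --resource-group MyResourceGroup \
    --name carbonblack-connector-func \
    --storage-account mycbstorageacct \
    --consumption-plan-location eastus \
    --runtime powershell \
    --functions-version 4
```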
+++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to VMware Carbon Black to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the VMware Carbon Black API** ++ [Follow these instructions](https://developer.carbonblack.com/reference/carbon-black-cloud/authentication/#creating-an-api-key) to create an API Key. +++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the VMware Carbon Black connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the VMware Carbon Black API Authorization Key(s), readily available. ++++Option 1 - Azure Resource Manager (ARM) Template ++This method provides an automated deployment of the VMware Carbon Black connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinelcarbonblackazuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +3. Enter the **Workspace ID**, **Workspace Key**, **Log Types**, **API ID(s)**, **API Key(s)**, **Carbon Black Org Key**, **S3 Bucket Name**, **AWS Access Key Id**, **AWS Secret Access Key**, **EventPrefixFolderName**, **AlertPrefixFolderName**, and validate the **URI**. +> - Enter the URI that corresponds to your region. The complete list of API URLs can be [found here](https://community.carbonblack.com/t5/Knowledge-Base/PSC-What-URLs-are-used-to-access-the-APIs/ta-p/67346). + - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. + - Carbon Black requires a separate set of API ID/Keys to ingest Notification alerts. Enter the SIEM API ID/Key values or leave blank, if not required. +> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the VMware Carbon Black connector manually with Azure Functions. +++**1. Create a Function App** ++1. From the Azure portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**. +2. In the **Basics** tab, ensure Runtime stack is set to **PowerShell Core**. +3. 
In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected. +4. Make other preferable configuration changes, if needed, then click **Create**. +++**2. Import Function App Code** ++1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**. +2. Select **Timer Trigger**. +3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data.) Then click **Create**. +4. Click on **Code + Test** on the left pane. +5. Copy the [Function App Code](https://aka.ms/sentinelcarbonblackazurefunctioncode) and paste it into the Function App `run.ps1` editor. +6. Click **Save**. +++**3. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following thirteen to sixteen (13-16) application settings individually, with their respective string values (case-sensitive): + apiId + apiKey + workspaceID + workspaceKey + uri + timeInterval + CarbonBlackOrgKey + CarbonBlackLogTypes + s3BucketName + EventPrefixFolderName + AlertPrefixFolderName + AWSAccessKeyId + AWSSecretAccessKey + SIEMapiId (Optional) + SIEMapiKey (Optional) + logAnalyticsUri (optional) +> - Enter the URI that corresponds to your region. The complete list of API URLs can be [found here](https://community.carbonblack.com/t5/Knowledge-Base/PSC-What-URLs-are-used-to-access-the-APIs/ta-p/67346). The `uri` value must use the following schema: `https://<API URL>.conferdeploy.net` - There is no need to add a time suffix to the URI; the Function App will dynamically append the Time Value to the URI in the proper format. +> - Set the `timeInterval` (in minutes) to the default value of `5` to correspond to the default Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion. +> - Carbon Black requires a separate set of API ID/Keys to ingest Notification alerts. Enter the `SIEMapiId` and `SIEMapiKey` values, if needed, or omit, if not required. +> - Note: If using Azure Key Vault, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details. +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us` +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-vmwarecarbonblack?tab=Overview) in the Azure Marketplace. |
sentinel | Zero Networks Segment Audit Function Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zero-networks-segment-audit-function-using-azure-functions.md | + + Title: "Zero Networks Segment Audit (Function) (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Zero Networks Segment Audit (Function) (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Zero Networks Segment Audit (Function) (using Azure Functions) connector for Microsoft Sentinel ++The [Zero Networks Segment](https://zeronetworks.com/product/) Audit data connector provides the capability to ingest Audit events into Microsoft Sentinel through the REST API. Refer to the API guide for more information. The connector provides the ability to get events, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | APIToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional)<br/>uri<br/>tableName | +| **Azure function app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip | +| **Log Analytics table(s)** | ZNSegmentAudit_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Zero Networks](https://zeronetworks.com) | ++## Query samples ++**Zero Networks Segment Audit - All Activities** + ```kusto +ZNSegmentAudit_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Zero Networks Segment Audit (Function) (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials**: **Zero Networks Segment** **API Token** is required for REST API. See the API Guide. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Zero Networks REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++**STEP 1 - Configuration steps for the Zero Networks API** ++ See the API Guide to obtain the credentials. ++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Zero Networks Segment Audit data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Zero Networks Segment Audit data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. 
++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ZeroNetworks-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group. +3. Enter the **APIToken** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Zero Networks Segment Audit data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-powershell#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip) file. Extract the archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from the extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**. +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option). ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ZNSegmentAuditXXXXX). ++ e. **Select a runtime:** Choose PowerShell. ++ f. Select a location for new resources. For better performance and lower costs, choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to the Azure portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + APIToken + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) + uri + tableName +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. 
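As an alternative to adding the settings one at a time in the portal, the same application settings can be applied in a single Azure CLI call. This is a hedged sketch only; the function app name and resource group are placeholders, and the angle-bracket values stand in for your own.

```azurecli
# Hedged sketch: apply the connector's application settings in one call.
# The app name and resource group are placeholders; replace the <...> values with your own.
az functionapp config appsettings set \
    --name zn-segment-audit-func \
    --resource-group MyResourceGroup \
    --settings "APIToken=<api-token>" \
               "WorkspaceID=<workspace-id>" \
               "WorkspaceKey=<workspace-key>" \
               "uri=<zero-networks-api-uri>" \
               "tableName=<table-name>"
```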
++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zeronetworksltd1629013803351.azure-sentinel-solution-znsegmentaudit?tab=Overview) in the Azure Marketplace. |
sentinel | Zoom Reports Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zoom-reports-using-azure-functions.md | + + Title: "Zoom Reports (using Azure Functions) connector for Microsoft Sentinel" +description: "Learn how to install the connector Zoom Reports (using Azure Functions) to connect your data source to Microsoft Sentinel." ++ Last updated : 07/26/2023+++++# Zoom Reports (using Azure Functions) connector for Microsoft Sentinel ++The [Zoom](https://zoom.us/) Reports data connector provides the capability to ingest [Zoom Reports](https://developers.zoom.us/docs/api/rest/reference/zoom-api/methods/#tag/Reports) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://developers.zoom.us/docs/api/) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Application settings** | ZoomApiKey<br/>ZoomApiSecret<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | +| **Azure function app code** | https://aka.ms/sentinel-ZoomAPI-functionapp | +| **Kusto function alias** | Zoom | +| **Kusto function url** | https://aka.ms/sentinel-ZoomAPI-parser | +| **Log Analytics table(s)** | Zoom_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**Zoom Events - All Activities.** + ```kusto +Zoom_CL + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Zoom Reports (using Azure Functions) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts). Check all [requirements and follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts) for obtaining credentials. +++## Vendor installation instructions +++> [!NOTE] + > This connector uses Azure Functions to connect to the Zoom API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. +++>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. +++> [!NOTE] + > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-ZoomAPI-parser) to create the Kusto functions alias, **Zoom** +++**STEP 1 - Configuration steps for the Zoom API** ++ [Follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts) to obtain the credentials. 
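If you later take the manual route (Option 2 under STEP 2), the downloaded function package can also be pushed with a zip deployment from the Azure CLI instead of VS Code. This is a hedged sketch, not part of the official instructions: it assumes a Python function app already exists, that the package was saved locally as `sentinel-ZoomAPI-functionapp.zip` (a placeholder file name), and that a remote build may still be needed for Python dependencies (for example, by setting `SCM_DO_BUILD_DURING_DEPLOYMENT=true`).

```azurecli
# Hedged sketch: zip-deploy the downloaded function package to an existing function app.
# The app name, resource group, and local file name are placeholders.
az functionapp deployment source config-zip \
    --name zoom-reports-func \
    --resource-group MyResourceGroup \
    --src sentinel-ZoomAPI-functionapp.zip
```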
++++**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function** ++>**IMPORTANT:** Before deploying the Zoom Reports data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following). ++++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Zoom Reports data connector using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ZoomAPI-azuredeploy) +2. Select the preferred **Subscription**, **Resource Group** and **Location**. +> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group. +3. Enter the **ZoomApiKey**, **ZoomApiSecret** and deploy. +4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +5. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Zoom Reports data connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy a Function App** ++> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development. ++1. Download the [Azure Function App](https://aka.ms/sentinel-ZoomAPI-functionapp) file. Extract the archive to your local development computer. +2. Start VS Code. Choose File in the main menu and select Open Folder. +3. Select the top level folder from the extracted files. +4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button. +If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**. +If you're already signed in, go to the next step. +5. Provide the following information at the prompts: ++ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app. ++ b. **Select Subscription:** Choose the subscription to use. ++ c. Select **Create new Function App in Azure** (Don't choose the Advanced option). ++ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ZoomXXXXX). ++ e. **Select a runtime:** Choose Python 3.8. ++ f. Select a location for new resources. For better performance and lower costs, choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located. ++6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. +7. Go to the Azure portal for the Function App configuration. +++**2. Configure the Function App** ++1. In the Function App, select the Function App Name and select **Configuration**. +2. In the **Application settings** tab, select **+ New application setting**. +3. Add each of the following application settings individually, with their respective string values (case-sensitive): + ZoomApiKey + ZoomApiSecret + WorkspaceID + WorkspaceKey + logAnalyticsUri (optional) +> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. 
For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. +4. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-zoomreports?tab=Overview) in the Azure Marketplace. |
service-fabric | How To Managed Cluster Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-nat-gateway.md | + + Title: Configure a Service Fabric managed cluster to use a NAT gateway +description: Use a NAT gateway on your Service Fabric managed cluster to provide internet access to cluster resources without exposing them directly to the internet. +++++ Last updated : 07/24/2023+++# Use a NAT gateway on a Service Fabric managed cluster ++Service Fabric managed clusters have external-facing IPs that allow external clients to access the resources of the cluster. However, in some scenarios, it may be preferable to provide internet access to these resources without exposing them directly to the internet. NAT gateways enable this scenario. ++If your cluster has resources that need to receive inbound traffic from the internet but also has private resources that need to be protected, a NAT gateway can help. Additionally, if you have applications that need to make connections outside of the cluster to access secrets, storage, and other private resources, a NAT gateway can help. ++Here are some of the benefits of using a NAT gateway for your managed cluster: +* Improved security: Azure NAT Gateway is built on the zero trust network security model and is secure by default. With a NAT gateway, private instances within a subnet don't need public IP addresses to reach the internet. Private resources can reach external sources outside the virtual network by Source Network Address Translating (SNAT) to the NAT gateway's static public IP addresses or prefixes. You can provide a contiguous set of IPs for outbound connectivity by using a public IP prefix, and you can configure destination firewall rules based on this predictable IP list. +* Resiliency: Azure NAT Gateway is a fully managed and distributed service. It doesn't depend on individual compute instances such as VMs or a single physical gateway device. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage. Software-defined networking makes a NAT gateway highly resilient. +* Simplified network architecture: NAT gateways allow you to simplify your network architecture by eliminating the need for a bastion host or VPN connection to access instances in private subnets. +* Performance: Azure NAT Gateway is [performant and stable](../nat-gateway/nat-gateway-resource.md#performance). ++The following diagram depicts a cluster with a primary and secondary node type where each node type has its own subnet. The secondary node type is placed behind a NAT gateway, and all its outgoing traffic is routed through the gateway. When traffic originates from the secondary node type, the public IP address is that of the NAT gateway. Because all outgoing requests are routed through the NAT gateway, you can implement additional NSG rules, which improve security and prevent external services from discovering internal services. ++![Diagram depicting a cluster using a NAT gateway to handle outgoing traffic.](media/how-to-managed-cluster-nat-gateway/nat-gateway-scenario-diagram.png) ++The following scenarios are supported use cases for NAT gateways on Service Fabric managed clusters: +* Customers can attach a NAT gateway to any node type and subnet configuration under the [Bring your own virtual network section of the Configure managed cluster network settings article](how-to-managed-cluster-networking.md#bring-your-own-virtual-network). 
+* Customers can attach a NAT gateway to secondary node types using a dedicated subnet as outlined in the [Bring your own Azure Load Balancer section of the Configure managed cluster network settings article](how-to-managed-cluster-networking.md#bring-your-own-azure-load-balancer). When you add your own load balancer and NAT gateway, you get increased control over your network traffic. ++## Prerequisites ++For your scenario, make sure you follow the steps to configure your managed cluster's network properly. ++* [Bring your own virtual network](how-to-managed-cluster-networking.md#bring-your-own-virtual-network) +* [Bring your own Azure Load Balancer](how-to-managed-cluster-networking.md#bring-your-own-azure-load-balancer) +++## Bring your own virtual network with NAT gateway ++The following steps describe how to attach a NAT gateway to your virtual network subnets. ++1. Follow the steps in the [Azure NAT Gateway quickstart](../nat-gateway/quickstart-create-nat-gateway-portal.md) to create a NAT gateway. ++1. Provide the Service Fabric resource provider permission to modify the NAT gateway's settings using role assignment. Follow the first two steps in [Bring your own virtual network section of the Configure managed cluster network settings article](how-to-managed-cluster-networking.md#bring-your-own-virtual-network), injecting your NAT gateway's information into subnet parameters. ++1. Now, you're ready to attach the NAT gateway to your virtual network's subnet. You can use an ARM template, the Azure CLI, Azure PowerShell, or the Azure portal. ++### ARM template + +Modify and deploy the following ARM template to introduce the NAT gateway into your subnet's properties: ++```json +{ + "apiVersion": "[variables('networkApiVersion')]", + "type": "Microsoft.Network/virtualNetworks", + "name": "[parameters('vnetName')]", + "location": "[resourceGroup().location]", + "dependsOn": [ + "[parameters('natGatewayId')]" + ], + "properties": { + "subnets": [ + { + "name": "[parameters('subnetName')]", + "properties": { + "addressPrefix": "[parameters('subnetAddressPrefix')]", + "natGateway": { + "id": "[parameters('natGatewayId')]" + } + } + } + ] + } +} +``` ++### Azure CLI ++Modify and run the following Azure CLI command with your information: ++```azurecli +az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name mySubnet --nat-gateway myNATGateway +``` ++### Azure PowerShell ++1. Place the virtual network into a variable. ++ ```powershell + $net = @{ + Name = 'myVNet' + ResourceGroupName = 'myResourceGroup' + } + $vnet = Get-AzVirtualNetwork @net + ``` ++1. Place the NAT gateway into a variable. ++ ```powershell + $nat = @{ + Name = 'myNATgateway' + ResourceGroupName = 'myResourceGroup' + } + $natGateway = Get-AzNatGateway @nat + ``` ++1. Set the subnet configuration. ++ ```powershell + $subnet = @{ + Name = 'mySubnet' + VirtualNetwork = $vnet + NatGateway = $natGateway + AddressPrefix = '10.0.2.0/24' + } + Set-AzVirtualNetworkSubnetConfig @subnet + ``` ++1. Save the configuration to the virtual network. ++ ```powershell + $vnet | Set-AzVirtualNetwork + ``` ++### Azure portal ++1. On the [Azure portal](https://portal.azure.com), navigate to your virtual network resource. ++1. Under **Settings**, select **Subnets**. ++1. Select the subnet you want to associate with your NAT gateway. ++1. Open the **NAT gateway** dropdown and select your NAT gateway. 
++ ![Screenshot showing the dropdown for selecting your NAT gateway.](media/how-to-managed-cluster-nat-gateway/attach-nat-gateway-portal.png) ++1. Click **Save**. ++## Bring your own load balancer with Azure NAT Gateway ++The following steps describe how to attach a NAT gateway to your virtual network subnets. ++> [!NOTE] +> This scenario is only supported via ARM template. ++1. Follow the steps in the [Azure NAT Gateway quickstart](../nat-gateway/quickstart-create-nat-gateway-portal.md) to create a NAT gateway. ++1. Provide the Service Fabric resource provider permission to modify the NAT gateway's settings using role assignment. Follow the first two steps in [Bring your own virtual network section of the Configure managed cluster network settings article](how-to-managed-cluster-networking.md#bring-your-own-virtual-network), injecting your NAT gateway's information into subnet parameters. ++1. Add the following property to your deployment to attach the NAT gateway to your dedicated subnet: ++```json +{ + "apiVersion": "2023-03-01-preview", + "type": "Microsoft.ServiceFabric/managedclusters/nodetypes", + "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]", + "location": "[parameters('clusterLocation')]", + "properties": { + ... + "isPrimary": false, + "natGatewayId": "[variables('natID')]", + "frontendConfigurations": [...], + ... +} +``` ++## Next steps ++* Review the [Service Fabric managed cluster networking scenarios](how-to-managed-cluster-networking.md) outlined in this article. +* Review [Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md). + |
service-fabric | Service Fabric Application Arm Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-arm-resource.md | |
service-health | Resource Health Checks Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md | Below is a complete list of all the checks executed through resource health by r ## Microsoft.cognitiveservices/accounts |Executed Checks| ||-|<ul><li>Can the account be reached from within the datacenter?</li><li>Is the Cognitive Services Resource Provider available?</li><li>Is the Cognitive Service available in the appropriate region?</li><li>Can read operations be performed on the storage account holding the resource metadata?</li><li>Has the API call quota been reached?</li><li>Has the API call read-limit been reached?</li></ul>| +|<ul><li>Can the account be reached from within the datacenter?</li><li>Is the Azure AI services resource provider available?</li><li>Is the Cognitive Service available in the appropriate region?</li><li>Can read operations be performed on the storage account holding the resource metadata?</li><li>Has the API call quota been reached?</li><li>Has the API call read-limit been reached?</li></ul>| ## Microsoft.compute/hostgroups/hosts |Executed Checks| |
site-recovery | Azure To Azure How To Enable Zone To Zone Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md | -> [!NOTE] -> -> - Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, Qatar Central, UK South, West Europe, North Europe, Germany West Central, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South, West US 3 and UAE North. -> -> -> - Site Recovery does not move or store customer data out of the region in which it's deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. -> -> -> - Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks. - Site Recovery service contributes to your business continuity and disaster recovery strategy by keeping your business apps up and running, during planned and unplanned outages. It is the recommended Disaster Recovery option to keep your applications up and running if there are regional outages. Availability Zones are unique physical locations within an Azure region. Each zone has one or more datacenters. If you want to move VMs to an availability zone in a different region, [review this article](../resource-mover/move-region-availability-zone.md). +## Supported regions for Zone to Zone Disaster Recovery ++Support for Zone to Zone disaster recovery is currently limited to the following regions: ++| **Americas** | **Europe** | **Middle East** | **Africa** | **APAC** | +|--|--|--|--|--| +| Canada Central | UK South | Qatar Central | South Africa North | Southeast Asia | +| US Gov Virginia | West Europe | | | East Asia | +| Central US | North Europe | UAE North | | Japan East | +| South Central US | Germany West Central | | | Korea Central | +| East US | Norway East | | | Australia East | +| East US 2 | France Central | | | Central India | +| West US 2 | Switzerland North | | | China North 3 | +| West US 3 | Sweden Central (Managed Access) | | | | +| Brazil South | Poland Central | | | | +| | Italy North | | | | + +Site Recovery does not move or store customer data out of the region in which it's deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. ++>[!Note] +>Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks. + ## Using Availability Zones for Disaster Recovery Typically, Availability Zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution in natural disaster. |
site-recovery | Vmware Physical Mobility Service Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md | Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat 3. After the installation is finished, the Mobility service must be registered to the configuration server. Run the following command to register the Mobility service with the configuration server. ```bash- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <CSIP> -P /var/passphrase.txt + /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <CSIP> -P /var/passphrase.txt -c CSLegacy ``` #### Installation settings |
spring-apps | Quickstart Deploy Microservice Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-microservice-apps.md | This article explains how to deploy microservice applications to Azure Spring Ap The diagram shows the following architectural flows and relationships of the Pet Clinic sample: -- Uses Azure Spring Apps to manage the Spring Boot apps.+- Uses Azure Spring Apps to manage the Spring Boot apps. Each app uses HSQLDB as the persistent store. - Uses the managed components Spring Cloud Config Server and Eureka Service Discovery on Azure Spring Apps. The Config Server reads Git repository configuration. - Exposes the URL of API Gateway to load balance requests to service apps, and exposes the URL of the Admin Server to manage the applications. - Analyzes logs using the Log Analytics workspace. - Monitors performance with Application Insights. +> [!NOTE] +> This article uses a simplified version of PetClinic that relies on an in-memory database, which isn't production-ready, so that you can deploy to Azure Spring Apps quickly. +> +> The deployed app `admin-server` is publicly accessible, which is a security risk. In a production environment, secure the Spring Boot Admin application. + ## 1. Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] Open the URL exposed by the app `admin-server` to manage the applications throug ## 7. Next steps +> [!div class="nextstepaction"] +> [Quickstart: Integrate with Azure Database for MySQL](./quickstart-integrate-azure-database-mysql.md) ++> [!div class="nextstepaction"] +> [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md) ++> [!div class="nextstepaction"] +> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md) ++> [!div class="nextstepaction"] +> [Structured application log for Azure Spring Apps](./structured-app-log.md) ++> [!div class="nextstepaction"] +> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md) + > [!div class="nextstepaction"] > [Quickstart: Using Log Analytics with Azure Spring Apps](./quickstart-setup-log-analytics.md) Open the URL exposed by the app `admin-server` to manage the applications throug ## 7. Next steps > [!div class="nextstepaction"] > [Quickstart: Monitoring with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md) > [!div class="nextstepaction"]-> [Quickstart: Integrate with Azure Database for MySQL](./quickstart-integrate-azure-database-mysql.md) +> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md) ++> [!div class="nextstepaction"] +> [Quickstart: Introduction to the sample app - Azure Spring Apps](./quickstart-sample-app-introduction.md) ++> [!div class="nextstepaction"] +> [Introduction to the Fitness Store sample app](./quickstart-sample-app-acme-fitness-store-introduction.md) For more information, see the following articles: |
static-web-apps | Application Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md | Title: Configure application settings for Azure Static Web Apps description: Learn how to configure application settings for Azure Static Web Apps. -+ Last updated 01/10/2023-+ |
static-web-apps | Database Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/database-configuration.md | The following code shows you how to use a folder named *db-config* for the datab app_location: "/src" api_location: "api" output_location: "/dist"-data_api_location: "db-config" # Folder holding the staticwebapps.database.config.json file +data_api_location: "db-config" # Folder holding the staticwebapp.database.config.json file ``` |
static-web-apps | Database Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/database-overview.md | Here's an example command that starts the SWA CLI with a database connection: swa start ./src --data-api-location swa-db-connections ``` -This command starts the SWA CLI in the *src* directory. The `--data-api-location` option tells the CLI that a folder named *swa-db-connections* holds the *[staticwebapps.database.config.json](https://github.com/MicrosoftDocs/data-api-builder-docs/blob/main/data-api-builder/configuration-file.md)* file. +This command starts the SWA CLI in the *src* directory. The `--data-api-location` option tells the CLI that a folder named *swa-db-connections* holds the *[staticwebapp.database.config.json](https://github.com/MicrosoftDocs/data-api-builder-docs/blob/main/data-api-builder/configuration-file.md)* file. > [!NOTE] > In development, if you use a connection string to authenticate, use the `env()` function to read a connection string from an environment variable. The string passed in to the `env` function must be surrounded by quotes. |
static-web-apps | Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/private-endpoint.md | Title: Configure private endpoint in Azure Static Web Apps description: Learn to configure private endpoint access for Azure Static Web Apps --++ Last updated 7/28/2021 Since your application is no longer publicly available, the only way to access i ## Next steps > [!div class="nextstepaction"]-> [Learn more about private endpoints](../private-link/private-endpoint-overview.md) +> [Learn more about private endpoints](../private-link/private-endpoint-overview.md) |
storage | Blob Upload Function Trigger Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md | Title: Upload and analyze a file with Azure Functions (JavaScript) and Blob Storage -description: With JavaScript, learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Cognitive Services +description: With JavaScript, learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Azure AI services In this tutorial, learn how to: > [!div class="checklist"] > * Upload images and files to Blob Storage > * Use an Azure Function event trigger to process data uploaded to Blob Storage-> * Use Cognitive Services to analyze an image +> * Use Azure AI services to analyze an image > * Write data to Cosmos DB using Azure Function output bindings :::image type="content" source="./media/blob-upload-storage-function/functions-storage-database-architectural-diagram.png" alt-text="Architectural diagram showing an image blob is added to Blob Storage, then analyzed by an Azure Function, with the analysis inserted into a Cosmos DB."::: If you're not going to continue to use this application, you can delete the reso ## Next steps * [Create a function app that connects to Azure services using identities instead of secrets](/azure/azure-functions/functions-identity-based-connections-tutorial)-* [Remediating anonymous public read access for blob data](/azure/storage/blobs/anonymous-read-access-overview) +* [Remediating anonymous public read access for blob data](/azure/storage/blobs/anonymous-read-access-overview) |
storage | Blob Upload Function Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md | Title: Upload and analyze a file with Azure Functions and Blob Storage -description: Learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Cognitive Services +description: Learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Azure AI services In this tutorial, you will learn how to: > [!div class="checklist"] > - Upload images and files to Blob Storage > - Use an Azure Function event trigger to process data uploaded to Blob Storage-> - Use Cognitive Services to analyze an image +> - Use Azure AI services to analyze an image > - Write data to Table Storage using Azure Function output bindings ## Prerequisites Sign in to the [Azure portal](https://portal.azure.com/#create/Microsoft.Storage 1. On the navigation panel, choose **Containers**. -1. On the **Containers** page, select **+ Container** at the top. In the slide out panel, enter a **Name** of *imageanalysis*, and make sure the **Public access level** is set to **Blob (anonymous read access for blobs only**. Then select **Create**. +1. On the **Containers** page, select **+ Container** at the top. In the slide out panel, enter a **Name** of *imageanalysis*, and make sure the **Public access level** is set to **Blob (anonymous read access for blobs only)**. Then select **Create**. :::image type="content" source="./media/blob-upload-storage-function/portal-container-create-small.png" alt-text="A screenshot showing how to create a new storage container." lightbox="media/blob-upload-storage-function/portal-container-create.png"::: Copy the value of the `connectionString` property and paste it somewhere to use ## Create the Computer Vision service -Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../ai-services/computer-vision/overview.md). +Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure AI services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../ai-services/computer-vision/overview.md). ### [Azure portal](#tab/azure-portal) |
storage | Storage Blob Use Access Tier Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md | |
storage | Storage Blob Use Access Tier Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md | |
storage | Azure Defender Storage Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md | Learn more about Microsoft Defender for Storage [capabilities](../../defender-fo |Aspect|Details| |-|:-|-|Release state:|General availability (GA)| -|Feature availability:|- Activity monitoring (security alerts) - General availability (GA)<br>- Malware Scanning – Preview<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview| -|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\*<br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* In the future, Malware Scanning will be priced at $0.15/GB of data ingested. Billing for Malware Scanning is not enabled during public preview and advanced notice will be given before billing starts.| +|Release state:|General Availability (GA)| +|Feature availability:|- Activity monitoring (security alerts) - General Availability (GA)<br>- Malware Scanning – Preview, **General Availability (GA) on September 1, 2023** <br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview| +|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\*<br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Malware Scanning is offered for free during the public preview but will **start being billed on September 1, 2023, at $0.15/GB (USD) of data ingested.** Customers are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per month per storage account and control costs using this feature.| | Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring | |Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.| |Clouds:|:::image type="icon" source="../../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the classic plan)<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts| |
storage | Classic Account Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md | For more information about the advantages of using Azure Resource Manager, see [ ## What happens if I don't migrate my accounts? -Starting on September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager. +Starting on September 1, 2024, customers will no longer be able to manage classic storage accounts using Azure Service Manager. Any data still contained in these accounts will be preserved. -If your applications are using Azure Service Manager classic APIs to access classic accounts, then those applications will no longer be able to access those storage accounts after August 31, 2024. +If your applications are using Azure Service Manager classic APIs to manage classic accounts, then those applications will no longer be able to manage those storage accounts after August 31, 2024. > [!WARNING]-> If you do not migrate your classic storage accounts to Azure Resource Manager by August 31, 2024, you will permanently lose access to the data in those accounts. +> If you do not migrate your classic storage account to Azure Resource Manager by August 31, 2024, you will no longer be able to perform management operations through Azure Service Manager. ## What actions should I take? For step-by-step instructions for migrating your classic storage accounts, see [ Depending on when your subscription was created, you may no longer be able to create classic storage accounts: - Subscriptions created after August 31, 2022 can no longer create classic storage accounts.-- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023+- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023. +- Also, beginning August 31, 2022, the ability to create classic storage accounts has been discontinued in additional phases based on the last time a classic storage account was created. We recommend creating storage accounts only in Azure Resource Manager from this point forward. ### What happens to existing classic storage accounts after August 31, 2024? -After August 31, 2024, you'll no longer be able to access data in your classic storage accounts or manage them. It won't be possible to migrate a classic storage account after August 31, 2024. --### Can Microsoft handle this migration for me? --No, Microsoft can't migrate a customer's storage account on their behalf. Customers must use the self-serve options listed above. +After August 31, 2024, you'll no longer be able to manage data in your classic storage accounts through Azure Service Manager. The data will be preserved but we highly recommend migrating these accounts to Azure Resource Manager to avoid any service interruptions. ### Will there be downtime when migrating my storage account from Classic to Resource Manager? |
storage | Storage Network Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md | The following table lists services that can access your storage account data if | Microsoft Autonomous Systems | `Microsoft.AutonomousSystems/workspaces` | Enables access to storage accounts. | | Azure Cache for Redis | `Microsoft.Cache/Redis` | Enables access to storage accounts. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md).| | Azure Cognitive Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |-| Azure Cognitive Services | `Microsoft.CognitiveService/accounts` | Enables access to storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).| +| Azure AI services | `Microsoft.CognitiveService/accounts` | Enables access to storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).| | Azure Container Registry | `Microsoft.ContainerRegistry/registries`| Through the ACR Tasks suite of features, enables access to storage accounts when you're building container images. | | Azure Databricks | `Microsoft.Databricks/accessConnectors` | Enables access to storage accounts. | | Azure Data Factory | `Microsoft.DataFactory/factories` | Enables access to storage accounts through the Data Factory runtime. | |
storage | Videos Azure Files And File Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/videos-azure-files-and-file-sync.md | description: View a comprehensive list of Azure Files and Azure File Sync video Previously updated : 04/19/2023 Last updated : 07/26/2023 If you're new to Azure Files and File Sync or looking to deepen your understandi ## Video list + :::column::: + <iframe width="560" height="315" src="https://www.youtube.com/embed/TOHaNJpAOfc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + :::column-end::: + :::column::: + **How Azure Files can help protect against ransomware and accidental data loss** + :::column-end::: + :::row::: :::column::: <iframe width="560" height="315" src="https://www.youtube.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> |
storage | Elastic San Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md | + + Title: Azure Elastic SAN Preview and virtual machine performance +description: Learn how your workload's performance is handled by Azure Elastic SAN and Azure Virtual Machines. +++ Last updated : 07/28/2023++++# Elastic SAN Preview and virtual machine performance ++This article clarifies how Elastic SAN performance works, and how the combination of Elastic SAN limits and Azure Virtual Machines (VM) limits can affect the performance of your workloads. ++## How performance works ++Azure VMs have input/output operations per second (IOPS) and throughput performance limits based on the [type and size of the VM](../../virtual-machines/sizes.md). An Elastic SAN has a pool of performance that it allocates to each of its volumes. Elastic SAN volumes can be attached to VMs and each volume has its own IOPS and throughput limits. ++Your application's performance gets throttled when it requests more IOPS or throughput than what is allotted for the VM or attached volumes. When throttled, the application has suboptimal performance, and can experience negative consequences like increased latency. One of the main benefits of an Elastic SAN is its ability to provision IOPS automatically, based on demand. Your SAN's IOPS are shared amongst all its volumes, so when a workload peaks, it can be handled without throttling or extra cost. This article shows how this provisioning works. ++### Elastic SAN performance ++An Elastic SAN has three attributes that determine its performance: total capacity, IOPS, and throughput. ++### Capacity ++The total capacity of your Elastic SAN is determined by two different capacities, the base capacity and the additional capacity. Increasing the base capacity also increases the SAN's IOPS and throughput but is more costly than increasing the additional capacity. Increasing additional capacity doesn't increase IOPS or throughput. ++### IOPS ++The IOPS of an Elastic SAN increases by 5,000 per base TiB. So if you had an Elastic SAN that has 6 TiB of base capacity, that SAN could still provide up to 30,000 IOPS. That same SAN would still provide 30,000 IOPS whether it had 50 TiB of additional capacity or 500 TiB of additional capacity, since the SAN's performance is only determined by the base capacity. The IOPS of an Elastic SAN are distributed among all its volumes. ++### Throughput ++The throughput of an Elastic SAN increases by 80 MB/s per base TiB. So if you had an Elastic SAN that has 6 TiB of base capacity, that SAN could still provide up to 480 MB/s. That same SAN would provide 480-MB/s throughput whether it had 50 TiB of additional capacity or 500 TiB of additional capacity, since the SAN's performance is only determined by the base capacity. The throughput of an Elastic SAN is distributed among all its volumes. ++### Elastic SAN volumes ++The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increase by 750 per GiB, up to a maximum of 64,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,024 MB/s. A volume needs at least 86 GiB to be capable of using 64,000 IOPS. A volume needs at least 18 GiB in order to be capable of using the maximum 1,024 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN. 
++## Example configuration ++Each of the example scenarios in this article uses the following configuration for the VMs and the Elastic SAN: ++### VM limits ++|VM |VM IOPS limit | +||| +|Standard_DS2_v2 (AKS) |5,000 | +|Standard_L48s_v2 (workload 1) |48,000 | +|Standard_L32s_v3 (workload 2) |51,200 | +|Standard_L48_v3 (workload 3) |76,800 | ++### Elastic SAN limits ++|Resource |Capacity |IOPS | +|||| +|Elastic SAN |25 TiB |135,000 (provisioned) | +|AKS SAN volume |3 TiB | Up to 64,000 | +|Workload 1 SAN volume |10 TiB |Up to 64,000 | +|Workload 2 SAN volume |4 TiB |Up to 64,000 | +|Workload 3 SAN volume |2 TiB |Up to 64,000 | +++## Example scenarios ++The following example scenarios depict how your Elastic SAN handles performance allocation. ++### Typical workload ++|Workload |Requested IOPS |Served IOPS | +|||| +|AKS workload |3,000 |3,000 | +|Workload 1 |10,000 |10,000 | +|Workload 2 |8,000 |8,000 | +|Workload 3 |20,000 |20,000 | ++In this scenario, no throttling occurs at either the VM or SAN level. The SAN itself has 135,000 IOPS, each volume is large enough to serve up to 64,000 IOPS, enough IOPS are available from the SAN, none of the VMs' IOPS limits have been surpassed, and the total IOPS requested is 41,000. So the workloads all execute without any throttling. +++### Single workload spike +++|Workload |Requested IOPS |Served IOPS |Spike time| +||||| +|AKS workload |2,000 |2,000 |N/A | +|Workload 1 |10,000 |10,000 |N/A | +|Workload 2 |10,000 |10,000 |N/A | +|Workload 3 |64,000 |64,000 |9:00 am | ++In this scenario, no throttling occurs. Workload 3 spiked at 9:00 am, requesting 64,000 IOPS. None of the other workloads spiked and the SAN had enough free IOPS to distribute to the workload, so there was no throttling. ++Generally, this is the ideal configuration for a SAN sharing workloads. It's best to have enough performance to handle the normal operations of workloads, and occasional peaks. +++### All workloads spike +++|Workload |Requested IOPS |Served IOPS |Spike time | +||||| +|AKS workload |5,000 |5,000 |9:00 am | +|Workload 1 |40,000 |19,000 |9:01 am | +|Workload 2 |45,000 |45,000 |9:00 am | +|Workload 3 |64,000 |64,000 |9:00 am | +++It's important to know the behavior of a SAN in the worst-case scenario, where each workload peaks at the same time. ++In this scenario, all the workloads hit their spike at almost the same time. At this point, the total IOPS required by all the workloads combined (64,000 + 45,000 + 40,000 + 5,000) is more than the IOPS provisioned at the SAN level (135,000). So the workloads are throttled. Throttling happens on a first-come, first-served basis, so workloads that request IOPS after the maximum capacity has been reached don't get more performance. In this case, workload 1 requested 40,000 IOPS after the other workloads; the SAN had already allocated most of its available IOPS, so only the remaining IOPS were provided. +++## Next steps ++[Deploy an Elastic SAN (preview)](elastic-san-create.md). |
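The Elastic SAN entry above states the performance formulas in prose: 5,000 IOPS and 80 MB/s per base TiB at the SAN level, and 750 IOPS and 60 MB/s per GiB per volume, capped at 64,000 IOPS and 1,024 MB/s. The following minimal Python sketch only restates those published formulas so the arithmetic in the example scenarios is easy to check; the constant and function names are illustrative assumptions, not part of the article.

```python
# Minimal sketch of the Elastic SAN performance formulas quoted in the entry above.
# Constants come from the article text; names and example sizes are illustrative only.

SAN_IOPS_PER_BASE_TIB = 5_000
SAN_MBPS_PER_BASE_TIB = 80
VOL_IOPS_PER_GIB = 750
VOL_MBPS_PER_GIB = 60
VOL_MAX_IOPS = 64_000
VOL_MAX_MBPS = 1_024


def san_limits(base_tib: int) -> tuple[int, int]:
    """SAN-level IOPS and throughput depend only on base capacity, not additional capacity."""
    return base_tib * SAN_IOPS_PER_BASE_TIB, base_tib * SAN_MBPS_PER_BASE_TIB


def volume_limits(size_gib: int) -> tuple[int, int]:
    """Per-volume limits scale with size and are capped at 64,000 IOPS / 1,024 MB/s."""
    return (
        min(size_gib * VOL_IOPS_PER_GIB, VOL_MAX_IOPS),
        min(size_gib * VOL_MBPS_PER_GIB, VOL_MAX_MBPS),
    )


if __name__ == "__main__":
    # 6 TiB of base capacity -> (30000, 480), matching the 30,000 IOPS / 480 MB/s in the entry.
    print(san_limits(6))
    # A 3 TiB (3,072 GiB) volume hits both per-volume caps -> (64000, 1024).
    print(volume_limits(3 * 1024))
```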
storage | File Sync Share To Share Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-share-to-share-migration.md | Title: Migrate files from one SMB Azure file share to another when using Azure F description: Learn how to migrate files from one SMB Azure file share to another when using Azure File Sync, even if the file shares are in different storage accounts. Previously updated : 07/11/2023 Last updated : 07/27/2023 The following instructions assume you have one Azure File Sync server in your sy 1. Make sure that cloud tiering is off on the server endpoint. You can check and change the status from the Azure portal under server endpoint properties. -1. If you've tiered a small amount of data to the cloud (<1 TiB), run the `Invoke-StorageSyncFileRecall` cmdlet with retries to sync the tiered files back down (see [How to manage tiered files](file-sync-how-to-manage-tiered-files.md)). Because there could be an active cloud tiering session when you first run this cmdlet, it's a good idea to run it twice to ensure that all files are fully recalled and local on the server before you continue. +1. Run the `Invoke-StorageSyncFileRecall` cmdlet with retries to sync any tiered files back down (see [How to manage tiered files](file-sync-how-to-manage-tiered-files.md)). Because there could be an active cloud tiering session when you first run this cmdlet, it's a good idea to run it twice and examine the summary output to ensure that all files are fully recalled and local on the server before you continue. 1. [Create a new SMB Azure file share](../files/storage-how-to-create-file-share.md) as the target. 1. [Create a new sync group](file-sync-deployment-guide.md#create-a-sync-group-and-a-cloud-endpoint) and associate the cloud endpoint to the Azure file share you created. The sync group must be in a storage sync service in the same region as the new target Azure file share. -Now you have two options: You can either sync your data to the new Azure file share [using the same local file server](#connect-to-the-new-azure-file-share-using-the-same-local-file-server) (recommended), or [move to a new Azure File Sync server](#move-to-a-new-azure-file-sync-server). +Now you have two options: You can either sync your data to the new Azure file share [using the same local file server](#connect-to-the-new-azure-file-share) (recommended), or [move to a new Azure File Sync server](#move-to-a-new-azure-file-sync-server-optional). -### Connect to the new Azure file share using the same local file server +### Move to a new Azure File Sync server (optional) -If you plan to use the same local file server, follow these instructions. +If you plan to use the same local file server, you can skip this section and proceed to [Connect to the new Azure file share](#connect-to-the-new-azure-file-share). -1. [Remove the existing sever endpoint](file-sync-server-endpoint-delete.md). This will keep all the data, but will remove the association with the existing sync group and existing file share. --1. If the new sync group isn't in the same storage sync service, you'll need to [unregister the server](file-sync-server-registration.md#registerunregister-a-server-with-storage-sync-service) from that storage sync service and register it with the new service. Keep in mind that a server can only be registered with one storage sync service. --1. 
[Create a new server endpoint](file-sync-server-endpoint-create.md#create-a-server-endpoint) in the sync group you created and connect it to the same local data. ---### Move to a new Azure File Sync server --If you want to move to a new local file server, you can use [Storage Migration Service](/windows-server/storage/storage-migration-service/overview) (SMS) to: +If you want to move to a new local Azure File Sync server, you can use [Storage Migration Service](/windows-server/storage/storage-migration-service/overview) (SMS) to: - Copy over all your share-level permissions - Make several passes to catch up with changes that happened during migration All you need to do is set up a new on-premises file server, and then connect the Optionally, you can manually copy the source share to another share on the existing file server. +### Connect to the new Azure file share ++Follow these instructions to connect to the new Azure file share. ++1. [Remove the existing server endpoint](file-sync-server-endpoint-delete.md). This will keep all the data, but will remove the association with the existing sync group and existing file share. ++1. If the new sync group isn't in the same storage sync service, you'll need to [unregister the server](file-sync-server-registration.md#registerunregister-a-server-with-storage-sync-service) from that storage sync service and register it with the new service. Keep in mind that a server can only be registered with one storage sync service. ++1. [Create a new server endpoint](file-sync-server-endpoint-create.md#create-a-server-endpoint) in the sync group you created and connect it to the same local data. ++ ## Migrate files when cloud tiering is on -If you're using the cloud tiering feature of Azure File Sync, we recommend copying the data from within Azure to prevent unnecessary cloud recalls through the source. The process will differ slightly depending on whether you're migrating within the same region or across regions. +If you're using the cloud tiering feature of Azure File Sync, we recommend copying the data from within Azure to prevent unnecessary cloud recalls through the source. The process will differ slightly depending on whether you're migrating within the same region or across regions. The migration process always requires some downtime during the cutover. An Azure File Sync registered server can only join one storage sync service, and the storage sync service must be in the same region as the share. Therefore, if you're moving between regions, you'll need to migrate to a new Azure File Sync server connected to the target share. If you're moving within the same region, you can use the existing AFS server. +> [!IMPORTANT] +> When mounting Azure file shares in a migration scenario, be sure to use the storage account key to make sure the VM has access to all the files. Don't use a domain identity. + ### Migrate within the same region Follow these instructions if cloud tiering is on and you're migrating within the same region. You can use your existing Azure File Sync server (see diagram), or optionally create a new server if you're concerned about impacting the existing share. You can now start the [initial copy](#initial-copy). Use Robocopy, a tool that's built into Windows, to copy the files from source to target. -1. Run this command at the Windows command prompt: +1. Run this command at the Windows command prompt. Optionally, you can include flags for logging features as a best practice (/NP, /NFL, /NDL, /UNILOG). 
```console- robocopy <source> <target> /mir /copyall /mt:16 /DCOPY:DAT + robocopy <source> <target> /MIR /COPYALL /MT:16 /R:2 /W:1 /B /IT /DCOPY:DAT ``` If your source share was mounted as s:\ and target was t:\ then the command looks like this: ```console- robocopy s:\ t:\ /mir /copyall /mt:16 /DCOPY:DAT + robocopy s:\ t:\ /MIR /COPYALL /MT:16 /R:2 /W:1 /B /IT /DCOPY:DAT ``` 1. Connect the on-premises Azure File Sync server to the new sync group so the namespace metadata can sync. Be sure to use the same root folder name as the existing share. For example, if your current cache location is D:\cache, use T:\cache for the new server endpoint. If you're using the existing Azure File Sync server (for migrations within the same region), place the local cache on a separate volume from the existing endpoint. Using the same volume is okay as long as the directory isn't the same directory or a sub-directory of the server endpoint that's connected to the source share. Enable cloud tiering on this endpoint so that none of the data will automatically download to the on-premises server. |
storage | Files Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-disaster-recovery.md | description: Learn how to recover your data in Azure Files. Understand the conce Previously updated : 06/19/2023 Last updated : 07/28/2023 You can subscribe to the [Azure Service Health Dashboard](https://azure.microsof ## Understand the account failover process -Customer-managed account failover enables you to fail your entire storage account over to the secondary region if the primary becomes unavailable for any reason. When you force a failover to the secondary region, clients can begin writing data to the secondary endpoint after the failover is complete. The failover typically takes about an hour. +Customer-managed account failover enables you to fail your entire storage account over to the secondary region if the primary becomes unavailable for any reason. When you force a failover to the secondary region, clients can begin writing data to the secondary endpoint after the failover is complete. The failover typically takes about an hour. We recommend suspending your workload as much as possible before initiating an account failover. To learn how to initiate an account failover, see [Initiate an account failover](../common/storage-initiate-account-failover.md). The customer initiates the account failover to the secondary endpoint. The failo [ ![Diagram showing the customer initiates account failover to secondary endpoint.](media/files-disaster-recovery/failover-to-secondary.png) ](media/files-disaster-recovery/failover-to-secondary.png#lightbox) -Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are being directed to the new primary endpoint. Existing storage service endpoints remain the same after the failover. +Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are being directed to the new primary endpoint. Existing storage service endpoints remain the same after the failover. File handles and leases aren't retained on failover, so clients must unmount and remount the file shares. > [!IMPORTANT] > After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint/region. To resume replication to the new secondary, configure the account for geo-redundancy again. |
storage | Files Smb Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md | |
storage | Storage Files Quick Create Use Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md | description: This tutorial covers how to create an SMB Azure file share using th Previously updated : 10/24/2022 Last updated : 07/28/2022 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file shares so I can determine whether I want to subscribe to the service. So far, you've created an Azure storage account and a file share with one file i ![Screenshot of Basic tab, basic VM information filled out.](./media/storage-files-quick-create-use-windows/vm-resource-group-and-subscription.png) 1. Under **Instance details**, name the VM *qsVM*.-1. For **Image** select **Windows Server 2019 Datacenter - Gen2**. +1. For **Security type**, select **Standard**. +1. For **Image**, select **Windows Server 2019 Datacenter - x64 Gen2**. 1. Leave the default settings for **Region**, **Availability options**, and **Size**. 1. Under **Administrator account**, add a **Username** and enter a **Password** for the VM. 1. Under **Inbound port rules**, choose **Allow selected ports** and then select **RDP (3389)** and **HTTP** from the drop-down. Now that you've created the VM, connect to it so you can mount your file share. ### Map the Azure file share to a Windows drive 1. In the Azure portal, navigate to the *qsfileshare* fileshare and select **Connect**.-1. Select a drive letter then copy the contents of the second box and paste it in **Notepad**. +1. Select a drive letter and then **Show script**. +1. Copy the script and paste it in **Notepad**. :::image type="content" source="medilet-resize.png"::: Now that you've created the VM, connect to it so you can mount your file share. Now that you've mapped the drive, create a snapshot. -1. In the portal, navigate to your file share, select **Snapshots**, then select **+ Add snapshot**. +1. In the portal, navigate to your file share, select **Snapshots**, then select **+ Add snapshot** and then **OK**. ![Screenshot of storage account snapshots tab.](./media/storage-files-quick-create-use-windows/create-snapshot.png) Now that you've mapped the drive, create a snapshot. :::image type="content" source="media/storage-files-quick-create-use-windows/restore-share-snapshot.png" alt-text="Screenshot of the snapshot tab, qstestfile is selected, restore is highlighted."::: -1. Select **Overwrite original file**. +1. Select **Overwrite original file** and then **OK**. ![Screenshot of restore pop up, overwrite original file is selected.](./media/storage-files-quick-create-use-windows/snapshot-download-restore-portal.png) Now that you've mapped the drive, create a snapshot. ## Delete a share snapshot +1. Before you can delete a share snapshot, you'll need to remove any locks on the storage account. Navigate to the storage account you created for this tutorial and select **Settings** > **Locks**. If any locks are listed, delete them. 1. On your file share, select **Snapshots**. 1. On the **Snapshots** tab, select the last snapshot in the list and select **Delete**. Just like with on-premises VSS snapshots, you can view the snapshots from your m ## Next steps > [!div class="nextstepaction"]-> [Use an Azure file share with Windows](storage-how-to-use-files-windows.md) +> [Use an Azure file share with Windows](storage-how-to-use-files-windows.md) |
synapse-analytics | Overview Cognitive Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md | Using pretrained models from Azure AI services, you can enrich your data with ar There are a few ways that you can use a subset of Azure AI services with your data in Synapse Analytics: -- The "Cognitive Services" wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to a with Azure AI services using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data. Check out [Sentiment analysis wizard](tutorial-cognitive-services-sentiment.md) and [Anomaly detection wizard](tutorial-cognitive-services-anomaly.md) for more details.+- The "Azure AI services" wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to a with Azure AI services using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data. Check out [Sentiment analysis wizard](tutorial-cognitive-services-sentiment.md) and [Anomaly detection wizard](tutorial-cognitive-services-anomaly.md) for more details. - Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provide built-in SynapseML libraries including [synapse.ml.cognitive](https://github.com/microsoft/SynapseML/tree/master/notebooks/features/cognitive_services). The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto - Document Translation: Translates documents across all supported languages and dialects while preserving document structure and data format. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DocumentTranslator.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DocumentTranslator)) ### Document Intelligence-[**Document Intelligence**](https://azure.microsoft.com/services/form-recognizer/) (formerly known as Form Recognizer) +[**Document Intelligence**](https://azure.microsoft.com/services/form-recognizer/) (formerly known as Azure AI Document Intelligence) - Analyze Layout: Extract text and layout information from a given document. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeLayout.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeLayout)) - Analyze Receipts: Detects and extracts data from receipts using optical character recognition (OCR) and our receipt model, enabling you to easily extract structured data from receipts such as merchant name, merchant phone number, transaction date, transaction total, and more. 
([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeReceipts.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeReceipts)) - Analyze Business Cards: Detects and extracts data from business cards using optical character recognition (OCR) and our business card model, enabling you to easily extract structured data from business cards such as contact names, company names, phone numbers, emails, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeBusinessCards.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeBusinessCards)) display( ``` ## Document Intelligence sample-[Document Intelligence](https://azure.microsoft.com/services/form-recognizer/) (formerly known as "Form Recognizer") is a part of Azure AI services that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. In this sample, we analyze a business card image and extract its information into structured data. +[Document Intelligence](https://azure.microsoft.com/services/form-recognizer/) (formerly known as "Azure AI Document Intelligence") is a part of Azure AI services that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. In this sample, we analyze a business card image and extract its information into structured data. ```python |
synapse-analytics | Tutorial Cognitive Services Anomaly | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md | df.write.mode("overwrite").saveAsTable("anomaly_detector_testing_data") ``` A Spark table named **anomaly_detector_testing_data** should now appear in the default Spark database. -## Open the Cognitive Services wizard +<a name='open-the-cognitive-services-wizard'></a> ++## Open the Azure AI services wizard 1. Right-click the Spark table created in the previous step. Select **Machine Learning** > **Predict with a model** to open the wizard. You can now run all cells to perform anomaly detection. Select **Run All**. [Lea - [Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics](../../ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md) - [SynapseML anomaly detection](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#anomaly-detection) - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)- |
synapse-analytics | Tutorial Cognitive Services Sentiment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md | You'll need a Spark table for this tutorial. df.write.mode("overwrite").saveAsTable("default.YourTableName") ``` -## Open the Cognitive Services wizard +<a name='open-the-cognitive-services-wizard'></a> ++## Open the Azure AI services wizard 1. Right-click the Spark table created in the previous procedure. Select **Machine Learning** > **Predict with a model** to open the wizard. |
synapse-analytics | Tutorial Configure Cognitive Services Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md | You can create an [Anomaly Detector](https://portal.azure.com/#create/Microsoft. ![Screenshot that shows Anomaly Detector in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00a.png) -You can create a [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) resource (for Document Intelligence) in the Azure portal: +You can create an [Azure AI Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) resource (for Document Intelligence) in the Azure portal: -![Screenshot that shows Form Recognizer in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-form-recognizer.png) +![Screenshot that shows Azure AI Document Intelligence in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-form-recognizer.png) You can create a [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource in the Azure portal: |
synapse-analytics | System Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md | This article highlights Microsoft system integration partner companies building | :::image type="content" source="./media/system-integration/capax-global-logo.png" alt-text="The logo of Capax Global."::: |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Capax Global](https://www.capaxglobal.com/)<br>| | :::image type="content" source="./media/system-integration/coeo-logo.png" alt-text="The logo of Coeo."::: |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Coeo](https://www.coeo.com/analytics/)<br>| | :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Cognizant](https://mbg.cognizant.com/technologies-capabilities/microsoft-azure/)<br>|-| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Neal Analytics](https://nealanalytics.com/)<br>| +| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. 
Comprised of consultants specializing in Data Science, Business Intelligence, Azure AI services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Neal Analytics](https://nealanalytics.com/)<br>| | :::image type="content" source="./media/system-integration/pragmatic-works-logo.png" alt-text="The logo of Pragmatic Works."::: |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Pragmatic Works](https://www.pragmaticworks.com/)<br>| ## Next steps |
synapse-analytics | Apache Spark Machine Learning Training | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-training.md | When using automated ML within Azure Synapse Analytics, you can leverage the dee > > You can learn more about creating an Azure Machine Learning automated ML experiment by following this [tutorial](./spark/../apache-spark-azure-machine-learning-tutorial.md). -## Azure Cognitive Services -[Azure Cognitive Services](../../ai-services/what-are-ai-services.md) provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services. A Cognitive Service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. You can leverage these pre-trained Cognitive Services automatically within Azure Synapse Analytics. +<a name='azure-cognitive-services'></a> ++## Azure AI services +[Azure AI services](../../ai-services/what-are-ai-services.md) provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services. A Cognitive Service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. You can leverage these pre-trained Azure AI services automatically within Azure Synapse Analytics. ## Next steps This article provides an overview of the various options to train machine learning models within Apache Spark pools in Azure Synapse Analytics. You can learn more about model training by following the tutorial below: - Run Automated ML experiments using Azure Machine Learning and Azure Synapse Analytics: [Automated ML Tutorial](../spark/apache-spark-azure-machine-learning-tutorial.md) - Run SparkML experiments: [Apache SparkML Tutorial](../spark/apache-spark-machine-learning-mllib-notebook.md)-- View the default libraries: [Azure Synapse Analytics runtime](../spark/apache-spark-version-support.md)+- View the default libraries: [Azure Synapse Analytics runtime](../spark/apache-spark-version-support.md) |
synapse-analytics | Apache Spark Secure Credentials With Tokenlibrary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md | Console.WriteLine(connectionString); While Azure Synapse Analytics supports a variety of linked service connections (from pipelines and other Azure products), not all of them are supported from the Spark runtime. Here is the list of supported linked - Azure Blob Storage+ - Azure AI services - Azure Cosmos DB - Azure Data Explorer - Azure Database for MySQL |
synapse-analytics | Microsoft Spark Utilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md | putSecretWithLS(linkedService, secretName, secretValue): puts AKV secret for a g Returns Azure AD token for a given audience, name (optional). The table below list all the available audience types: -|Audience Type|Audience key| -|--|--| -|Audience Resolve Type|'Audience'| -|Storage Audience Resource|'Storage'| -|Dedicated SQL pools (Data warehouse)|'DW'| -|Data Lake Audience Resource|'AzureManagement'| -|Vault Audience Resource|'DataLakeStore'| -|Azure OSSDB Audience Resource|'AzureOSSDB'| -|Azure Synapse Resource|'Synapse'| -|Azure Data Factory Resource|'ADF'| +| Audience Type | String literal to be used in API call | +|-|| +| Azure Storage | `Storage` | +| Azure Key Vault | `Vault` | +| Azure Management | `AzureManagement` | +| Azure SQL Data Warehouse (Dedicated and Serverless) | `DW` | +| Azure Synapse | `Synapse` | +| Azure Data Lake Store | `DataLakeStore` | +| Azure Data Factory | `ADF` | +| Azure Data Explorer | `AzureDataExplorer` | +| Azure Database for MySQL | `AzureOSSDB` | +| Azure Database for MariaDB | `AzureOSSDB` | +| Azure Database for PostgreSQL | `AzureOSSDB` | :::zone pivot = "programming-language-python" |
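The revised audience table in the entry above lists the string literals accepted when requesting an Azure AD token from the Spark utilities. As an illustrative sketch only (it assumes a Synapse notebook session and is not taken from the article), passing one of those literals to `mssparkutils.credentials.getToken` could look like this:

```python
# Illustrative only: acquire an Azure AD token for the "Storage" audience in a
# Synapse Spark notebook, using one of the string literals from the table above.
# mssparkutils is preinstalled in Synapse Spark pools; this won't run outside one.
from notebookutils import mssparkutils

# "Storage" is one of the audience string literals listed in the entry above.
storage_token = mssparkutils.credentials.getToken("Storage")

# The token is a bearer secret; print only its length to avoid leaking it in notebook output.
print(f"Acquired a token of length {len(storage_token)} for the Storage audience")
```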
synapse-analytics | Quickstart Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-arm-template.md | Title: Create a dedicated SQL pool (formerly SQL DW) by using Azure Resource Man description: Learn how to create an Azure Synapse Analytics SQL pool by using Azure Resource Manager template. -+++tags: azure-resource-manager Last updated 06/09/2020 The template defines one resource: ## Deploy the template 1. Select the following image to sign in to Azure and open the template. This template creates a dedicated SQL pool (formerly SQL DW).- + [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fsql-data-warehouse-transparent-encryption-create%2Fazuredeploy.json) 1. Enter or update the following values: The template defines one resource: * **SQL Administrator login**: Enter the administrator username for the SQL Server. * **SQL Administrator password**: Enter the administrator password for the SQL Server. * **Data Warehouse Name**: Enter a dedicated SQL pool name.- * **Transparent Data Encryption**: Accept the default, enabled. + * **Transparent Data Encryption**: Accept the default, enabled. * **Service Level Objective**: Accept the default, DW400c. * **Location**: Accept the default location of the resource group. * **Review and Create**: Select. You can either use the Azure portal to check the deployed resources, or use Azur ```azurecli-interactive echo "Enter the resource group where your dedicated SQL pool (formerly SQL DW) exists:" && read resourcegroupName &&-az resource list --resource-group $resourcegroupName +az resource list --resource-group $resourcegroupName ``` # [PowerShell](#tab/PowerShell) |
synapse-analytics | Quickstart Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bicep.md | Title: Create an Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) us description: Learn how to create an Azure Synapse Analytics SQL pool using Bicep. -+++tags: azure-resource-manager, bicep Last updated 05/20/2022 |
synapse-analytics | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md | This section is an archive of guidance and sample project resources for Azure Sy |**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |-| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. | +| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. | | June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | | June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | | June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). | What follows are the previous format of monthly news updates for Synapse Analyti ### General -* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. +* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. * **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. 
The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md). |
synapse-analytics | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md | This section summarizes new guidance and sample project resources for Azure Syna | September 2022 | **What is the difference between Synapse dedicated SQL pool (formerly SQL DW) and Serverless SQL pool?** | Understand dedicated vs serverless pools and their concurrency. Read more at [basic concepts of dedicated SQL pools and serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/understand-synapse-dedicated-sql-pool-formerly-sql-dw-and/ba-p/3594628).| | September 2022 | **Reading Delta Lake in dedicated SQL Pool** | [Sample script](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/Delta%20Lake) to import Delta Lake files directly into the dedicated SQL Pool and support features like time-travel. For an explanation, see [Reading Delta Lake in dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/reading-delta-lake-in-dedicated-sql-pool/ba-p/3571053).| | September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).|-| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. | +| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. | | June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | | June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | | June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). 
| For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/Syn - [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers) - [Azure Synapse Analytics terminology](overview-terminology.md) - [Azure Synapse Analytics migration guides](migration-guides/index.yml)-- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)+- [Azure Synapse Analytics frequently asked questions](overview-faq.yml) |
update-center | Dynamic Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md | The criteria will be evaluated at the scheduled run time, which will be the fina > [!NOTE] > You can associate one dynamic scope to one schedule. +## Prerequisites ++#### [Azure VMs](#tab/avms) ++- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*. +- Associate a Schedule with the VM. +- Ensure that you register the preview feature in your Azure subscription by following these steps: ++ 1. Sign in to the [Azure portal](https://portal.azure.com). + 1. In search, enter and select **Subscriptions**. + 1. In **Subscriptions** home page, select your subscription from the list. + 1. In the **Subscription | Preview features** page, under **Settings**, select **Preview features**. + 1. Search for **Dynamic Scope (preview)**. + 1. Select **Register** and then select **OK** to get started with Dynamic scope (preview). + +#### [Arc-enabled VMs](#tab/arcvms) ++There are no prerequisites for patch orchestration. However, you must associate a schedule with the VM for Schedule patching. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md). +++ ## Permissions For dynamic scoping (preview) and configuration assignment, ensure that you have the following permissions: For dynamic scoping (preview) and configuration assignment, ensure that you have - Write permissions to create or modify a schedule. - Read permissions to assign or read a schedule. +## Service limits -## Prerequisites for Azure VMs +The following are the Dynamic scope (preview) limits for **each dynamic scope**. -- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.-- Associate a Schedule with the VM.+| Resource | Limit | +|-|-| +| Resource associations | 1000 | +| Number of tag filters | 50 | +| Number of Resource Group filters | 50 | > [!NOTE]-> For Arc VMs, there are no patch orchestration pre-requisites. However, you must associate a schedule with the VM for Schedule patching. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md). -+> The above limits are for Dynamic scope (preview) in the Guest scope only. ## Next steps |
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | Update management center (preview) uses maintenance control schedule instead of 1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. 1. VMs in a common availability set are updated within Update Domain boundaries, and VMs across multiple Update Domains aren't updated concurrently. +## Service limits ++The following are the recommended limits for these indicators: ++| Indicator | Limit | +|-|-| +| Number of schedules per Subscription per Region | 250 | +| Total number of Resource associations to a schedule | 3000 | +| Resource associations on each dynamic scope | 1000 | +| Number of dynamic scopes per Resource Group or Subscription per Region | 250 | ++ ## Schedule recurring updates on single VM >[!NOTE] |
virtual-desktop | Private Link Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md | Title: Set up Private Link with Azure Virtual Desktop - Azure description: Learn how to set up Private Link with Azure Virtual Desktop to privately connect to your remote resources. + Last updated 07/17/2023 |
virtual-desktop | Security Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md | Software updates for the Remote Desktop clients you can use to access Azure Virt ## Next steps -To learn how to enable multi-factor authentication, see [Set up multi-factor authentication](set-up-mfa.md). +- Learn how to [Set up multi-factor authentication](set-up-mfa.md). +- [Apply Zero Trust principles for an Azure Virtual Desktop deployment](/security/zero-trust/azure-infrastructure-avd). |
virtual-desktop | Teams Supported Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md | Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure description: Supported features for Microsoft Teams on Azure Virtual Desktop. Previously updated : 06/12/2023 Last updated : 07/26/2023 The following table lists whether the Windows Desktop client, Azure Virtual Desk | Dynamic e911 | Yes | Yes | | Give and take control | Yes | Yes | | Live captions | Yes | Yes |+| Live reactions | Yes | Yes | | Manage breakout rooms | Yes | Yes | | Mirror my video | Yes | No | | Multiwindow | Yes | Yes | The following table lists whether the Windows Desktop client, Azure Virtual Desk | Screen share and video together | Yes | Yes | | Screen share | Yes | Yes | | Secondary ringer | Yes | Yes |+| Shared system audio | Yes | No | | Simulcast | Yes | Yes | ## Version requirements The following table lists the minimum required versions for each Teams feature. |--|--|--|--|--| | Application window sharing | 1.2.3770 and later | Not supported | 1.31.2211.15001 | Updates within 90 days of the current version | | Audio/video call | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |-| Background blur | 1.2.3004 and later | 10.7.10 and later | 1.1.2110.16001 and later | 1.5.00.11865 and later | -| Background images | 1.2.3004 and later | 10.7.10 and later | 1.1.2110.16001 and later | 1.5.00.11865 and later | +| Background blur | 1.2.3004 and later | 10.7.10 and later | 1.1.2110.16001 and later | Updates within 90 days of the current version | +| Background images | 1.2.3004 and later | 10.7.10 and later | 1.1.2110.16001 and later | Updates within 90 days of the current version | | CART transcriptions | 1.2.2322 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Call health panel | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Configure audio devices | 1.2.1755 and later | Not supported | 1.0.2006.11001 and later | Updates within 90 days of the current version | The following table lists the minimum required versions for each Teams feature. 
| Dynamic e911 | 1.2.2600 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Give and take control | 1.2.2924 and later | 10.7.10 and later | 1.0.2006.11001 and later (Windows), 1.31.2211.15001 and later (macOS) | Updates within 90 days of the current version | | Live captions | 1.2.2322 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |+| Live reactions | 1.2.1755 and later | 10.7.7 and later | 1.1.2110.16001 and later | Updates within 90 days of the current version | | Manage breakout rooms | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Mirror my video | 1.2.3770 and later | Not supported | 1.0.2006.11001 and later | Updates within 90 days of the current version |-| Multiwindow | 1.2.1755 and later | 10.7.7 and later | 1.1.2110.16001 and later | 1.5.00.11865 and later | +| Multiwindow | 1.2.1755 and later | 10.7.7 and later | 1.1.2110.16001 and later | Updates within 90 days of the current version | | Noise suppression | 1.2.3316 and later | 10.8.1 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Screen share and video together | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Screen share | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Secondary ringer | 1.2.3004 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |+| Shared system audio | 1.2.4058 and later | Not supported | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Simulcast | 1.2.3667 and later | 10.8.1 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | ## Next steps |
virtual-desktop | Troubleshoot Custom Image Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-custom-image-templates.md | Azure Image Builder uses HashiCorp Packer to create images. Packer outputs all l In this resource group is a storage account with a blob container called **packerlogs**. In the container is a folder named with a GUID in which you'll find the log file. Entries for built-in scripts you use to customize your image begin with **Starting AVD AIB Customization: {Script name}: {Timestamp}**, to help you locate any errors related to the scripts. +To learn how to interpret Azure Image Builder logs, see [Troubleshoot Azure VM Image Builder](../virtual-machines/linux/image-builder-troubleshoot.md). + > [!IMPORTANT] > Microsoft Support doesn't handle issues for any customer-created scripts, or any scripts or templates copied from a Microsoft repository and modified. You are welcome to collaborate and improve these tools in our [GitHub repository](https://github.com/Azure/RDS-Templates/issues), where you can open an issue. For more information, see [Why do we not support customer or third party scripts?](https://techcommunity.microsoft.com/t5/ask-the-performance-team/help-my-powershell-script-isn-t-working-can-you-fix-it/ba-p/755797) |
virtual-machine-scale-sets | Virtual Machine Scale Sets Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md | |
virtual-machines | Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md | |
virtual-machines | Lasv3 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lasv3-series.md | The Lasv3-series of Azure Virtual Machines (Azure VMs) features high-throughput, 1. **Temp disk**: Lasv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80as_v3 provides 800 GiB at 40000 IOPS and 800 MBps. This configuration ensures that the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation. 2. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. -3. **NVMe Disk encryption** Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVME drives encrypted by default using hardware-based encryption with a Platform-managed key, except for the regions listed below. +3. **NVMe Disk encryption** Lasv3 VMs created or allocated on or after 1/1/2023 have their local NVMe drives encrypted by default using hardware-based encryption with a Platform-managed key, except for the regions listed below. > [!NOTE]-> Central US and Qatar Central do not support Local NVME disk encryption, but will be added in the future. +> Central US and Qatar Central Lasv3 VMs created or allocated on or after 4/1/2023 have their local NVMe drives encrypted. 4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lasv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on Lasv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization. 5. **Max burst uncached data disk throughput**: Lasv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time. |
virtual-machines | Create Upload Generic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md | The mechanism for rebuilding the initrd or initramfs image may vary depending on 1. Back up the existing initrd image: ```bash- sudo cd /boot + cd /boot sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak ``` The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin **Initramfs** ```bash- sudo cd /boot + cd /boot sudo cp initramfs-<kernel-version>.img <kernel-version>.img.bak sudo dracut -f -v initramfs-<kernel-version>.img <kernel-version> --add-drivers "hv_vmbus hv_netvsc hv_storvsc" sudo grub-mkconfig -o /boot/grub/grub.cfg The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin **Initrd** ```bash- sudo cd /boot + cd /boot sudo cp initrd.img-<kernel-version> initrd.img-<kernel-version>.bak sudo mkinitramfs -o initrd.img-<kernel-version> <kernel-version> --with=hv_vmbus,hv_netvsc,hv_storvsc sudo update-grub The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin ```bash sudo rm -f /var/log/waagent.log sudo cloud-init clean- sudo -force -deprovision+user + sudo waagent -force -deprovision+user sudo rm -f ~/.bash_history sudo export HISTSIZE=0 ``` |
virtual-machines | Create Upload Ubuntu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md | This article assumes that you've already installed an Ubuntu Linux operating sys 1. Change directory to the boot EFI directory: ```bash- sudo cd /boot/efi/EFI + cd /boot/efi/EFI ``` 2. Copy the ubuntu directory to a new directory named boot: This article assumes that you've already installed an Ubuntu Linux operating sys 3. Change directory to the newly created boot directory: ```bash- sudo cd boot + cd boot ``` 4. Rename the shimx64.efi file: |
virtual-machines | N Series Driver Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md | sudo reboot With Secure Boot enabled, all Linux kernel modules are required to be signed by the key trusted by the system. -1. Find latest nvidia driver version +1. Find latest NVIDIA driver version compatible with Azure + + ``` + sudo apt-get update + ``` -``` -sudo apt-get update -NVIDIA_DRIVER_VERSION=$(sudo apt-cache search 'linux-modules-nvidia-[0-9]+-azure$' | awk '{print $1}' | sort | tail -n 1 | head -n 1 | awk -F"-" '{print $4}') -``` + ``` + NVIDIA_DRIVER_VERSION=$(sudo apt-cache search 'linux-modules-nvidia-[0-9]+-azure$' | awk '{print $1}' | sort | tail -n 1 | head -n 1 | awk -F"-" '{print $4}') + ``` -2. Install pre-built azure linux kernel based nvidia modules and driver +2. Install pre-built Azure Linux kernel based NVIDIA modules and driver -``` -sudo apt install -y linux-modules-nvidia-${NVIDIA_DRIVER_VERSION}-azure nvidia-driver-${NVIDIA_DRIVER_VERSION} -``` + ``` + sudo apt install -y linux-modules-nvidia-${NVIDIA_DRIVER_VERSION}-azure nvidia-driver-${NVIDIA_DRIVER_VERSION} + ``` -3. Change preference of nvidia packages to prefer NVIDIA repos +3. Change preference of NVIDIA packages to prefer NVIDIA repository -``` -sudo tee /etc/apt/preferences.d/cuda-repository-pin-600 > <<EOL -Package: nsight-compute -Pin: origin *ubuntu.com* -Pin-Priority: -1 -Package: nsight-systems -Pin: origin *ubuntu.com* -Pin-Priority: -1 -Package: nvidia-modprobe -Pin: release l=NVIDIA CUDA -Pin-Priority: 600 -Package: nvidia-settings -Pin: release l=NVIDIA CUDA -Pin-Priority: 600 -Package: * -Pin: release l=NVIDIA CUDA -Pin-Priority: 100 -EOL -``` + ``` + sudo tee /etc/apt/preferences.d/cuda-repository-pin-600 > <<EOL + Package: nsight-compute + Pin: origin *ubuntu.com* + Pin-Priority: -1 + Package: nsight-systems + Pin: origin *ubuntu.com* + Pin-Priority: -1 + Package: nvidia-modprobe + Pin: release l=NVIDIA CUDA + Pin-Priority: 600 + Package: nvidia-settings + Pin: release l=NVIDIA CUDA + Pin-Priority: 600 + Package: * + Pin: release l=NVIDIA CUDA + Pin-Priority: 100 + EOL + ``` 4. Add CUDA repository -``` -sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub -sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /" -``` + ``` + sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/3bf863cc.pub + ``` -5. Find appropriate CUDA driver version + ``` + sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/ /" + ``` + + where `$distro/$arch` should be replaced by one of the following: -``` -CUDA_DRIVER_VERSION=$(apt-cache madison cuda-drivers | awk '{print $3}' | sort -r | while read line; do - if dpkg --compare-versions $(dpkg-query -f='${Version}\n' -W nvidia-driver-${NVIDIA_DRIVER_VERSION}) ge $line ; then - echo "$line" - break - fi -done) -NVIDIA_DRIVER_MAPPING=$(echo $CUDA_DRIVER_VERSION | awk -F"." '{print $1}') -``` + ``` + ubuntu2004/arm64 + ubuntu2004/x86_64 + ubuntu2204/arm64 + ubuntu2204/x86_64 + ``` + + If `add-apt-repository` command is not found, run `sudo apt-get install software-properties-common` to install it. -6. Install CUDA driver +5. 
Install the kernel headers and development packages, and remove outdated signing key -``` -sudo apt install -y cuda-drivers-${NVIDIA_DRIVER_MAPPING}=${CUDA_DRIVER_VERSION} cuda-drivers=${CUDA_DRIVER_VERSION} -``` + ``` + sudo apt-get install linux-headers-$(uname -r) + sudo apt-key del 7fa2af80 + ``` -7. Find CUDA toolkit and runtime version +6. Install the new cuda-keyring package -``` -CUDA_VERSION=$(apt-cache showpkg cuda-drivers | grep -o 'cuda-runtime-[0-9][0-9]-[0-9],cuda-drivers [0-9\.]*' | while read line; do - if dpkg --compare-versions ${CUDA_DRIVER_VERSION} ge $(echo $line | grep -Eo '[[:digit:]]+\.[[:digit:]]+') ; then - echo $(echo $line | grep -Eo '[[:digit:]]+-[[:digit:]]') - break - fi -done) -``` + ``` + wget https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-keyring_1.1-1_all.deb + sudo dpkg -i cuda-keyring_1.1-1_all.deb + ``` -8. Install CUDA toolkit and runtime + Note: When prompt on different versions of cuda-keyring, select `Y or I : install the package maintainer's version` to proceed. -``` -sudo apt install -y cuda-${CUDA_VERSION} -``` +7. Update the APT repository cache ++ ``` + sudo apt-get update + ``` + +8. Install CUDA toolkit and driver ++ ``` + sudo apt-get install -y cuda + sudo apt-get install -y nvidia-gds + ``` ++ Note that during the installation you will be prompted for password when configuring secure boot, a password of your choice needs to be provided and then proceed. ++ ![Secure Boot Password Configuration](./media/n-series-driver-setup/secure-boot-passwd.png) ++9. Reboot the VM ++ ``` + sudo reboot + ``` ++10. Verify the installation + + a. Verify NVIDIA driver is installed and loaded + + ``` + dpkg -l | grep -i nvidia + nvidia-smi + ``` ++ b. Verify CUDA toolkit is installed and loaded ++ ``` + dpkg -l | grep -i cuda + export PATH=/usr/local/cuda/bin:$PATH + export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH + nvcc --version + ``` ### CentOS or Red Hat Enterprise Linux sudo apt install -y cuda-${CUDA_VERSION} LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details. Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions. - ```bash - wget https://aka.ms/lis - tar xvzf lis - cd LISISO + ```bash + wget https://aka.ms/lis + tar xvzf lis + cd LISISO - sudo ./install.sh - sudo reboot - ``` + sudo ./install.sh + sudo reboot + ``` 3. Reconnect to the VM and continue installation with the following commands:+ ```bash sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo sudo apt install -y cuda-${CUDA_VERSION} The installation can take several minutes. - > [!NOTE] + > [!NOTE] > Visit [Fedora](https://dl.fedoraproject.org/pub/epel/) and [Nvidia CUDA repo](https://developer.download.nvidia.com/compute/cuda/repos/) to pick the correct package for the CentOS or RHEL version you want to use. > To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection EnableUI=FALSE ``` -9. Remove the following from `/etc/nvidia/gridd.conf` if its present: +9. Remove the following from `/etc/nvidia/gridd.conf` if it is present: ``` FeatureType=0 ```+ 10. 
Reboot the VM and proceed to verify the installation. #### Install GRID driver on Ubuntu with Secure Boot enabled -The GRID driver installation process does not offer any options to skip kernel module build and installation, so secure boot has to be disabled in Linux VMs in order to use them with GRID, after installing signed kernel modules. +The GRID driver installation process does not offer any options to skip kernel module build and installation and select a different source of signed kernel modules, so secure boot has to be disabled in Linux VMs in order to use them with GRID, after installing signed kernel modules. ### CentOS or Red Hat Enterprise Linux The GRID driver installation process does not offer any options to skip kernel m Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions. - ```bash - wget https://aka.ms/lis - tar xvzf lis - cd LISISO + ```bash + wget https://aka.ms/lis + tar xvzf lis + cd LISISO - sudo ./install.sh - sudo reboot + sudo ./install.sh + sudo reboot - ``` + ``` 4. Reconnect to the VM and run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices. The GRID driver installation process does not offer any options to skip kernel m chmod +x NVIDIA-Linux-x86_64-grid.run sudo ./NVIDIA-Linux-x86_64-grid.run- ``` + ``` + 6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**. 7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/ The GRID driver installation process does not offer any options to skip kernel m IgnoreSP=FALSE EnableUI=FALSE ```+ 9. Remove one line from `/etc/nvidia/gridd.conf` if it is present: ``` FeatureType=0 ```+ 10. Reboot the VM and proceed to verify the installation. |
virtual-machines | Suse Create Upload Vhd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md | As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your 6. Enable waagent & cloud-init to start on boot ```bash- sudo -i sudo chkconfig waagent on sudo systemctl enable cloud-init-local.service sudo systemctl enable cloud-init.service As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your 10. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V: ```bash- sudo -i sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules sudo rm -f /etc/udev/rules.d/70-persistent-net.rules ``` As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However this step is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk or create the swap file. Use these commands to modify `/etc/waagent.conf` appropriately: ```bash- sudo -i sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf ``` As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your * Use a cloud-init directive baked into the image that configures swap space every time the VM is created: ```bash- sudo -i sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your ```bash sudo rm -f ~/.bash_history # Remove current user history- sudo -i sudo rm -rf /var/lib/waagent/ sudo rm -f /var/log/waagent.log sudo waagent -force -deprovision+user |
virtual-machines | Lsv3 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md | The Lsv3-series VMs are available in sizes from 8 to 80 vCPUs. There are 8 GiB o 1. **Temp disk**: Lsv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80s_v3 provides 800 GiB at 40000 IOPS and 800 MBps. This configuration ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation. 2. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. -3. **NVMe Disk encryption** Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVME drives encrypted by default using hardware-based encryption with a Platform-managed key, except for the regions listed below. +3. **NVMe Disk encryption** Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVMe drives encrypted by default using hardware-based encryption with a Platform-managed key, except for the regions listed below. > [!NOTE]-> Central US and Qatar Central do not support Local NVME disk encryption, but will be added in the future. +> Central US and Qatar Central Lsv3 VMs created or allocated on or after 4/1/2023 have their local NVMe drives encrypted. 4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization. 5. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time. |
virtual-machines | Snapshot Copy Managed Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/snapshot-copy-managed-disk.md | To create a snapshot using the Azure portal, complete these steps. # [PowerShell](#tab/powershell) -This example requires that you use [Cloud Shell](https://shell.azure.com/bash) or have the [Azure CLI](/cli/azure/) installed. +This example requires that you use [Cloud Shell](https://shell.azure.com/bash) or install the [Azure PowerShell module](/powershell/azure/install-azure-powershell). Follow these steps to take a snapshot with the `New-AzSnapshotConfig` and `New-AzSnapshot` cmdlets. This example assumes that you have a VM called *myVM* in the *myResourceGroup* resource group. The code sample provided creates a snapshot in the same resource group and within the same region as your source VM. |
virtual-machines | Client Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/client-images.md | If you do not know your offer ID, you can obtain it through the Azure portal. - On the *Subscriptions* window: ![Offer ID details from the Azure portal](./media/client-images/offer-id-azure-portal.png) - Or, click **Billing** and then click your subscription ID. The offer ID appears in the *Billing* window. -- You can also view the offer ID from the ['Subscriptions' tab](https://account.windowsazure.com/Subscriptions) of the Azure Account portal:- ![Offer ID details from the Azure Account portal](./media/client-images/offer-id-azure-account-portal.png) ## Next steps You can now deploy your VMs using [PowerShell](quick-create-powershell.md), [Resource Manager templates](ps-template.md), or [Visual Studio](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md). |
virtual-network-manager | Create Virtual Network Manager Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-bicep.md | |
virtual-network | How To Create Encryption Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption-cli.md | |
virtual-network | How To Create Encryption Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption-powershell.md | Remove-AzResourceGroup -Name 'test-rg' -Force - For more information about Azure Virtual Networks, see [What is Azure Virtual Network?](/azure/virtual-network/virtual-networks-overview). -- For more information about Azure Virtual Network encryption, see [What is Azure Virtual Network encryption?](virtual-network-encryption-overview.md).+- For more information about Azure Virtual Network encryption, see [What is Azure Virtual Network encryption?](virtual-network-encryption-overview.md). |
virtual-network | Network Security Group How It Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-group-how-it-works.md | Reference the previous picture, along with the following text, to understand how For inbound traffic, Azure processes the rules in a network security group associated to a subnet first, if there's one, and then the rules in a network security group associated to the network interface, if there's one. This process includes intra-subnet traffic as well. -- **VM1**: The security rules in *NSG1* are processed, since it's associated to *Subnet1* and *VM1* is in *Subnet1*. Unless you've created a rule that allows port 80 inbound, the [DenyAllInbound](./network-security-groups-overview.md#denyallinbound) default security rule denies the traffic. The traffic doesn't get evaluated by NSG2 because it's associated with the network interface. If *NSG1* allows port 80 in its security rule, *NSG2* processes the traffic. To allow port 80 to the virtual machine, both *NSG1* and *NSG2* must have a rule that allows port 80 from the internet.+- **VM1**: The security rules in *NSG1* are processed, since it's associated to *Subnet1* and *VM1* is in *Subnet1*. Unless you've created a rule that allows port 80 inbound, the [DenyAllInbound](./network-security-groups-overview.md#denyallinbound) default security rule denies the traffic. This blocked traffic then doesn't get evaluated by NSG2 because it's associated with the network interface. However if *NSG1* allows port 80 in its security rule, then *NSG2* processes the traffic. To allow port 80 to the virtual machine, both *NSG1* and *NSG2* must have a rule that allows port 80 from the internet. - **VM2**: The rules in *NSG1* are processed because *VM2* is also in *Subnet1*. Since *VM2* doesn't have a network security group associated to its network interface, it receives all traffic allowed through *NSG1* or is denied all traffic denied by *NSG1*. Traffic is either allowed or denied to all resources in the same subnet when a network security group is associated to a subnet. |
virtual-network | Service Tags Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md | By default, service tags reflect the ranges for the entire cloud. Some service t | **AzureWebPubSub** | AzureWebPubSub | Both | Yes | Yes | | **BatchNodeManagement** | Management traffic for deployments dedicated to Azure Batch. | Both | Yes | Yes | | **ChaosStudio** | Azure Chaos Studio. <br/><br/>**Note**: If you have enabled Application Insights integration on the Chaos Agent, the AzureMonitor tag is also required. | Both | No | Yes |-| **CognitiveServicesFrontend** | The address ranges for traffic for Cognitive Services frontend portals. | Both | No | Yes | -| **CognitiveServicesManagement** | The address ranges for traffic for Azure Cognitive Services. | Both | No | Yes | +| **CognitiveServicesFrontend** | The address ranges for traffic for Azure AI services frontend portals. | Both | No | Yes | +| **CognitiveServicesManagement** | The address ranges for traffic for Azure AI services. | Both | No | Yes | | **DataFactory** | Azure Data Factory | Both | No | Yes | | **DataFactoryManagement** | Management traffic for Azure Data Factory. | Outbound | No | Yes | | **Dynamics365ForMarketingEmail** | The address ranges for the marketing email service of Dynamics 365. | Both | Yes | Yes | |
virtual-network | Virtual Network Service Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md | Service endpoints are available for the following Azure services and regions. Th - **[Azure Event Hubs](../event-hubs/event-hubs-service-endpoints.md?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.EventHub*): Generally available in all Azure regions. - **[Azure Data Lake Store Gen 1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.AzureActiveDirectory*): Generally available in all Azure regions where ADLS Gen1 is available. - **[Azure App Service](../app-service/app-service-ip-restrictions.md)** (*Microsoft.Web*): Generally available in all Azure regions where App service is available.-- **[Azure Cognitive Services](../ai-services/cognitive-services-virtual-networks.md?tabs=portal)** (*Microsoft.CognitiveServices*): Generally available in all Azure regions where Cognitive services are available.+- **[Azure Cognitive Services](../ai-services/cognitive-services-virtual-networks.md?tabs=portal)** (*Microsoft.CognitiveServices*): Generally available in all Azure regions where Azure AI services are available. **Public Preview** |
virtual-wan | About Virtual Hub Routing Preference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md | description: Learn about Virtual WAN virtual hub routing preference. Previously updated : 05/31/2022 Last updated : 07/28/2023 # Virtual hub routing preference A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables. -The virtual hub router takes routing decisions using a built-in route selection algorithm. To influence routing decisions in virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections for a destination route-prefix in on-premises, the virtual hub router's route selection algorithm will adapt based on the hub routing preference configuration and selects the best routes. For steps, see [How to configure virtual hub routing preference](howto-virtual-hub-routing-preference.md). --+The virtual hub router takes routing decisions using a built-in route selection algorithm. To influence routing decisions in virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections for a destination route-prefix in on-premises, the virtual hub router's route selection algorithm adapts based on the hub routing preference configuration and selects the best routes. For steps, see [How to configure virtual hub routing preference](howto-virtual-hub-routing-preference.md). ## <a name="selection"></a>Route selection algorithm for virtual hub |
virtual-wan | Connect Virtual Network Gateway Vwan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/connect-virtual-network-gateway-vwan.md | description: Learn how to connect an Azure VPN gateway (virtual network gateway) Previously updated : 06/24/2022 Last updated : 07/28/2023 |
virtual-wan | Effective Routes Virtual Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/effective-routes-virtual-hub.md | You can view all routes of your Virtual WAN hub in the Azure portal. This articl ## <a name="routing"></a>Select connections or route tables -1. Navigate to your virtual hub, then select **Routing**. On the Routing page, select **Effective Routes**. +1. Navigate to your virtual hub. In the left pane, select **Effective Routes**. 1. From the dropdown, you can select **Route Table**. If you don't see a Route Table option, this means that you don't have a custom or default route table set up in this virtual hub. ## <a name="output"></a>View output The page output shows the following fields: -* **Prefix**: Address prefix known to the current entity (learnt from the virtual hub router) +* **Prefix**: Address prefix known to the current entity (learned from the virtual hub router) * **Next hop type**: Can be Virtual Network Connection, VPN_S2S_Gateway, ExpressRouteGateway, Remote Hub, or Azure Firewall. * **Next hop**: This is the link to the resource ID of the next hop, or simply shows On-link to imply the current hub. * **Origin**: Resource ID of the routing source. The page output shows the following fields: The values in the following example table imply that the virtual hub connection or route table has learned the route of 10.2.0.0/24 (a branch prefix). It has learned the route due to the **VPN Next hop type** VPN_S2S_Gateway with **Next hop** VPN Gateway resource ID. **Route Origin** points to the resource ID of the originating VPN gateway/Route table/Connection. **AS Path** indicates the AS Path for the branch. -Use the scroll bar at the bottom of the table to view the "AS Path". +Use the scroll bar at the bottom of the table to view the 'AS Path'. | **Prefix** | **Next hop type** | **Next hop** | **Route Origin** |**AS Path** | | | | | | | Use the scroll bar at the bottom of the table to view the "AS Path". **Considerations:** -* If you see 0.0.0.0/0 in the **Get Effective Routes** output, it implies the route exists in one of the route tables. However, if this route was set up for internet, an additional flag **"enableInternetSecurity": true** is required on the connection. The effective route on the VM NIC will not show the route if the "enableInternetSecurity" flag on the connection is "false". +* If you see 0.0.0.0/0 in the **Get Effective Routes** output, it implies the route exists in one of the route tables. However, if this route was set up for internet, an additional flag **"enableInternetSecurity": true** is required on the connection. The effective route on the VM NIC won't show the route if the "enableInternetSecurity" flag on the connection is "false". * The **Propagate Default Route** field is seen in Azure Virtual WAN portal when you edit a virtual network connection, a VPN connection, or an ExpressRoute connection. This field indicates the **enableInternetSecurity** flag, which is always by default "false" for ExpressRoute and VPN connections, but "true" for virtual network connections. -* When viewing effective routes on a VM NIC, if you see the next hop as 'Virtual Network Gateway', that implies the Virtual hub router when the VM is in a spoke connected to a Virtual WAN hub. +* When you view effective routes on a VM NIC, if you see the next hop as 'Virtual Network Gateway', that implies the Virtual hub router when the VM is in a spoke connected to a Virtual WAN hub. 
* The **View Effective routes** output for a virtual hub route table is populated only if the virtual hub has at least one type of connection (VPN/ER/VNET) connected to it. |
virtual-wan | Gateway Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/gateway-settings.md | description: This article answers common questions about Virtual WAN gateway set Previously updated : 05/20/2022 Last updated : 07/28/2023 |
virtual-wan | How To Nva Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-nva-hub.md | description: Learn how to deploy a Network Virtual Appliance in the Virtual WAN Previously updated : 06/17/2022 Last updated : 07/28/2023 # Customer intent: As someone with a networking background, I want to create a Network Virtual Appliance (NVA) in my Virtual WAN hub. # How to create a Network Virtual Appliance in an Azure Virtual WAN hub -This article shows you how to use Virtual WAN to connect to your resources in Azure through a **Network Virtual Appliance** (NVA) in Azure. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see the [What is Virtual WAN?](virtual-wan-about.md). +This article shows you how to use Virtual WAN to connect to your resources in Azure through a **Network Virtual Appliance** (NVA) in Azure. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see [What is Virtual WAN?](virtual-wan-about.md) The steps in this article help you create a **Barracuda CloudGen WAN** Network Virtual Appliance in the Virtual WAN hub. To complete this exercise, you must have a Barracuda Cloud Premise Device (CPE) and a license for the Barracuda CloudGen WAN appliance that you deploy into the hub before you begin. In this section, you create a connection between your hub and VNet. ## Next steps -* To learn more about Virtual WAN, see the [What is Virtual WAN?](virtual-wan-about.md) page. +* To learn more about Virtual WAN, see [What is Virtual WAN?](virtual-wan-about.md) * To learn more about NVAs in a Virtual WAN hub, see [About Network Virtual Appliance in the Virtual WAN hub](about-nva-hub.md). |
virtual-wan | Monitor Point To Site Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-point-to-site-connections.md | |
virtual-wan | Monitor Virtual Wan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md | |
virtual-wan | Nat Rules Vpn Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway.md | |
virtual-wan | Openvpn Azure Ad Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-client.md | |
virtual-wan | Pricing Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/pricing-concepts.md | |
virtual-wan | Upgrade Virtual Wan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/upgrade-virtual-wan.md | description: You can upgrade your virtual WAN SKU type from Basic to Standard fo Previously updated : 04/29/2022 Last updated : 07/28/2023 |
virtual-wan | Virtual Wan Ipsec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-ipsec.md | |
vpn-gateway | Azure Vpn Client Optional Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md | description: Learn how to configure optional configuration settings for the Azur Previously updated : 11/22/2022 Last updated : 07/27/2023 You can add custom routes. Modify the downloaded profile XML file and add the ** ### Block (exclude) routes -You block (exclude) routes. Modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags. +The ability to completely block routes isn't supported by the Azure VPN Client. The Azure VPN Client doesn't support dropping routes from the local routing table. Instead, you can exclude routes from the VPN interface. Modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags. ```xml <azvpnprofile> |
vpn-gateway | Bgp Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-diagnostics.md | |
vpn-gateway | Openvpn Azure Ad Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-mfa.md | description: Learn how to enable multi-factor authentication (MFA) for VPN users Previously updated : 05/05/2021- Last updated : 07/28/2023+ # Enable Azure AD Multi-Factor Authentication (MFA) for VPN users-To connect to your virtual network, you must create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md). +To connect to your virtual network, you must create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md). |
vpn-gateway | Point To Site How To Vpn Client Install Azure Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-how-to-vpn-client-install-azure-cert.md | description: Learn how to install client certificates for P2S certificate authen Previously updated : 05/06/2022 Last updated : 07/28/2023 The Linux client certificate is installed on the client as part of the client co ## Next steps -Continue with the Point-to-Site configuration steps to create and install VPN client configuration files for [Windows](point-to-site-vpn-client-cert-windows.md), [macOS](point-to-site-vpn-client-cert-windows.md), or [Linux](point-to-site-vpn-client-cert-linux.md). +Continue with the Point-to-Site configuration steps to create and install VPN client configuration files for [Windows](point-to-site-vpn-client-cert-windows.md), [macOS](point-to-site-vpn-client-cert-windows.md), or [Linux](point-to-site-vpn-client-cert-linux.md). |
vpn-gateway | Vpn Gateway About Forced Tunneling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-forced-tunneling.md | |
vpn-gateway | Vpn Gateway Activeactive Rm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md | If you already have a VPN gateway, you can: You can combine these together to build a more complex, highly available network topology that meets your needs. > [!IMPORTANT]-> The active-active mode is available for all SKUs except Basic. +> The active-active mode is available for all SKUs except Basic or Standard. For more information, see [Configuration settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku). +> ## <a name ="aagateway"></a>Part 1 - Create and configure active-active VPN gateways |
vpn-gateway | Vpn Gateway Delete Vnet Gateway Classic Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md | description: Learn how to delete a virtual network gateway using PowerShell in t -+ Last updated 06/09/2023 |
vpn-gateway | Vpn Gateway Howto Always On Device Tunnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-always-on-device-tunnel.md | description: Learn how to use gateways with Windows 10 or later Always On to est Previously updated : 09/03/2020 Last updated : 07/28/2023 |
vpn-gateway | Vpn Gateway Howto Site To Site Classic Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md | |
vpn-gateway | Vpn Gateway Howto Vnet Vnet Portal Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md | description: Learn how to connect classic Azure virtual networks together using -+ Last updated 06/09/2023 |
vpn-gateway | Vpn Gateway Modify Local Network Gateway Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-portal.md | description: Learn how to change IP address prefixes and configure BGP Settings Previously updated : 06/13/2022 Last updated : 07/28/2023 |
vpn-gateway | Vpn Gateway Multi Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md | description: Learn how to connect multiple on-premises sites to a classic virtua -+ Last updated 06/09/2023 |
vpn-gateway | Vpn Gateway P2s Advertise Custom Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md | |
vpn-gateway | Vpn Profile Intune | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-profile-intune.md | description: Learn how to create an Intune custom profile to deploy Azure VPN cl Previously updated : 04/26/2021 Last updated : 07/28/2023 -For more information about point-to-site, see [About point-to-site](point-to-site-about.md). +For more information about point-to-site, see [About point-to-site](point-to-site-about.md). |
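
The snapshot entry above (*virtual-machines | Snapshot Copy Managed Disk*) points readers to the `New-AzSnapshotConfig` and `New-AzSnapshot` cmdlets without showing them in this summary. The following is a minimal sketch of how those cmdlets are typically combined, assuming a VM named *myVM* in the *myResourceGroup* resource group as stated in that entry; it isn't the linked article's exact sample, so adjust names, region, and disk selection for your environment.

```powershell
# Get the source VM so its OS disk ID and region can be reused.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'

# Build a snapshot configuration that copies the VM's OS managed disk
# in the same region as the source VM.
$snapshotConfig = New-AzSnapshotConfig `
    -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -Location $vm.Location `
    -CreateOption Copy

# Create the snapshot in the same resource group as the VM.
New-AzSnapshot `
    -ResourceGroupName 'myResourceGroup' `
    -SnapshotName 'myVM-os-snapshot' `
    -Snapshot $snapshotConfig
```

The snapshot name `myVM-os-snapshot` is an arbitrary example value; options such as incremental snapshots are controlled through additional `New-AzSnapshotConfig` parameters.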