Updates from: 06/30/2022 01:09:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 06/27/2022 Last updated : 06/29/2022
Within a Conditional Access policy, an administrator can make use of access controls to either grant or block access to resources.
-![Conditional Access policy with a grant control requiring multi-factor authentication](./media/concept-conditional-access-grant/conditional-access-grant.png)
## Block access
Block is a powerful control that should be wielded with appropriate knowledge.
Administrators can choose to enforce one or more controls when granting access. These controls include the following options:
-- [Require multi-factor authentication (Azure AD Multi-Factor Authentication)](../authentication/concept-mfa-howitworks.md)
+- [Require multifactor authentication (Azure AD Multi-Factor Authentication)](../authentication/concept-mfa-howitworks.md)
- [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started)
- [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md)
- [Require approved client app](app-based-conditional-access.md)
When administrators choose to combine these options, they can choose the following methods:
By default, Conditional Access requires all selected controls.
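For illustration, the grant controls of a Conditional Access policy can also be set through the Microsoft Graph API. The following is a minimal sketch rather than a production policy: the group and application IDs are placeholders, and the policy is created in report-only mode.

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
  "displayName": "Require MFA and compliant device for a sample app",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeGroups": ["11111111-1111-1111-1111-111111111111"] },
    "applications": { "includeApplications": ["22222222-2222-2222-2222-222222222222"] }
  },
  "grantControls": {
    "operator": "AND",
    "builtInControls": ["mfa", "compliantDevice"]
  }
}
```

Setting `operator` to `AND` mirrors the default behavior described above; `OR` would require only one of the selected controls.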
-### Require multi-factor authentication
+### Require multifactor authentication
-Selecting this checkbox will require users to perform Azure AD Multi-Factor Authentication. More information about deploying Azure AD Multi-Factor Authentication can be found in the article [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+Selecting this checkbox will require users to perform Azure AD Multifactor Authentication. More information about deploying Azure AD Multifactor Authentication can be found in the article [Planning a cloud-based Azure AD Multifactor Authentication deployment](../authentication/howto-mfa-getstarted.md).
-[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multi-factor authentication in Conditional Access policies.
+[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
### Require device to be marked as compliant
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated : 04/21/2022 Last updated : 06/29/2022
The sign-in frequency setting works with apps that have implemented OAuth2 or OIDC
The sign-in frequency setting works with third-party SAML applications and apps that have implemented OAuth2 or OIDC protocols, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on a regular basis.
-### User sign-in frequency and multi-factor authentication
+### User sign-in frequency and multifactor authentication
-Sign-in frequency previously applied to only to the first factor authentication on devices that were Azure AD joined, Hybrid Azure AD joined, and Azure AD registered. There was no easy way for our customers to re-enforce multi factor authentication (MFA) on those devices. Based on customer feedback, sign-in frequency will apply for MFA as well.
+Sign-in frequency previously applied only to the first-factor authentication on devices that were Azure AD joined, Hybrid Azure AD joined, and Azure AD registered. There was no easy way for our customers to re-enforce multifactor authentication (MFA) on those devices. Based on customer feedback, sign-in frequency will apply for MFA as well.
[![Sign in frequency and MFA](media/howto-conditional-access-session-lifetime/conditional-access-flow-chart-small.png)](media/howto-conditional-access-session-lifetime/conditional-access-flow-chart.png#lightbox)
The public preview supports the following scenarios:
- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
-- Require user reauthentication for risky sign-ins with the [require multi-factor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication) grant control.
+- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multifactor-authentication) grant control.
When administrators select **Every time**, it will require full reauthentication when the session is evaluated.
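As a sketch of how sign-in frequency surfaces in the Microsoft Graph API, it is part of a Conditional Access policy's session controls. The policy ID in the URL is a placeholder, and this fragment sets a one-hour frequency rather than **Every time**:

```http
PATCH https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policy-id}
Content-type: application/json

{
  "sessionControls": {
    "signInFrequency": {
      "isEnabled": true,
      "type": "hours",
      "value": 1
    }
  }
}
```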
Conditional Access is an Azure AD Premium capability and requires a premium license.
> [!WARNING]
> If you are using the [configurable token lifetime](../develop/active-directory-configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with the configurable token lifetime feature. Microsoft retired the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replaced it with the Conditional Access authentication session management feature.
>
-> Before enabling Sign-in Frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using Sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article, [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+> Before enabling Sign-in Frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using Sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article, [Optimize reauthentication prompts and understand session lifetime for Azure AD Multifactor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
## Policy deployment
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Python web app" description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API. -+
Last updated 11/22/2021 -+
>
> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
-> - [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://requests.kennethreitz.org/en/master/)
+> - [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://github.com/psf/requests/graphs/contributors)
> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
>
> #### Step 1: Configure your application in Azure portal
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
From any of the plan pages, use your browser's Print to PDF capability to create
| [Privileged Identity Management](../privileged-identity-management/pim-deployment-plan.md)| Azure AD Privileged Identity Management (PIM) helps you manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. PIM provides solutions like just-in-time access, request approval workflows, and fully integrated access reviews so you can identify, uncover, and prevent malicious activities of privileged roles in real time. |
| [Reporting and Monitoring](../reports-monitoring/plan-monitoring-and-reporting.md)| The design of your Azure AD reporting and monitoring solution depends on your legal, security, and operational requirements as well as your existing environment and processes. This article presents the various design options and guides you to the right deployment strategy. |
| [Access Reviews](../governance/deploy-access-reviews.md) | Access Reviews are an important part of your governance strategy, enabling you to know and manage who has access, and to what they have access. This article helps you plan and deploy access reviews to achieve your desired security and collaboration postures. |
+| [Identity governance for applications](../governance/identity-governance-applications-prepare.md) | As part of your organization's controls to meet your compliance and risk management objectives for managing access for critical applications, you can use Azure AD features to set up and enforce appropriate access.|
## Include the right stakeholders
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
Previously updated : 11/16/2021 Last updated : 06/30/2022
Once you have a better understanding of how your attributes will be organized an
To grant access to the appropriate people, follow these steps to assign one of the custom security attribute roles.
-#### Assign roles at attribute set scope
+### Assign roles at attribute set scope
+
+#### Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
To grant access to the appropriate people, follow these steps to assign one of the custom security attribute roles.
> [!NOTE]
> Users with attribute set scope role assignments currently can see other attribute sets and custom security attribute definitions.
-
-#### Assign roles at tenant scope
+
+#### PowerShell
+
+Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role. The following example assigns the Attribute Assignment Administrator role to a principal, scoped to the attribute set named Engineering.
+
+```powershell
+# Role definition ID of the Attribute Assignment Administrator built-in role
+$roleDefinitionId = "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d"
+# Scope the assignment to the attribute set named Engineering
+$directoryScope = "/attributeSets/Engineering"
+# Object ID of the user or service principal being assigned the role
+$principalId = "f8ca5a85-489a-49a0-b555-0a6d81e56f0d"
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinitionId -PrincipalId $principalId
+```
+
+#### Microsoft Graph API
+
+Use the [Create unified Role Assignment](/graph/api/rbacapplication-post-roleassignments?view=graph-rest-beta&preserve-view=true) API to assign the role. The following example assigns the Attribute Assignment Administrator role to a principal, scoped to the attribute set named Engineering.
+
+```http
+POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+Content-type: application/json
+
+{
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "roleDefinitionId": "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d",
+ "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
+ "directoryScopeId": "/attributeSets/Engineering"
+}
+```
+
+### Assign roles at tenant scope
+
+#### Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
To grant access to the appropriate people, follow these steps to assign one of the custom security attribute roles.
1. Add assignments for the custom security attribute roles.
+#### PowerShell
+
+Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role. For more information, see [Assign Azure AD roles at different scopes](../roles/assign-roles-different-scopes.md).
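+
+As a sketch, the call mirrors the attribute set example earlier, except that the directory scope is `/`, which indicates the whole tenant (the role definition and principal IDs are the same illustrative values):
+
+```powershell
+# "/" scopes the role assignment to the entire tenant
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId "/" -RoleDefinitionId "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d" -PrincipalId "f8ca5a85-489a-49a0-b555-0a6d81e56f0d"
+```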
+
+#### Microsoft Graph API
+
+Use the [Create unified Role Assignment](/graph/api/rbacapplication-post-roleassignments?view=graph-rest-beta&preserve-view=true) API to assign the role. For more information, see [Assign Azure AD roles at different scopes](../roles/assign-roles-different-scopes.md).
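+
+As a sketch, the request mirrors the attribute set example earlier, except that `directoryScopeId` is `/`, indicating a tenant-wide assignment:
+
+```http
+POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+Content-type: application/json
+
+{
+  "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+  "roleDefinitionId": "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d",
+  "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
+  "directoryScopeId": "/"
+}
+```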
+
## View audit logs for attribute changes

Sometimes you need information about custom security attribute changes, such as for auditing or troubleshooting purposes. Anytime someone makes changes to definitions or assignments, the changes get logged in the [Azure AD audit logs](../reports-monitoring/concept-audit-logs.md).
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
[Azure Active Directory (Azure AD) Identity Governance](identity-governance-overview.md) allows you to balance your organization's need for security and employee productivity with the right processes and visibility. It provides you with capabilities to ensure that the right people have the right access to the right resources.
-Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, only a subset of all the users in the organization will typically be authorized to have access, and access should only be permitted based on documented business requirements. Azure AD can be integrated with many popular SaaS applications, on-premises applications, and applications that your organization has developed, using [standard protocol](../fundamentals/auth-sync-overview.md) and API interfaces. Through these interfaces, Azure AD can be the authoritative source to control who has access to those applications. As you integrate your applications with Azure AD, you can then use Azure AD access reviews to recertify the users who have access to those applications, and remove access of those users who no longer need access.
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, only a subset of all the users in the organization will typically be authorized to have access, and access should only be permitted based on documented business requirements. Azure AD can be integrated with many popular SaaS applications, on-premises applications, and applications that your organization has developed, using [standard protocol](../fundamentals/auth-sync-overview.md) and API interfaces. Through these interfaces, Azure AD can be the authoritative source to control who has access to those applications. As you integrate your applications with Azure AD, you can then use Azure AD access reviews to recertify the users who have access to those applications, and remove access of those users who no longer need access. You can also use other features, including terms of use, conditional access and entitlement management, for governing access to applications, as described in [how to govern access to applications in your environment](identity-governance-applications-prepare.md).
## Prerequisites for reviewing access
Also, while not required for reviewing access to an application, we recommend al
In order for Azure AD access reviews to be used for an application, the application must first be integrated with Azure AD. An application being integrated with Azure AD means one of two requirements must be met:
* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign into the application. Those users that are denied by a review lose their application role assignment and can no longer get a new token to sign in to the application.
-* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM or by the application querying Azure AD via Microsoft Graph. Those users that are denied by a review lose their application role assignment or group membership, and when those changes are made available to the application, then the denied users will no longer have access.
+* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as System for Cross-Domain Identity Management (SCIM) or by the application querying Azure AD via Microsoft Graph. Those users that are denied by a review lose their application role assignment or group membership, and when those changes are made available to the application, then the denied users will no longer have access.
If neither of those criteria is met for an application, because the application doesn't rely upon Azure AD, access reviews can still be used; however, there may be some limitations. Users who aren't in your Azure AD, or who aren't assigned to the application roles in Azure AD, won't be included in the review. Also, the changes to remove denied users can't be sent automatically to the application if the application doesn't support a provisioning protocol. The organization must instead have a process to send the results of a completed review to the application.
In order to permit a wide variety of applications and IT requirements to be addressed
|:--|:--|:--|
|A| The application supports federated SSO, Azure AD is the only identity provider, and the application doesn't rely upon group or role claims. | In this pattern, you'll configure that the application requires individual application role assignments, and that users are assigned to the application. Then to perform the review, you'll create a single access review for the application, of the users assigned to this application role. When the review completes, if a user was denied, then they will be removed from the application role. Azure AD will then no longer issue that user with federation tokens and the user will be unable to sign into that application.|
|B|If the application uses group claims in addition to application role assignments.| An application may use Azure AD group membership, distinct from application roles to express finer-grained access. Here, you can choose based on your business requirements either to have the users who have application role assignments reviewed, or to review the users who have group memberships. If the groups do not provide comprehensive access coverage, in particular if users may have access to the application even if they aren't a member of those groups, then we recommend reviewing the application role assignments, as in pattern A above.|
-|C| If the application doesn't rely solely on Azure AD for federated SSO, but does support provisioning, via SCIM, or via updates to a SQL table of users or an LDAP directory. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments.|
+|C| If the application doesn't rely solely on Azure AD for federated SSO, but does support provisioning via SCIM, or via updates to a SQL table of users or an LDAP directory. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments. For more information, see [Governing an application's existing users](identity-governance-applications-existing-users.md) to update the application role assignments in Azure AD.|
### Other options
Now that you have identified the integration pattern for the application, check
1. If the application supports federated SSO, then change to the **Conditional Access** tab. Inspect the enabled policies for this application. If there are policies that are enabled, block access, have users assigned to the policies, but no other conditions, then those users may be already blocked from being able to get federated SSO to the application.
1. Change to the **Users and groups** tab. This list contains all the users who are assigned to the application in Azure AD. If the list is empty, then a review of the application will complete immediately, since there isn't any task for the reviewer to perform.
-1. If your application is integrated with pattern C, then you'll need to confirm that the users in this list are the same as those in the applications' internal data store, prior to starting the review. Azure AD does not automatically import the users or their access rights from an application, but you can [assign users to an application role via PowerShell](../manage-apps/assign-user-or-group-access-portal.md).
+1. If your application is integrated with pattern C, then you'll need to confirm that the users in this list are the same as those in the application's internal data store, prior to starting the review. Azure AD does not automatically import the users or their access rights from an application, but you can [assign users to an application role via PowerShell](../manage-apps/assign-user-or-group-access-portal.md), as shown in the sketch after these steps. See [Governing an application's existing users](identity-governance-applications-existing-users.md) for how to bring in users from different application data stores into Azure AD.
1. Check whether all users are assigned to the same application role, such as **User**. If users are assigned to multiple roles, then when you create an access review of the application, all assignments to all of the application's roles will be reviewed together.
1. Check the list of directory objects assigned to the roles to confirm that there are no groups assigned to the application roles. It's possible to review this application if there is a group assigned to a role; however, a user who is a member of the group assigned to the role, and whose access was denied, won't be automatically removed from the group. We recommend first converting the application to have direct user assignments, rather than members of groups, so that a user whose access is denied during the access review can have their application role assignment removed automatically.
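For pattern C, the following is a minimal sketch of assigning one user to an application role with the AzureAD PowerShell module; all three object IDs are placeholders that you would look up in your own tenant:

```powershell
# Placeholder object IDs: the user, the application's service principal,
# and the app role (from the service principal's AppRoles collection)
$userId = "11111111-1111-1111-1111-111111111111"
$servicePrincipalId = "22222222-2222-2222-2222-222222222222"
$appRoleId = "33333333-3333-3333-3333-333333333333"

# Assign the user to the application role
New-AzureADUserAppRoleAssignment -ObjectId $userId -PrincipalId $userId -ResourceId $servicePrincipalId -Id $appRoleId
```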
Once the reviews have started, you can monitor their progress, and update the ap
## Next steps

* [Plan an Azure Active Directory access reviews deployment](deploy-access-reviews.md)
-* [Create an access review of a group or application](create-access-review.md)
+* [Create an access review of a group or application](create-access-review.md)
+* [Govern access to applications](identity-governance-applications-prepare.md)
active-directory Access Reviews Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md
Azure AD enables you to collaborate with users from inside your organization and
- **Too many users in privileged roles:** It's a good idea to check how many users have administrative access, how many of them are Global Administrators, and if there are any invited guests or partners that have not been removed after being assigned to do an administrative task. You can recertify the role assignment users in [Azure AD roles](../privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) such as Global Administrators, or [Azure resources roles](../privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) such as User Access Administrator in the [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) experience.
- **When automation is not possible:** You can create rules for dynamic membership on security groups or Microsoft 365 Groups, but what if the HR data is not in Azure AD or if users still need access after leaving the group to train their replacement? You can then create a review on that group to ensure those who still need access should have continued access.
- **When a group is used for a new purpose:** If you have a group that is going to be synced to Azure AD, or if you plan to enable the application Salesforce for everyone in the Sales team group, it would be useful to ask the group owner to review the group membership prior to the group being used in a different risk context.
-- **Business critical data access:** for certain resources, it might be required to ask people outside of IT to regularly sign out and give a justification on why they need access for auditing purposes.
+- **Business critical data access:** For certain resources, such as [business critical applications](identity-governance-applications-prepare.md), it might be required as part of compliance processes to ask people to regularly reconfirm and give a justification on why they need continued access.
- **To maintain a policy's exception list:** In an ideal world, all users would follow the access policies to secure access to your organization's resources. However, sometimes there are business cases that require you to make exceptions. As the IT admin, you can manage this task, avoid oversight of policy exceptions, and provide auditors with proof that these exceptions are reviewed regularly.
- **Ask group owners to confirm they still need guests in their groups:** Employee access might be automated with some on premises Identity and Access Management (IAM), but not invited guests. If a group gives guests access to business sensitive content, then it's the group owner's responsibility to confirm the guests still have a legitimate business need for access.
- **Have reviews recur periodically:** You can set up recurring access reviews of users at set frequencies such as weekly, monthly, quarterly or annually, and the reviewers will be notified at the start of each review. Reviewers can approve or deny access with a friendly interface and with the help of smart recommendations. A Graph sketch of a recurring review follows this list.
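As an illustration of the recurring reviews mentioned above, a quarterly review can also be created with the Microsoft Graph accessReviewScheduleDefinition API. This is a hedged sketch: the group ID and start date are placeholders, and reviewers are omitted so that users review their own access.

```http
POST https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions
Content-type: application/json

{
  "displayName": "Quarterly review of group membership",
  "scope": {
    "@odata.type": "#microsoft.graph.accessReviewQueryScope",
    "query": "/groups/44444444-4444-4444-4444-444444444444/transitiveMembers",
    "queryType": "MicrosoftGraph"
  },
  "settings": {
    "mailNotificationsEnabled": true,
    "recommendationsEnabled": true,
    "instanceDurationInDays": 25,
    "recurrence": {
      "pattern": { "type": "absoluteMonthly", "interval": 3 },
      "range": { "type": "noEnd", "startDate": "2022-07-01" }
    }
  }
}
```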
Azure AD enables you to collaborate with users from inside your organization and
## Where do you create reviews?
-Depending on what you want to review, you will create your access review in Azure AD access reviews, Azure AD enterprise apps (in preview), or Azure AD PIM.
+Depending on what you want to review, you will create your access review in Azure AD access reviews, Azure AD enterprise apps (in preview), Azure AD PIM, or Azure AD entitlement management.
| Access rights of users | Reviewers can be | Review created in | Reviewer experience |
| --- | --- | --- | --- |
Depending on what you want to review, you will create your access review in Azure AD access reviews, Azure AD enterprise apps (in preview), Azure AD PIM, or Azure AD entitlement management.
| Assigned to a connected app | Specified reviewers</br>Self-review | Azure AD access reviews</br>Azure AD enterprise apps (in preview) | Access panel |
| Azure AD role | Specified reviewers</br>Self-review | [Azure AD PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) | Azure portal |
| Azure resource role | Specified reviewers</br>Self-review | [Azure AD PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) | Azure portal |
+| Access package assignments | Specified reviewers</br>Group members</br>Self-review | Azure AD entitlement management | Access panel |
## License requirements
Here are some example license scenarios to help you determine the number of licenses
## Next steps
+- [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
- [Create an access review of groups or applications](create-access-review.md)
- [Create an access review of users in an Azure AD administrative role](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json)
- [Review access to groups or applications](perform-access-review.md)
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
For a demonstration of how to add a multi-stage approval to a request policy, watch the following video:
>[!VIDEO https://www.microsoft.com/videoplayer/embed/RE4d1Jw]
-## Change approval settings of an existing access package
+## Change approval settings of an existing access package assignment policy
-Follow these steps to specify the approval settings for requests for the access package:
+Follow these steps to specify the approval settings for requests for the access package through a policy:
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
-#Customer intent: As an administrator, I want detailed information about how I can edit an access package to include requestor infromation to screen requestors and get requestors the resources they need to perform their job.
+#Customer intent: As an administrator, I want detailed information about how I can edit an access package to include requestor information to screen requestors and get requestors the resources they need to perform their job.
# Change lifecycle settings for an access package in Azure AD entitlement management

As an access package manager, you can change the lifecycle settings for assignments in an access package at any time by editing an existing policy. If you change the expiration date for assignments on a policy, the expiration date for requests that are already in a pending approval or approved state will not change.
-This article describes how to change the lifecycle settings for an existing access package.
+This article describes how to change the lifecycle settings for an existing access package assignment policy.
## Open requestor information

To ensure users have the right access to an access package, custom questions can be configured to ask users requesting access to certain access packages. Configuration options include: localization, required/optional, and text/multiple choice answer formats. Requestors will see the questions when they request the package and approvers see the answers to the questions to help them make their decision. Use the following steps to configure questions in an access package:
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
# Change request settings for an access package in Azure AD entitlement management
-As an access package manager, you can change the users who can request an access package at any time by editing the policy or adding a new policy. This article describes how to change the request settings for an existing access package.
+As an access package manager, you can change the users who can request an access package at any time by editing a policy for access package assignment requests, or adding a new policy to the access package. This article describes how to change the request settings for an existing access package assignment policy.
## Choose between one or multiple policies

The way you specify who can request an access package is with a policy. Before creating a new policy or editing an existing policy in an access package, you need to determine how many policies the access package needs.
-When you create an access package, you specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
+When you create an access package, you can specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy for users to request access, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
For example, a single policy cannot be used to assign internal and external users to the same access package.
| | |
| I want all users in my directory to have the same request and approval settings for an access package | One |
| I want all users in certain connected organizations to be able to request an access package | One |
-| I want to allow users in my directory and also users outside my directory to request an access package | Multiple |
-| I want to specify different approval settings for some users | Multiple |
-| I want some users access package assignments to expire while other users can extend their access | Multiple |
+| I want to allow users in my directory and also users outside my directory to request an access package | Two |
+| I want to specify different approval settings for some users | One for each group of users |
+| I want some users' access package assignments to expire while other users can extend their access | One for each group of users |
+| I want users to request access and other users to be assigned access by an administrator | Two |
For information about the priority logic that is used when multiple policies apply, see [Multiple policies](entitlement-management-troubleshoot.md#multiple-policies).
-## Open an existing access package and add a new policy of request settings
+## Open an existing access package and add a new policy with different request settings
If you have a set of users that should have different request and approval settings, you'll likely need to create a new policy. Follow these steps to start adding a new policy to an existing access package:
Follow these steps if you want to bypass access requests and allow administrators to directly assign specific users to an access package:
> When assigning users to an access package, administrators will need to verify that the users are eligible for that access package based on the existing policy requirements. Otherwise, the users won't successfully be assigned to the access package. If the access package contains a policy that requires user requests to be approved, users can't be directly assigned to the package without necessary approval(s) from the designated approver(s).
-## Open and edit an existing policy of request settings
+## Open and edit an existing policy's request settings
-To change the request and approval settings for an access package, you need to open the corresponding policy. Follow these steps to open and edit the request settings for an access package:
+To change the request and approval settings for an access package, you need to open the corresponding policy with those settings. Follow these steps to open and edit the request settings for an access package assignment policy:
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
# Create an access review of an access package in Azure AD entitlement management
-To reduce the risk of stale access, you should enable periodic reviews of users who have active assignments to an access package in Azure AD entitlement management. You can enable reviews when you create a new access package or edit an existing access package. This article describes how to enable access reviews of access packages.
+To reduce the risk of stale access, you should enable periodic reviews of users who have active assignments to an access package in Azure AD entitlement management. You can enable reviews when you create a new access package or edit an existing access package assignment policy. This article describes how to enable access reviews of access packages.
## Prerequisites
For more information, see [License requirements](entitlement-management-overview
## Create an access review of an access package
-You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package](entitlement-management-access-package-lifecycle-policy.md) policy. Follow these steps to enable access reviews of an access package:
+You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package assignment policy](entitlement-management-access-package-lifecycle-policy.md). If you have multiple policies for different communities of users to request access, you can have an independent access review schedule for each policy. Follow these steps to enable access reviews of an access package's assignments:
-1. Open the **Lifecycle** tab for an access package to specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments.
+1. Open the **Lifecycle** tab for an access package assignment policy to specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments.
1. In the **Expiration** section, set Access package assignments expires to **On date**, **Number of days**, **Number of hours**, or **Never**. A sketch of how these lifecycle settings can be expressed in the Microsoft Graph API follows these steps.
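For reference, these lifecycle settings correspond to fields on the access package assignment policy in the Microsoft Graph beta API. The following is a hedged sketch of a 90-day policy with a quarterly self-review; the access package ID is a placeholder:

```http
POST https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentPolicies
Content-type: application/json

{
  "accessPackageId": "55555555-5555-5555-5555-555555555555",
  "displayName": "Employees - 90 day assignments",
  "description": "Assignments expire after 90 days and are reviewed quarterly",
  "durationInDays": 90,
  "canExtend": false,
  "accessReviewSettings": {
    "isEnabled": true,
    "recurrenceType": "quarterly",
    "reviewerType": "Self",
    "durationInDays": 25
  }
}
```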
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a new tab within access package policies.
> [!NOTE]
> Select **New access package** if you want to create a new access package.
- > For more information about how to create an access package see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policy-of-request-settings).
+ > For more information about how to create an access package see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policys-request-settings).
1. Change to the policy tab, select the policy and select **Edit**.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Azure AD entitlement management can help address these challenges. To learn more
Here are some of the capabilities of entitlement management:
+- Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users do not retain access indefinitely through time-limited assignments and recurring access reviews.
- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires.
- Select connected organizations whose users can request access. When a user who is not yet in your directory requests access, and is approved, they are automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your organization.
### Access package

1. [Watch video: Day-to-day management: Things have changed](https://www.microsoft.com/videoplayer/embed/RE3LD4Z)
-1. [Open an existing policy of request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
-1. [Update the approval settings](entitlement-management-access-package-approval-policy.md#change-approval-settings-of-an-existing-access-package)
+1. [Open an existing policy's request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
+1. [Update the approval settings](entitlement-management-access-package-approval-policy.md#change-approval-settings-of-an-existing-access-package-assignment-policy)
### Access package

1. [Watch video: Day-to-day management: Things have changed](https://www.microsoft.com/videoplayer/embed/RE3LD4Z)
1. [Remove users that no longer need access](entitlement-management-access-package-assignments.md)
-1. [Open an existing policy of request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
+1. [Open an existing policy's request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
1. [Add users that need access](entitlement-management-access-package-request-policy.md#for-users-in-your-directory)

### Access package
-1. [If users need different lifecycle settings, add a new policy to the access package](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
+1. [If users need different lifecycle settings, add a new policy to the access package](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
1. [Directly assign specific users to the access package](entitlement-management-access-package-assignments.md#directly-assign-a-user)

## Assignments and reports
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
+
+ Title: Define organizational policies for governing access to applications in your environment - Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can define policies for how users should obtain access to your business critical applications integrated with Azure AD.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
+ Last updated : 6/28/2022
+# Define organizational policies for governing access to applications in your environment
+
+Once you've identified one or more applications that you want to use Azure AD to [govern access](identity-governance-applications-prepare.md), write down the organization's policies for determining which users should have access, and any other constraints that the system should provide.
+
+## Identify applications and their roles in scope
+
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. If an application is one that's already in use in your environment, you may already have documented the access policies for who 'should have access' to it. If not, you may need to consult with various stakeholders, such as compliance and risk management teams, to ensure that the policies being used to automate access decisions are appropriate for your scenario.
+
+1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically place broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD through provisioning or through claims issued using federation SSO protocols. Finally, there may be roles that don't surface in Azure AD - perhaps the application doesn't permit defining the administrators in Azure AD, relying instead upon its own authorization rules to identify administrators.
+ > [!Note]
+ > If you're using an application from the Azure AD application gallery that supports provisioning, then Azure AD may import the roles defined in the application and update the application manifest with those roles automatically, once provisioning is configured.
+
+1. **Select which roles and groups have membership that is to be governed in Azure AD.** Based on compliance and risk management requirements, organizations often prioritize those roles or groups that give privileged access or access to sensitive information.
+
+## Define the organization's policy with prerequisites and other constraints for access to the application
+
+In this section, you'll write down the organizational policies you plan to use to determine access to the application. You can record this as a table in a spreadsheet, for example:
+
+|Role|Prerequisite for access|Approvers|Default duration of access|Separation of duties constraints|Conditional access policies|
+|:--|-|-|-|-|-|
+|*Western Sales*|Member of sales team|user's manager|Yearly review|Cannot have *Eastern Sales* access|Multifactor authentication (MFA) and registered device required for access|
+|*Western Sales*|Any employee outside of sales|head of Sales department|90 days|N/A|MFA and registered device required for access|
+|*Western Sales*|Non-employee sales rep|head of Sales department|30 days|N/A|MFA required for access|
+|*Eastern Sales*|Member of sales team|user's manager|Yearly review|Cannot have *Western Sales* access|MFA and registered device required for access|
+|*Eastern Sales*|Any employee outside of sales|head of Sales department|90 days|N/A|MFA and registered device required for access|
+|*Eastern Sales*|Non-employee sales rep|head of Sales department|30 days|N/A|MFA required for access|
+
+1. **Identify if there are prerequisite requirements, standards that a user must meet before they're given access to an application.** For example, under normal circumstances, only full time employees, or those in a particular department or cost center, should be allowed to have access to a particular department's application. Also, you may require the entitlement management policy for a user from some other department requesting access to have one or more additional approvers. While having multiple stages of approval may slow the overall process of a user gaining access, these extra stages ensure access requests are appropriate and decisions are accountable. For example, requests for access by an employee could have two stages of approval, first by the requesting user's manager, and second by one of the resource owners responsible for data held in the application. A sketch of such a two-stage approval appears after this list.
+
+1. **Determine how long a user who has been approved for access should have access, and when that access should go away.** For many applications, a user might retain access indefinitely, until they're no longer affiliated with the organization. In some situations, access may be tied to particular projects or milestones, so that when the project ends, access is removed automatically. Or, if only a few users are using an application through a policy, you may configure quarterly or yearly reviews of everyone's access through that policy, so that there's regular oversight. These processes can ensure users lose access eventually when access is no longer needed, even if there isn't a pre-determined project end date.
+
+1. **Inquire if there are separation of duties constraints.** For example, you may have an application with two roles, *Western Sales* and *Eastern Sales*, and you want to ensure that a user can only have one sales territory at a time. Include a list of any pairs of roles that are incompatible for your application, so that if a user has one role, they aren't allowed to request the second role.
+
+1. **Select the appropriate conditional access policy for access to the application.** We recommend that you analyze your applications and group them into applications that have the same resource requirements for the same users. If this is the first federated SSO application you're integrating with Azure AD for identity governance, you may need to create a new conditional access policy to express constraints, such as requirements for multifactor authentication (MFA) or location-based access. You can require users to agree to [a terms of use](../conditional-access/require-tou.md). See [plan a conditional access deployment](../conditional-access/plan-conditional-access.md) for more considerations on how to define a conditional access policy.
+
+1. **Determine how exceptions to your criteria should be handled.** For example, an application may typically only be available for designated employees, but an auditor or vendor may need temporary access for a specific project. Or, an employee who is traveling may require access from a location that is normally blocked as your organization has no presence in that location. In these situations, you may choose to also have an entitlement management policy for approval that may have different stages, or a different time limit, or a different approver. A vendor who is signed in as a guest user in your Azure AD tenant may not have a manager, so instead their access requests could be approved by a sponsor for their organization, or by a resource owner, or a security officer.
+
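+For illustration, the two-stage approval described above can be expressed in the `requestApprovalSettings` of an entitlement management assignment policy in the Microsoft Graph beta API. This is a sketch only; the access package ID and the second-stage approver's user ID are placeholders:
+
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentPolicies
+Content-type: application/json
+
+{
+  "accessPackageId": "55555555-5555-5555-5555-555555555555",
+  "displayName": "Employees - manager and resource owner approval",
+  "requestApprovalSettings": {
+    "isApprovalRequired": true,
+    "approvalMode": "Serial",
+    "approvalStages": [
+      {
+        "approvalStageTimeOutInDays": 14,
+        "isApproverJustificationRequired": true,
+        "primaryApprovers": [
+          { "@odata.type": "#microsoft.graph.requestorManager", "managerLevel": 1 }
+        ]
+      },
+      {
+        "approvalStageTimeOutInDays": 14,
+        "isApproverJustificationRequired": true,
+        "primaryApprovers": [
+          { "@odata.type": "#microsoft.graph.singleUser", "id": "66666666-6666-6666-6666-666666666666" }
+        ]
+      }
+    ]
+  }
+}
+```
+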
+While the organizational policy for who should have access is being reviewed by the stakeholders, you can begin [integrating the application](identity-governance-applications-integrate.md) with Azure AD. That way, at a later step you'll be ready to [deploy the organization-approved policies](identity-governance-applications-deploy.md) for access in Azure AD identity governance.
+
+## Next steps
+
+- [Integrate an application with Azure AD](identity-governance-applications-integrate.md)
+- [Deploy governance policies](identity-governance-applications-deploy.md)
+
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
+
+ Title: Deploying policies for governing access to applications integrated with Azure AD | Microsoft Docs
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can use entitlement management and other identity governance features to enforce the policies for access.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
+ Last updated : 6/28/2022
+# Deploying organizational policies for governing access to applications integrated with Azure AD
++
+In previous sections, you [defined your governance policies for an application](identity-governance-applications-define.md) and [integrated that application with Azure AD](identity-governance-applications-integrate.md). In this section, you'll configure the Azure AD conditional access and entitlement management features to control ongoing access to your applications. You'll establish:
+* Conditional access policies, for how a user authenticates to Azure AD for an application integrated with Azure AD for single sign-on
+* Entitlement management policies, for how a user obtains and keeps assignments to application roles and membership in groups
+* Access review policies, for how often group memberships are reviewed
+
+Once these policies are deployed, you can then monitor the ongoing behavior of Azure AD as users request and are assigned access to the application.
+
+## Deploy conditional access policies for SSO enforcement
+
+In this section, you'll establish the Conditional Access policies that are in scope for determining whether an authorized user is able to sign into the app, based on factors like the user's authentication strength or device status.
+
+Conditional access is only possible for applications that rely upon Azure AD for single sign-on (SSO). If the application isn't able to be integrated for SSO, then continue in the next section.
+
+1. **Upload the terms of use (TOU) document, if needed.** If you require users to accept a terms of use (TOU) prior to accessing the application, then create and [upload the TOU document](../conditional-access/terms-of-use.md) so that it can be included in a conditional access policy.
+1. **Verify users are ready for Azure Active Directory Multi-Factor Authentication.** We recommend requiring Azure AD Multi-Factor Authentication for business-critical applications integrated via federation. For these applications, there should be a policy that requires the user to have met a multi-factor authentication requirement prior to Azure AD permitting them to sign into the application. Some organizations may also block access by locations, or [require the user to access from a registered device](../conditional-access/howto-conditional-access-policy-compliant-device.md). If there's no suitable policy already that includes the necessary conditions for authentication, location, device, and TOU, then [add a policy to your conditional access deployment](../conditional-access/plan-conditional-access.md).
+1. **Bring the application into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to apply to this application as well, to avoid having a large number of policies. Once you've made the updates, check that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md); a PowerShell spot check is also sketched after this list.
+1. **Create a recurring access review if any users will need temporary policy exclusions**. In some cases, it may not be possible to immediately enforce conditional access policies for every authorized user. For example, some users may not have an appropriate registered device. If it's necessary to exclude one or more users from the Conditional Access policy and allow them access, then configure an access review for the group of [users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md).
+1. **Document the token lifetime and applications' session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
+
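+To supplement the what if tool, you can also spot-check which Conditional Access policies include a given application from PowerShell. The following is a minimal sketch, assuming the `Microsoft.Graph.Identity.SignIns` module is installed; the application ID shown is a placeholder for your application's client ID.
+
+```powershell
+# List the Conditional Access policies that include a specific application,
+# either explicitly or via the "All" applications condition.
+Connect-MgGraph -Scopes "Policy.Read.All"
+$appId = "00000000-0000-0000-0000-000000000000"   # placeholder client ID
+Get-MgIdentityConditionalAccessPolicy -All |
+    Where-Object {
+        $_.Conditions.Applications.IncludeApplications -contains $appId -or
+        $_.Conditions.Applications.IncludeApplications -contains "All"
+    } |
+    Format-Table DisplayName, State
+```
+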
+## Deploy entitlement management policies for automating access assignment
+
+In this section, you'll configure Azure AD entitlement management so users can request access to your application's roles or to groups used by the application. In order to perform these tasks, you'll need to be in the *Global Administrator* or *Identity Governance Administrator* role, or be [delegated as a catalog creator](entitlement-management-delegate-catalog.md) and the owner of the application.
+
+1. **Access packages for governed applications should be in a designated catalog.** If you don't already have a catalog for your application governance scenario, [create a catalog](../governance/entitlement-management-catalog-create.md) in Azure AD entitlement management.
+1. **Populate the catalog with necessary resources.** Add the application, as well as any Azure AD groups that the application relies upon, [as resources in that catalog](../governance/entitlement-management-catalog-create.md).
+1. **Create an access package for each role or group that users can request.** For each of the application's roles or groups, [create an access package](../governance/entitlement-management-access-package-create.md) that includes that role or group as its resource. At this stage of configuring that access package, configure the access package assignment policy for direct assignment, so that only administrators can create assignments. In that policy, set the access review requirements for existing users, if any, so that they don't keep access indefinitely.
+1. **Configure access packages to enforce separation of duties requirements.** If you have [separation of duties](entitlement-management-access-package-incompatible.md) requirements, then configure the incompatible access packages or existing groups for your access package. If your scenario requires the ability to override a separation of duties check, then you can also [set up additional access packages for those override scenarios](entitlement-management-access-package-incompatible.md#configuring-multiple-access-packages-for-override-scenarios).
+1. **Add assignments to the access packages for existing users who already have access to the application.** For each access package, assign existing users of the application in that role, or members of that group, to the access package. You can [directly assign a user](entitlement-management-access-package-assignments.md) to an access package using the Azure portal, or in bulk via Graph or PowerShell, as sketched after this list.
+1. **Create policies for users to request access.** In each access package, [create additional access package assignment policies](../governance/entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings) for users to request access. Configure the approval and recurring access review requirements in that policy.
+1. **Create recurring access reviews for other groups used by the application.** If there are groups that are used by the application but aren't resource roles for an access package, then [create access reviews](create-access-review.md) for the membership of those groups.
+
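+As a sketch of the bulk assignment mentioned above, the following uses the Microsoft Graph PowerShell entitlement management cmdlets to directly assign one user to an access package. This assumes the `Microsoft.Graph.Identity.Governance` module is installed, and all three IDs are placeholders for values from your tenant.
+
+```powershell
+# Request a direct ("adminAdd") assignment of a user to an access package
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
+$params = @{
+    requestType = "adminAdd"
+    assignment  = @{
+        targetId           = "00000000-0000-0000-0000-000000000000"  # user object ID
+        assignmentPolicyId = "11111111-1111-1111-1111-111111111111"  # direct assignment policy ID
+        accessPackageId    = "22222222-2222-2222-2222-222222222222"  # access package ID
+    }
+}
+New-MgEntitlementManagementAssignmentRequest -BodyParameter $params
+```
+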
+## View reports on access
+
+Azure AD, in conjunction with Azure Monitor, provides several reports to help you understand who has access to an application and if they're using that access.
+
+* An administrator, or a catalog owner, can [retrieve the list of users who have access package assignments](entitlement-management-access-package-assignments.md), via the Azure portal, Graph or PowerShell.
+* You can also send the audit logs to Azure Monitor and view a history of [changes to the access package](entitlement-management-logs-and-reporting.md#view-events-for-an-access-package), in the Azure portal, or via PowerShell.
+* You can view the last 30 days of sign-ins to an application in the [sign-ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http), as sketched after this list.
+* You can also send the [sign-in logs to Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) to archive sign-in activity for up to two years.
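+
+As a sketch of retrieving recent sign-ins from PowerShell, assuming the `Microsoft.Graph.Reports` module is installed and `CORPDB1` is a placeholder for your application's display name:
+
+```powershell
+# Retrieve the most recent sign-ins to one application
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+Get-MgAuditLogSignIn -Filter "appDisplayName eq 'CORPDB1'" -Top 50 |
+    Format-Table CreatedDateTime, UserPrincipalName, AppDisplayName
+```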
+
+## Monitor to adjust entitlement management policies and access as needed
+
+At regular intervals, such as weekly, monthly or quarterly, based on the volume of application access assignment changes for your application, use the Azure portal to ensure that access is being granted in accordance with the policies. You can also ensure that the identified users for approval and review are still the correct individuals for these tasks.
+
+* **Watch for application role assignments and group membership changes.** If you have Azure AD configured to send its audit log to Azure Monitor, use the `Application role assignment activity` in Azure Monitor to [monitor and report on any application role assignments that weren't made through entitlement management](../governance/entitlement-management-access-package-incompatible.md#monitor-and-report-on-access-assignments). If there are role assignments that were created by an application owner directly, you should contact that application owner to determine if that assignment was authorized. In addition, if the application relies upon Azure AD security groups, monitor for changes to those groups as well.
+
+* **Also watch for users granted access directly within the application.** If the following conditions are met, then it's possible for a user to obtain access to an application without being part of Azure AD, or without being added to the application's user account store by Azure AD:
+
+ * The application has a local user account store
+ * The user account store is in a database or in an LDAP directory
+ * The application doesn't rely solely upon Azure AD for single sign-on
+
+ For an application with the properties in the previous list, you should regularly check that users were only added to the application's local user store through Azure AD provisioning. If users were created directly in the application, contact the application owner to determine if that access was authorized.
+
+* **Ensure approvers and reviewers are kept up to date.** For each access package that you configured in the previous section, ensure the access package assignment policies continue to have the correct approvers and reviewers. Update those policies if the approvers and reviewers that were previously configured are no longer present in the organization, or are in a different role.
+
+* **Validate that reviewers are making decisions during a review.** Monitor that [recurring access reviews for those access packages](entitlement-management-access-package-lifecycle-policy.md) are completing successfully, to ensure reviewers are participating and making decisions to approve or deny users' continued need for access.
+
+* **Check that provisioning and deprovisioning are working as expected.** If you had previously configured provisioning of users to the application, then when the results of a review are applied, or a user's assignment to an access package expires, Azure AD will begin deprovisioning denied users from the application. You can [monitor the process of deprovisioning users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md). If provisioning indicates an error with the application, you can [download the provisioning log](../reports-monitoring/concept-provisioning-logs.md) to investigate if there was a problem with the application. A sketch for listing recent provisioning events follows this list.
+
+* **Update the Azure AD configuration with any role or group changes in the application.** If the application adds new roles, updates existing roles, or relies upon additional groups, then you'll need to update the access packages and access reviews to account for those new roles or groups.
+
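+As a sketch of the provisioning event check mentioned above, assuming the `Microsoft.Graph.Reports` module is installed:
+
+```powershell
+# List recent provisioning events, so deprovisioning failures stand out
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+Get-MgAuditLogProvisioning -Top 20 |
+    Format-Table ActivityDateTime,
+        @{ n = 'Action'; e = { $_.ProvisioningAction } },
+        @{ n = 'Status'; e = { $_.ProvisioningStatusInfo.Status } }
+```
+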
+## Next steps
+
+- [Access reviews deployment plan](deploy-access-reviews.md)
+
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
+
+ Title: Governing an application's existing users in Azure AD with Microsoft PowerShell
+description: Planning for a successful access reviews campaign for a particular application includes identifying if there are any users in that application whose access doesn't derive from Azure AD.
+
+documentationCenter: ''
+editor:
+Last updated : 06/24/2022
+
+#Customer intent: As an IT admin, I want to ensure access to specific applications is governed, by setting up access reviews for those applications. For this, I need to have in Azure AD the existing users of that application assigned to the application.
+
+# Governing an application's existing users - Microsoft PowerShell
+
+There are two common scenarios in which it's necessary to populate Azure Active Directory (Azure AD) with existing users of an application, prior to using the application with an Azure AD identity governance feature such as [access reviews](access-reviews-application-preparation.md).
+
+### Application migrated to Azure AD after using its own identity provider
+
+The first scenario is one in which the application already exists in the environment, and previously used its own identity provider or data store to track which users had access. When you change the application to rely upon Azure AD, only users who are in Azure AD and permitted access to that application can access it. As part of that configuration change, you can choose to bring the existing users from that application's data store into Azure AD, so that those users continue to have access through Azure AD. Having the users associated with the application represented in Azure AD enables Azure AD to track users with access to the application, even though the user's relationship with the application originated elsewhere, such as in an application's database or directory. Once Azure AD is aware of a user's assignment, Azure AD will be able to send updates to the application's data store when that user's attributes change, or when the user goes out of scope of the application.
+
+### Application that doesn't use Azure AD as its only identity provider
+
+The second scenario is one in which an application doesn't solely rely upon Azure AD as its identity provider. In some cases, an application might support multiple identity providers, or have its own built-in credential storage. This scenario is described as Pattern C in [preparing for an access review of user's access to an application](access-reviews-application-preparation.md). If it isn't feasible to remove other identity providers or local credential authentication from the application, then in order to be able to use Azure AD to review who has access to that application, or to remove someone's access from that application, you'll need to create assignments in Azure AD that represent access for those users of the application who don't rely upon Azure AD for authentication. Having these assignments is necessary if you plan to review all users with access to the application as part of an access review.
+
+For example, suppose a user is in the application's data store, and Azure AD is configured to require role assignments to the application, but the user doesn't have an application role assignment in Azure AD. If the user is updated in Azure AD, no changes will be sent to the application. And if the application's role assignments are reviewed, the user won't be included in the review. To have all users included in the review, it's necessary to have application role assignments for all users of the application.
+
+## Terminology
+
+This article illustrates the process for managing application role assignments using the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) and so uses Microsoft Graph terminology.
+
+![Terminology](./media/identity-governance-applications-existing-users/data-model-terminology.png)
+
+In Azure AD, a `ServicePrincipal` represents an application in a particular organization's directory. The `ServicePrincipal` has a property `AppRoles` that lists the roles an application supports, such as `Marketing specialist`. An `AppRoleAssignment` links a `User` to a `ServicePrincipal` and specifies which role that user has in that application.
+
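+For illustration, the following sketch inspects these objects for an application named `CORPDB1` (a placeholder), using the same cmdlets that appear later in this article:
+
+```powershell
+# Inspect an application's roles and the assignments of users to those roles
+Connect-MgGraph -Scopes "Application.Read.All"
+$sp = Get-MgServicePrincipal -Filter "displayName eq 'CORPDB1'"
+$sp.AppRoles | Format-Table DisplayName, Id            # roles the app supports
+Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id -All |
+    Format-Table PrincipalDisplayName, PrincipalId, AppRoleId   # who holds each role
+```
+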
+You may also be using [Azure AD entitlement management](entitlement-management-overview.md) access packages to give users time-limited access to the application. In entitlement management, an `AccessPackage` contains one or more resource roles, potentially from multiple service principals, and has `Assignment` objects for the users assigned to the access package. When you create an assignment for a user to an access package, Azure AD entitlement management automatically creates the necessary `AppRoleAssignment` for the user to each application. For more information, see the [Manage access to resources in Azure AD entitlement management](/powershell/microsoftgraph/tutorial-entitlement-management) tutorial on how to create access packages through PowerShell.
+
+## Before you begin
+
+- You must have one of the following licenses in your tenant:
+
+ - Azure AD Premium P2
+ - Enterprise Mobility + Security (EMS) E5 license
+
+- You'll need to have an appropriate administrative role. If this is the first time you're performing these steps, you'll need the `Global administrator` role to authorize the use of Microsoft Graph PowerShell in your tenant.
+- There needs to be a service principal for your application in your tenant.
+
+ - If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) through the section to Download, install, and configure the Azure AD Connect Provisioning Agent Package.
+ - If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) through the section to Download, install and configure the Azure AD Connect Provisioning Agent Package.
+
+## Collect existing users from an application
+
+The first step to ensuring all users are recorded in Azure AD is to collect the list of existing users who have access to the application. Some applications may have a built-in command to export a list of current users from their data store. In other cases, the application may rely upon an external directory or database. In some environments, the application may be located on a network segment or system that isn't appropriate for managing access to Azure AD, so you might need to extract the list of users from that directory or database, and then transfer it as a file to another system that can be used for Azure AD interactions. This section explains three approaches for getting the list of users into a comma-separated values (CSV) file:
+
+* From an LDAP directory
+* From a SQL Server database
+* From another SQL-based database
+
+### Collect existing users from an application that uses an LDAP directory
+
+This section applies to applications that use an LDAP directory as the underlying data store for users who don't authenticate to Azure AD.
+
+Many LDAP directories, such as Active Directory, include a command that outputs a list of users.
+
+1. Identify which of the users in that directory are in scope as users of the application. This choice will depend upon your application's configuration. For some applications, any user who exists in an LDAP directory is a valid user. Other applications may require the user to have a particular attribute or be a member of a group in that directory.
+
+1. Run the command that retrieves that subset of users from your directory. Ensure that the output includes the attributes of users that will be used for matching with Azure AD, such as an employee ID, account name, or email address. For example, this command would produce a CSV file in the current directory with the `userPrincipalName` attribute of every person in the directory.
+
+ ```powershell
+ $out_filename = ".\users.csv"
+ csvde -f $out_filename -l userPrincipalName,cn -r "(objectclass=person)"
+ ```
+1. If needed, transfer the CSV file containing the list of users to a system with the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
+1. Continue reading at the section below, **Confirm Azure AD has users for each user from the application**.
+
+### Collect existing users from an application's database table using a SQL Server wizard
+
+This section applies to applications that use SQL Server as their underlying data store.
+
+First, get a list of the users from the tables. Most databases provide a way to export the contents of tables to a standard file format, such as to a CSV file. If the application uses a SQL Server database, you can use the **SQL Server Import and Export Wizard** to export portions of a database. If you don't have a utility for your database, you can use the ODBC driver with PowerShell, described in the next section.
+
+1. Log in to the system where SQL Server is installed.
+1. Launch **SQL Server 2019 Import and Export (64 bit)** or the equivalent for your database.
+1. Select the existing database as the source.
+1. Select **Flat File Destination** as the destination. Provide a file name, and change the **Code page** to **65001 (UTF-8)**.
+1. Complete the wizard, and select to run immediately.
+1. Wait for the execution to complete.
+1. If needed, transfer the CSV file containing the list of users to a system with the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
+1. Continue reading at the section below, **Confirm Azure AD has users for each user from the application**.
+
+### Collect existing users from an application's database table using PowerShell
+
+This section applies to applications that use another SQL database as their underlying data store, where you're using the [ECMA Connector Host](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) to provision users into that application. If you haven't yet configured the provisioning agent, use that guide to create the DSN connection file you'll use in this section.
+
+1. Log in to the system where the provisioning agent is or will be installed.
+1. Launch PowerShell.
+1. Construct a connection string for connecting to your database system. The components of a connection string depend upon the requirements of your database. If you're using SQL Server, then see the [list of DSN and Connection String Keywords and Attributes](/sql/connect/odbc/dsn-connection-string-attribute). If you're using a different database, then you'll need to include the mandatory keywords for connecting to that database. For example, if your database requires the fully qualified path name of the DSN file, a user ID, and a password, then construct the connection string using the following commands.
+
+ ```powershell
+ $filedsn = "c:\users\administrator\documents\db.dsn"
+ $db_cs = "filedsn=" + $filedsn + ";uid=p;pwd=secret"
+ ```
+
+1. Open a connection to your database, providing that connection string, using the following commands.
+
+ ```powershell
+ $db_conn = New-Object data.odbc.OdbcConnection
+ $db_conn.ConnectionString = $db_cs
+ $db_conn.Open()
+ ```
+
+1. Construct a SQL query to retrieve the users from the database table. Be sure to include the columns that will be used to match users in the application's database with those users in Azure AD, such as an employee ID, account name or email address. For example, if your users are held in a database table named `USERS` and have columns `name` and `email`, then type the following command.
+
+ ```powershell
+ $db_query = "SELECT name,email from USERS"
+ ```
+
+1. Send the query to the database via the connection, and retrieve the results.
+
+ ```powershell
+ $result = (new-object data.odbc.OdbcCommand($db_query,$db_conn)).ExecuteReader()
+ $table = new-object System.Data.DataTable
+ $table.Load($result)
+ ```
+
+1. Write the result, the list of rows representing users that were retrieved from the query, to a CSV file.
+
+ ```powershell
+ $out_filename = ".\users.csv"
+ $table.Rows | Export-Csv -Path $out_filename -NoTypeInformation -Encoding UTF8
+ ```
+
+1. If this system doesn't have the Microsoft Graph PowerShell cmdlets installed, or doesn't have connectivity to Azure AD, then transfer the CSV file that was generated in the previous step, containing the list of users, to a system that has the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
+
+## Confirm Azure AD has users for each user from the application
+
+Now that you have a list of all the users obtained from the application, you'll next match those users from the application's data store with users in Azure AD. Before proceeding, review the section on [matching users in the source and target systems](/azure/active-directory/app-provisioning/customize-application-attributes#matching-users-in-the-source-and-target--systems), as you'll configure Azure AD provisioning with equivalent mappings afterwards. That step will allow Azure AD provisioning to query the application's data store with the same matching rules.
+
+### Retrieve the IDs of the users in Azure AD
+
+This section shows how to interact with Azure AD using [Microsoft Graph PowerShell](https://www.powershellgallery.com/packages/Microsoft.Graph) cmdlets. The first time your organization uses these cmdlets for this scenario, you'll need to be in a Global Administrator role to consent to Microsoft Graph PowerShell being used for these scenarios in your tenant. Subsequent interactions can use a lower-privileged role, such as the User Administrator role if you anticipate creating new users, or the Application Administrator or [Identity Governance Administrator](/azure/active-directory/roles/permissions-reference#identity-governance-administrator) role if you're just managing application role assignments.
+
+1. Launch PowerShell.
+1. If you don't have the [Microsoft Graph PowerShell modules](https://www.powershellgallery.com/packages/Microsoft.Graph) already installed, install the `Microsoft.Graph.Users` module and others using
+
+ ```powershell
+ Install-Module Microsoft.Graph
+ ```
+
+1. If you already have the modules installed, ensure you are using a recent version.
+
+ ```powershell
+ Update-Module microsoft.graph.users,microsoft.graph.identity.governance,microsoft.graph.applications
+ ```
+
+1. Connect to Azure AD.
+
+ The first time you run these scripts, you'll need to be an administrator, so that you can consent to Microsoft Graph PowerShell having these permissions.
+
+ ```powershell
+ $msg = Connect-MgGraph -ContextScope Process -Scopes "User.Read.All,Application.Read.All,AppRoleAssignment.ReadWrite.All,EntitlementManagement.ReadWrite.All"
+ ```
+
+1. Read the list of users obtained from the application's data store into the PowerShell session. If the list of users was in a CSV file, then you can use the PowerShell cmdlet `Import-Csv` and provide the filename of the file from the previous section as an argument. For example, if the file is named `users.csv` and located in the current directory, type the command
+
+ ```powershell
+ $filename = ".\users.csv"
+ $dbusers = Import-Csv -Path $filename -Encoding UTF8
+ ```
+
+1. Pick the column of the `users` file that will match with an attribute of a user in Azure AD.
+
+ For example, you might have users in the database where the value in the column named `EMail` is the same value as in the Azure AD attribute `mail`.
+
+ ```powershell
+ $db_match_column_name = "EMail"
+ $azuread_match_attr_name = "mail"
+ ```
+
+1. Retrieve the IDs of those users in Azure AD.
+
+ The following PowerShell script will use the `$dbusers`, `$db_match_column_name` and `$azuread_match_attr_name` specified above, and will query Azure AD to locate a user that has a matching value for each record in the source file. If there are many users in the database, this script may take several minutes to complete.
+
+ ```powershell
+ $dbu_not_queried_list = @()
+ $dbu_not_matched_list = @()
+ $dbu_match_ambiguous_list = @()
+ $dbu_query_failed_list = @()
+ $azuread_match_id_list = @()
+
+ foreach ($dbu in $dbusers) {
+ if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
+ $val = $dbu.$db_match_column_name
+ $escval = $val -replace "'","''"
+ $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
+ try {
+ $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu; } elseif ($ul.length -gt 1) {$dbu_match_ambiguous_list += $dbu } else {
+ $id = $ul[0].id;
+ $azuread_match_id_list += $id;
+ }
+ } catch { $dbu_query_failed_list += $dbu }
+ } else { $dbu_not_queried_list += $dbu }
+ }
+ ```
+
+1. View the results of the previous queries to see if any of the users in the database couldn't be located in Azure AD, due to errors or missing matches.
+
+ The following PowerShell script will display the counts of records that weren't located.
+
+ ```powershell
+ $dbu_not_queried_count = $dbu_not_queried_list.Count
+ if ($dbu_not_queried_count -ne 0) {
+ Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name."
+ }
+ $dbu_not_matched_count = $dbu_not_matched_list.Count
+ if ($dbu_not_matched_count -ne 0) {
+ Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
+ }
+ $dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
+ if ($dbu_match_ambiguous_count -ne 0) {
+ Write-Error "Unable to uniquely match $dbu_match_ambiguous_count records in Azure AD, as multiple users matched."
+ }
+ $dbu_query_failed_count = $dbu_query_failed_list.Count
+ if ($dbu_query_failed_count -ne 0) {
+ Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
+ }
+ $azuread_match_count = $azuread_match_id_list.Count
+ Write-Output "Users corresponding to $azuread_match_count records were located in Azure AD."
+ ```
+
+1. When the script completes, it will indicate an error if there were any records from the data source that weren't located in Azure AD. If not all the records for users from the application's data store could be located as users in Azure AD, then you'll need to investigate which records didn't match and why. For example, someone's email address may have been changed in Azure AD without their corresponding `mail` property being updated in the application's data source. Or, they may have already left the organization, but still be in the application's data source. Or there might be a vendor or super-admin account in the application's data source that doesn't correspond to any specific person in Azure AD.
+
+1. If there were users who couldn't be located in Azure AD, but you want to have their access reviewed or their attributes updated in the database, you'll need to create Azure AD users for them. You can create users in bulk using either a CSV file, as described in [bulk create users in the Azure AD portal](../enterprise-users/users-bulk-add.md), or by using the [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples) cmdlet, as in the sketch below. When doing so, ensure that the users are populated with the attributes required for Azure AD to later match these new users to the existing users in the application.
+
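+ For example, a minimal sketch of creating a single user with `New-MgUser`; every value shown is hypothetical, and you should populate whichever attributes your matching rules require:
+
+ ```powershell
+ # Hypothetical values; ensure the matching attribute (for example, mail)
+ # is populated so this user can later be matched to the application record
+ $PasswordProfile = @{ Password = (New-Guid).Guid }
+ New-MgUser -DisplayName "Sample User" `
+     -UserPrincipalName "sample.user@contoso.com" `
+     -MailNickname "sampleuser" `
+     -Mail "sample.user@contoso.com" `
+     -PasswordProfile $PasswordProfile `
+     -AccountEnabled
+ ```
+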
+1. After adding any missing users to Azure AD, run the script from step 7 again, and then the script from step 8. Check that no errors are reported.
+
+ ```powershell
+ $dbu_not_queried_list = @()
+ $dbu_not_matched_list = @()
+ $dbu_match_ambiguous_list = @()
+ $dbu_query_failed_list = @()
+ $azuread_match_id_list = @()
+
+ foreach ($dbu in $dbusers) {
+ if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
+ $val = $dbu.$db_match_column_name
+ $escval = $val -replace "'","''"
+ $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
+ try {
+ $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu; } elseif ($ul.length -gt 1) {$dbu_match_ambiguous_list += $dbu } else {
+ $id = $ul[0].id;
+ $azuread_match_id_list += $id;
+ }
+ } catch { $dbu_query_failed_list += $dbu }
+ } else { $dbu_not_queried_list += $dbu }
+ }
+
+ $dbu_not_queried_count = $dbu_not_queried_list.Count
+ if ($dbu_not_queried_count -ne 0) {
+ Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name."
+ }
+ $dbu_not_matched_count = $dbu_not_matched_list.Count
+ if ($dbu_not_matched_count -ne 0) {
+ Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
+ }
+ $dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
+ if ($dbu_match_ambiguous_count -ne 0) {
+ Write-Error "Unable to uniquely match $dbu_match_ambiguous_count records in Azure AD, as multiple users matched."
+ }
+ $dbu_query_failed_count = $dbu_query_failed_list.Count
+ if ($dbu_query_failed_count -ne 0) {
+ Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
+ }
+ $azuread_match_count = $azuread_match_id_list.Count
+ Write-Output "Users corresponding to $azuread_match_count records were located in Azure AD."
+ ```
+
+## Check for users who are not already assigned to the application
+
+The previous steps have confirmed that all the users in the application's data store exist as users in Azure AD. However, they may not all currently be assigned to the application's roles in Azure AD. So the next steps are to see which users don't have assignments to application roles.
+
+1. Retrieve the users who currently have assignments to the application in Azure AD.
+
+ For example, if the enterprise application is named `CORPDB1`, then type the following commands
+
+ ```powershell
+ $azuread_app_name = "CORPDB1"
+ $azuread_sp_filter = "displayName eq '" + ($azuread_app_name -replace "'","''") + "'"
+ $azuread_sp = Get-MgServicePrincipal -Filter $azuread_sp_filter -All
+ $azuread_existing_assignments = @(Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -All)
+ ```
+
+1. Compare the list of user IDs from the previous section to those users currently assigned to the application.
+
+ ```powershell
+ $azuread_not_in_role_list = @()
+ foreach ($id in $azuread_match_id_list) {
+ $found = $false
+ foreach ($existing in $azuread_existing_assignments) {
+ if ($existing.principalId -eq $id) {
+ $found = $true; break;
+ }
+ }
+ if ($found -eq $false) { $azuread_not_in_role_list += $id }
+ }
+ $azuread_not_in_role_count = $azuread_not_in_role_list.Count
+ Write-Output "$azuread_not_in_role_count users in the application's data store are not assigned to the application roles."
+ ```
+
+ If the count is zero, indicating that all users are already assigned to application roles, then no further changes are needed before performing an access review.
+
+ However, if one or more users aren't currently assigned to the application roles, you'll need to add them to one of the application's roles, as described in the sections below.
+
+1. Select the role of the application to assign the remaining users to.
+
+ An application may have more than one role. Use this command to list the available roles.
+
+ ```powershell
+ $azuread_sp.AppRoles | where-object {$_.AllowedMemberTypes -contains "User"} | ft DisplayName,Id
+ ```
+
+ Select the appropriate role from the list, and obtain its role ID. For example, if the role name is `Admin`, then provide that value in the following PowerShell commands.
+
+ ```powershell
+ $azuread_app_role_name = "Admin"
+ $azuread_app_role_id = ($azuread_sp.AppRoles | where-object {$_.AllowedMemberTypes -contains "User" -and $_.DisplayName -eq $azuread_app_role_name}).Id
+ if ($null -eq $azuread_app_role_id) { write-error "role $azuread_app_role_name not located in application manifest"}
+ ```
+
+## Configure application provisioning
+
+Before creating new assignments, you'll want to configure [Azure AD provisioning](/azure/active-directory/app-provisioning/user-provisioning) of Azure AD users to the application. Configuring provisioning will enable Azure AD to match the users in Azure AD who have application role assignments with the user records already in the application's data store.
+
+1. Ensure that the application is configured to require users to have application role assignments, so that only selected users will be provisioned to the application.
+1. If provisioning hasn't been configured for the application, then configure, but do not start, [provisioning](/azure/active-directory/app-provisioning/user-provisioning).
+
+ * If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure).
+ * If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure).
+
+1. Check the [attribute mappings](/azure/active-directory/app-provisioning/customize-application-attributes) for provisioning to that application. Make sure that *Match objects using this attribute* is set for the Azure AD attribute and column that you used in the sections above for matching. If these rules aren't using the same attributes as you used earlier, then when application role assignments are created, Azure AD may be unable to locate existing users in the application's data store, and inadvertently create duplicate users.
+1. Check that there's an attribute mapping for **isSoftDeleted** to an attribute of the application. When a user is unassigned from the application, soft-deleted in Azure AD, or blocked from sign-in, then Azure AD provisioning will update the attribute mapped to **isSoftDeleted**. If no attribute is mapped, then users who later are unassigned from the application role will continue to exist in the application's data store.
+1. If provisioning has already been enabled for the application, check that the application provisioning is not in [quarantine](/azure/active-directory/app-provisioning/application-provisioning-quarantine-status). You'll need to resolve any issues that are causing the quarantine prior to proceeding.
+
+## Create app role assignments in Azure AD
+
+For Azure AD to match the users in the application with the users in Azure AD, you'll need to create application role assignments in Azure AD.
+
+When an application role assignment is created in Azure AD for a user to an application, then:
+
+ - Azure AD will query the application to determine if the user already exists.
+ - Subsequent updates to the user's attributes in Azure AD will be sent to the application.
+ - Users will remain in the application indefinitely, unless updated outside of Azure AD, or until the assignment in Azure AD is removed.
+ - On the next review of that application's role assignments, the user will be included in the review.
+ - If the user is denied in an access review, then their application role assignment will be removed, and Azure AD will notify the application that the user is blocked from signing in.
+
+1. Create application role assignments for users who don't currently have role assignments.
+
+ ```powershell
+ foreach ($u in $azuread_not_in_role_list) {
+ $res = New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -AppRoleId $azuread_app_role_id -PrincipalId $u -ResourceId $azuread_sp.Id
+ }
+ ```
+
+1. Wait 1 minute for changes to propagate within Azure AD.
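+
+ For example, in a script you can simply pause before requerying:
+
+ ```powershell
+ Start-Sleep -Seconds 60   # allow the new app role assignments to propagate
+ ```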
+
+## Check that Azure AD provisioning has matched the existing users
+
+1. Requery Azure AD to obtain the updated list of role assignments.
+
+ ```powershell
+ $azuread_existing_assignments = @(Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -All)
+ ```
+
+1. Compare the list of user IDs from the previous section to those users now assigned to the application.
+
+ ```powershell
+ $azuread_still_not_in_role_list = @()
+ foreach ($id in $azuread_match_id_list) {
+ $found = $false
+ foreach ($existing in $azuread_existing_assignments) {
+ if ($existing.principalId -eq $id) {
+ $found = $true; break;
+ }
+ }
+ if ($found -eq $false) { $azuread_still_not_in_role_list += $id }
+ }
+ $azuread_still_not_in_role_count = $azuread_still_not_in_role_list.Count
+ if ($azuread_still_not_in_role_count -gt 0) {
+ Write-Output "$azuread_still_not_in_role_count users in the application's data store are not assigned to the application roles."
+ }
+ ```
+
+ If any users aren't assigned to application roles, check the Azure AD audit log for an error from a previous step.
+
+1. If the **Provisioning Status** of the application is **Off**, turn the **Provisioning Status** to **On**.
+1. Based on the guidance for [how long will it take to provision users](/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user#how-long-will-it-take-to-provision-users), wait for Azure AD provisioning to match the existing users of the application to those users just assigned.
+1. Monitor the [provisioning status](/azure/active-directory/app-provisioning/check-status-user-account-provisioning) to ensure that all users were matched successfully. If you don't see users being provisioned, check the troubleshooting guide for [no users being provisioned](/azure/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned). If you see an error in the provisioning status and are provisioning to an on-premises application, then check the [troubleshooting guide for on-premises application provisioning](/azure/active-directory/app-provisioning/on-premises-ecma-troubleshoot).
+
+Once the users have been matched by the Azure AD provisioning service, based on the application role assignments you've created, subsequent changes will be sent to the application.
+
+## Next steps
+
+ - [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
+
+ Title: Integrate your applications for identity governance and establishing a baseline of reviewed access - Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can integrate your existing business critical third party on-premises and cloud-based applications with Azure AD for identity governance scenarios.
+
+documentationcenter: ''
+editor: markwahl-msft
+Last updated : 6/28/2022
+
+# Integrating applications with Azure AD and establishing a baseline of reviewed access
+
+Once you've [established the policies](identity-governance-applications-define.md) for who should have access to an application, you can [connect your application to Azure AD](../manage-apps/what-is-application-management.md) and then [deploy those policies](identity-governance-applications-deploy.md) for governing access to it.
+
+Azure AD identity governance can be integrated with many applications, using [standards](../fundamentals/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL and LDAP. Through these standards, you can use Azure AD with many popular SaaS applications and on-premises applications, including applications that your organization has developed. This deployment plan covers how to connect your application to Azure AD and enable identity governance features to be used for that application.
+
+In order for Azure AD identity governance to be used for an application, the application must first be integrated with Azure AD. For an application to be considered integrated with Azure AD, it must meet one of two requirements:
+
+* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign into the application. Users who lose their application role assignment can no longer get a new token to sign in to the application.
+* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM, or by the application querying Azure AD via Microsoft Graph.
+
+If neither of those criteria is met for an application, for example when the application doesn't rely upon Azure AD, then identity governance can still be used. However, there may be some limitations when using identity governance without meeting these criteria. For instance, users who aren't in your Azure AD, or who aren't assigned to the application roles in Azure AD, won't be included in access reviews of the application until you add them to the application roles. For more information, see [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
+
+## Integrate the application with Azure AD to ensure only authorized users can access the application
+
+Typically this process of integrating an application begins when you configure that application to rely upon Azure AD for user authentication, with a federated single sign-on (SSO) protocol connection, and then add provisioning. The most commonly used protocols for SSO are [SAML and OpenID Connect](../develop/active-directory-v2-protocols.md). You can read more about the tools and process to [discover and migrate application authentication to Azure AD](../manage-apps/migrate-application-authentication-to-azure-active-directory.md).
+
+Next, if the application implements a provisioning protocol, then you should configure Azure AD to provision users to the application, so that Azure AD can signal to the application when a user has been granted access or a user's access has been removed. These provisioning signals permit the application to make automatic corrections, such as to reassign content created by an employee who has left to their manager.
+
+1. Check if your application is on the [list of enterprise applications](../manage-apps/view-applications-portal.md) or [list of app registrations](../develop/app-objects-and-service-principals.md). If the application is already present in your tenant, then skip to step 5 in this section.
+1. If your application is a SaaS application that isn't already registered in your tenant, then check if the application is available in the [application gallery](../manage-apps/overview-application-gallery.md) for applications that can be integrated for federated SSO. If it's in the gallery, then use the tutorials to integrate the application with Azure AD.
+ 1. Follow the [tutorial](../saas-apps/tutorial-list.md) to configure the application for federated SSO with Azure AD.
+ 1. If the application supports provisioning, [configure the application for provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+ 1. When complete, skip to the next section in this article.
+ If the SaaS application isn't in the gallery, then [ask the SaaS vendor to onboard](../manage-apps/v2-howto-app-gallery-listing.md).
+1. If this is a private or custom application, you can also select a single sign-on integration that's most appropriate, based on the location and capabilities of the application.
+
+ * If this application is in the public cloud, and it supports single sign-on, then configure single sign-on directly from Azure AD to the application.
+
+ |Application supports| Next steps|
+ |-|--|
+ | OpenID Connect | [Add an OpenID Connect OAuth application](../saas-apps/openidoauth-tutorial.md) |
+ | SAML 2.0 | Register the application and configure the application with [the SAML endpoints and certificate of Azure AD](../develop/active-directory-saml-protocol-reference.md) |
+ | SAML 1.1 | [Add a SAML-based application](../saas-apps/saml-tutorial.md) |
+
+ * Otherwise, if this is an on-premises or IaaS hosted application that supports single sign-on, then configure single sign-on from Azure AD to the application through the application proxy.
+
+ |Application supports| Next steps|
+ |-|--|
+ | SAML 2.0| Deploy the [application proxy](../app-proxy/application-proxy.md) and configure an application for [SAML SSO](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md) |
+ | Integrated Windows Auth (IWA) | Deploy the [application proxy](../app-proxy/application-proxy.md), configure an application for [Integrated Windows authentication SSO](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md), and set firewall rules to prevent access to the application's endpoints except via the proxy.|
+ | header-based authentication | Deploy the [application proxy](../app-proxy/application-proxy.md) and configure an application for [header-based SSO](../app-proxy/application-proxy-configure-single-sign-on-with-headers.md) |
+
+1. If your application has multiple roles, and relies upon Azure AD to send a user's role as part of the user signing into the application, then configure those application roles in Azure AD on your application. You can use the [app roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to add those roles; a PowerShell alternative is sketched below.
+
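+ Alternatively, a hedged sketch of defining a role through Microsoft Graph PowerShell; the role name, value, and description are hypothetical, and `CORPDB1` is a placeholder for your application's name:
+
+ ```powershell
+ # Append a new app role to the application registration
+ Connect-MgGraph -Scopes "Application.ReadWrite.All"
+ $app = Get-MgApplication -Filter "displayName eq 'CORPDB1'"
+ $newRole = @{
+     AllowedMemberTypes = @("User")
+     Description        = "Marketing specialists can edit campaign content"
+     DisplayName        = "Marketing specialist"
+     Id                 = (New-Guid).Guid
+     IsEnabled          = $true
+     Value              = "Marketing.Specialist"
+ }
+ Update-MgApplication -ApplicationId $app.Id -AppRoles ($app.AppRoles + $newRole)
+ ```
+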
+1. If the application supports provisioning, then [configure provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md) of assigned users and groups from Azure AD to that application. If this is a private or custom application, you can also select the integration that's most appropriate, based on the location and capabilities of the application.
+
+ * If this application is in the public cloud and supports SCIM, then configure provisioning of users via SCIM.
+
+ |Application supports| Next steps|
+ |-|--|
+ | SCIM | Configure an application with SCIM [for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md) |
+
+ * Otherwise, if this is an on-premises or IaaS hosted application, then configure provisioning to that application, either via SCIM or to the underlying database or directory of the application.
+
+ |Application supports| Next steps|
+ |-|--|
+ | SCIM | configure an application with the [provisioning agent for on-premises SCIM-based apps](../app-provisioning/on-premises-scim-provisioning.md)|
+ | local user accounts, stored in a SQL database | configure an application with the [provisioning agent for on-premises SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md)|
+ | local user accounts, stored in an LDAP directory | configure an application with the [provisioning agent for on-premises LDAP-based applications](../app-provisioning/on-premises-ldap-connector-configure.md) |
+
+1. If your application uses Microsoft Graph to query groups from Azure AD, then [grant consent](../develop/consent-framework.md) for the application to have the appropriate permissions to read from your tenant.
+
+1. Configure the application so that **access is only permitted for users assigned to the application**. This setting will prevent users from inadvertently seeing the application in MyApps, and attempting to sign into the application, prior to Conditional Access policies being enabled. A PowerShell sketch for this setting follows.
+
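+ A minimal sketch of setting this from PowerShell, where `CORPDB1` is a placeholder for your application's name:
+
+ ```powershell
+ # Require an app role assignment before a user can sign in or see the app
+ Connect-MgGraph -Scopes "Application.ReadWrite.All"
+ $sp = Get-MgServicePrincipal -Filter "displayName eq 'CORPDB1'"
+ Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AppRoleAssignmentRequired:$true
+ ```
+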
+## Perform an initial access review
+
+If this is a new application your organization hasn't used before, and therefore no one has pre-existing access, or if you've already been performing access reviews for this application, then skip to the [next section](identity-governance-applications-deploy.md).
+
+However, if the application already existed in your environment, then it's possible that users may have gotten access in the past through manual or out-of-band processes, and those users should now be reviewed to confirm that their access is still needed and appropriate going forward. We recommend performing an access review of the users who already have access to the application, before enabling policies for more users to be able to request access. This review will set a baseline of all users having been reviewed at least once, to ensure that those users are authorized for continued access.
+
+1. Follow the steps in [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
+1. Bring in any [existing users and create application role assignments](identity-governance-applications-existing-users.md) for them.
+1. If the application wasn't integrated for provisioning, then once the review is complete, you may need to manually update the application's internal database or directory to remove those users who were denied.
+1. Once the review has been completed and the application access updated, or if no users have access, then continue on to the next steps to deploy conditional access and entitlement management policies for the application.
+
+Now that you have a baseline that ensures existing access has been reviewed, you can [deploy the organization's policies](identity-governance-applications-deploy.md) for ongoing access and any new access requests.
+
+## Next steps
+
+- [Deploy governance policies](identity-governance-applications-deploy.md)
active-directory Identity Governance Applications Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-prepare.md
+
+ Title: Govern access for applications in your environment - Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. These features can be used for your existing business critical third party on-premises and cloud-based applications.
+
+documentationcenter: ''
+editor: markwahl-msft
+Last updated : 6/28/2022
+
+# Govern access for applications in your environment
+
+Azure Active Directory (Azure AD) Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. Its features ensure that the right people have the right access to the right resources in your organization at the right time.
+
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, typically only a subset of all the users in the organization will be authorized to have access, and access should only be permitted based on documented business requirements. As part of your organization's controls for managing access, you can use Azure AD features to:
+
+* set up appropriate access
+* enforce access checks
+* produce reports to demonstrate how those controls are being used to meet your compliance and risk management objectives.
+
+In addition to the application access governance scenario, you can also use identity governance and the other Azure AD features for other scenarios, such as [reviewing and removing users from other organizations](../governance/access-reviews-external-users.md) or [managing users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md). If your organization has multiple administrators in Azure AD or Azure, or uses B2B or self-service group management, then you should [plan an access reviews deployment](deploy-access-reviews.md) for those scenarios.
+
+## Getting started with governing access to applications
+
+Azure AD identity governance can be integrated with many applications, using [standards](../fundamentals/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL, and LDAP. Through these standards, you can use Azure AD with many popular SaaS applications, as well as on-premises applications and applications that your organization has developed. Once you've prepared your Azure AD environment, as described in the section below, the three-step plan covers how to connect an application to Azure AD and enable identity governance features to be used for that application.
+
+1. [Define your organization's policies for governing access to the application](identity-governance-applications-define.md)
+1. [Integrate the application with Azure AD](identity-governance-applications-integrate.md) to ensure only authorized users can access the application, and review users' existing access to the application to set a baseline of all users having been reviewed
+1. [Deploy those policies](identity-governance-applications-deploy.md) for controlling single sign-on (SSO) and automating access assignments for that application
+
+## Prerequisites before configuring Azure AD for identity governance
+
+Before you begin the process of governing application access from Azure AD, you should check that your Azure AD environment is appropriately configured.
+
+* **Ensure your Azure AD and Microsoft Online Services environment is ready for the [compliance requirements](../standards/standards-overview.md) for the applications to be integrated and properly licensed**. Compliance is a shared responsibility among Microsoft, cloud service providers (CSPs), and organizations. To use Azure AD to govern access to applications, you must have one of the following licenses in your tenant:
+
+ * Azure AD Premium P2
+ * Enterprise Mobility + Security (EMS) E5 license
+
+ Your tenant will need to have at least as many licenses as the number of member (non-guest) users who have access to the applications, who can request access, or who approve or review access to the applications. With an appropriate license for those users, you can then govern access to up to 1500 applications per user.
+
+* **If you will be governing guests' access to the application, link your Azure AD tenant to a subscription for MAU billing**. This step is necessary prior to having a guest request or review their access. For more information, see [billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
+
+* **Check that Azure AD is already sending its audit log, and optionally other logs, to Azure Monitor.** Azure Monitor is optional, but useful for governing access to apps, as Azure AD only stores audit events for up to 30 days in its audit log. You can keep the audit data for longer than the default retention period, outlined in [How long does Azure AD store reporting data?](../reports-monitoring/reference-reports-data-retention.md), and use Azure Monitor workbooks and custom queries and reports on historical audit data. You can check the Azure AD configuration to see if it is using Azure Monitor, in **Azure Active Directory** in the Azure portal, by clicking on **Workbooks**. If this integration isn't configured, and you have an Azure subscription and are in the `Global Administrator` or `Security Administrator` roles, you can [configure Azure AD to use Azure Monitor](../governance/entitlement-management-logs-and-reporting.md).
+
+* **Make sure only authorized users are in the highly privileged administrative roles in your Azure AD tenant.** Administrators in the *Global Administrator*, *Identity Governance Administrator*, *User Administrator*, *Application Administrator*, *Cloud Application Administrator* and *Privileged Role Administrator* roles can make changes to users and their application role assignments. If the memberships of those roles haven't been reviewed recently, you'll need a user in the *Global Administrator* or *Privileged Role Administrator* role to ensure that [access reviews of these directory roles](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) are started. You should also ensure that users in Azure roles in subscriptions that hold the Azure Monitor, Logic Apps, and other resources needed for the operation of your Azure AD configuration have been reviewed.
+
+* **Check that your tenant has appropriate isolation.** If your organization uses Active Directory on-premises, and those AD domains are connected to Azure AD, then you'll need to ensure that highly privileged administrative operations for cloud-hosted services are isolated from on-premises accounts. Check that you've [configured your systems to protect your Microsoft 365 cloud environment from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md).
+
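As a spot check while longer-term Azure Monitor retention is being set up, here is a minimal sketch of reading recent audit events through Microsoft Graph; the token, the read permission, and the date in the filter are placeholder assumptions.

```python
# Minimal sketch: pull recent Azure AD audit events from Microsoft Graph.
import requests

TOKEN = "<access-token>"  # placeholder: token with an audit log read permission

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": "activityDateTime ge 2022-06-01T00:00:00Z"},  # placeholder window
)
resp.raise_for_status()
for event in resp.json()["value"]:
    # Each record names the activity and when it happened.
    print(event["activityDateTime"], event["activityDisplayName"])
```
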
+Once you have checked that your Azure AD environment is ready, proceed to [define the governance policies](identity-governance-applications-define.md) for your applications.
+
+## Next steps
+
+- [Define governance policies](identity-governance-applications-define.md)
+- [Integrate an application with Azure AD](identity-governance-applications-integrate.md)
+- [Deploy governance policies](identity-governance-applications-deploy.md)
+
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Typically, IT delegates access approval decisions to business decision makers.
Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles.
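
To make the dynamic group automation above concrete, here's a minimal sketch that creates a rule-based group through Microsoft Graph; the group names, the membership rule, and the token are illustrative assumptions rather than values from this overview.

```python
# Minimal sketch: create a dynamic security group whose membership
# follows a rule, via Microsoft Graph.
import requests

TOKEN = "<access-token>"  # placeholder: token with a group-write permission

group = {
    "displayName": "Sales team (dynamic)",        # illustrative name
    "mailEnabled": False,
    "mailNickname": "sales-dynamic",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": '(user.department -eq "Sales")',  # illustrative rule
    "membershipRuleProcessingState": "On",
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=group,
)
print(resp.status_code, resp.json().get("id"))
```

Azure AD then re-evaluates membership as user attributes change, so provisioning to connected apps follows the rule rather than manual assignment.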
-When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application.
+When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application. For more information, see [govern access to applications in your environment](identity-governance-applications-prepare.md).
## Privileged access lifecycle
In addition to the features listed above, additional Azure AD features frequently used in identity governance scenarios include:
|Access requests|End users can request group membership or application access. End users, including guests from other organizations, can request access to access packages.|[Entitlement management](entitlement-management-overview.md)|
|Workflow|Resource owners can define the approvers and escalation approvers for access requests and approvers for role activation requests. |[Entitlement management](entitlement-management-overview.md) and [PIM](../privileged-identity-management/pim-configure.md)|
|Policy and role management|Admin can define conditional access policies for run-time access to applications. Resource owners can define policies for user's access via access packages.|[Conditional access](../conditional-access/overview.md) and [Entitlement management](entitlement-management-overview.md) policies|
-|Access certification|Admins can enable recurring access re-certification for: SaaS apps or cloud group memberships, Azure AD or Azure Resource role assignments. Automatically remove resource access, block guest access and delete guest accounts.|[Access reviews](access-reviews-overview.md), also surfaced in [PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)|
-|Fulfillment and provisioning|Automatic provisioning and deprovisioning into Azure AD connected apps, including via SCIM and into SharePoint Online sites. |[user provisioning](../app-provisioning/user-provisioning.md)|
+|Access certification|Admins can enable recurring access recertification for SaaS apps, on-premises apps, cloud group memberships, and Azure AD or Azure resource role assignments. Automatically remove resource access, block guest access, and delete guest accounts.|[Access reviews](access-reviews-overview.md), also surfaced in [PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)|
+|Fulfillment and provisioning|Automatic provisioning and deprovisioning into Azure AD connected apps, including via SCIM, LDAP, and SQL, and into SharePoint Online sites. |[User provisioning](../app-provisioning/user-provisioning.md)|
|Reporting and analytics|Admins can retrieve audit logs of recent user provisioning and sign on activity. Integration with Azure Monitor and 'who has access' via access packages.|[Azure AD reports](../reports-monitoring/overview-reports.md) and [monitoring](../reports-monitoring/overview-monitoring.md)|
|Privileged access|Just-in-time and scheduled access, alerting, approval workflows for Azure AD roles (including custom roles) and Azure Resource roles.|[Azure AD PIM](../privileged-identity-management/pim-configure.md)|
|Auditing|Admins can be alerted of creation of admin accounts.|[Azure AD PIM alerts](../privileged-identity-management/pim-how-to-configure-security-alerts.md)|

## Getting started
-Check out the Getting started tab of **Identity Governance** in the Azure portal to start using entitlement management, access reviews, Privileged Identity Management, and Terms of use.
+Check out the [Getting started tab](https://portal.azure.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/GettingStarted) of **Identity Governance** in the Azure portal to start using entitlement management, access reviews, Privileged Identity Management, and Terms of use, and see some common use cases.
![Identity Governance getting started](./media/identity-governance-overview/getting-started.png)
+There are also tutorials for [managing access to resources in entitlement management](entitlement-management-access-package-first.md), [onboarding external users to Azure AD through an approval process](entitlement-management-onboard-external-user.md), and [governing access to existing applications](identity-governance-applications-prepare.md). You can also automate identity governance tasks through Microsoft Graph and PowerShell, as sketched below.
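
As one small example of that automation, the sketch below lists access review definitions through Microsoft Graph; the token and the read permission are placeholder assumptions.

```python
# Minimal sketch: list access review definitions via Microsoft Graph.
import requests

TOKEN = "<access-token>"  # placeholder: token with an access review read permission

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for definition in resp.json()["value"]:
    # Shows each review series and its current status.
    print(definition["displayName"], definition["status"])
```
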
+ If you have any feedback about Identity Governance features, click **Got feedback?** in the Azure portal to submit it. The team regularly reviews feedback.

While there is no perfect solution or recommendation for every customer, the following configuration guides also provide the baseline policies Microsoft recommends you follow to ensure a more secure and productive workforce.
+- [Plan an access reviews deployment to manage resource access lifecycle](deploy-access-reviews.md)
- [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations)
- [Securing privileged access](../roles/security-planning.md)
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD).
| Resource | Description |
|:--|:-|
|[Migrating your apps to Azure AD](https://aka.ms/migrateapps/whitepaper) | This white paper presents the benefits of migration, and describes how to plan for migration in four clearly-outlined phases: discovery, classification, migration, and ongoing management. You'll be guided through how to think about the process and break down your project into easy-to-consume pieces. Throughout the document are links to important resources that will help you along the way. |
-|[Developer tutorial: AD FS to Azure AD application migration playbook for developers](https://aka.ms/adfsplaybook) | This set of ASP.NET code samples and accompanying tutorials will help you learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD). This tutorial is focused towards developers who not only need to learn configuring apps on both AD FS and Azure AD, but also become aware and confident of changes their code base will require in this process.|
+|[Developer tutorial: AD FS to Azure AD application migration playbook for developers](https://aka.ms/adfsplaybook) | This set of ASP.NET code samples and accompanying tutorials will help you learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD). This tutorial is aimed at developers who not only need to learn how to configure apps on both AD FS and Azure AD, but also need to become aware and confident of the changes their code base will require in this process.|
| [Tool: Active Directory Federation Services Migration Readiness Script](https://aka.ms/migrateapps/adfstools) | This is a script you can run on your on-premises Active Directory Federation Services (AD FS) server to determine the readiness of apps for migration to Azure AD.|
| [Deployment plan: Migrating from AD FS to password hash sync](https://aka.ms/ADFSTOPHSDPDownload) | With password hash synchronization, hashes of user passwords are synchronized from on-premises Active Directory to Azure AD. This allows Azure AD to authenticate users without interacting with the on-premises Active Directory.|
| [Deployment plan: Migrating from AD FS to pass-through authentication](https://aka.ms/ADFSTOPTADPDownload)|Azure AD pass-through authentication helps users sign in to both on-premises and cloud-based applications by using the same password. This feature provides your users with a better experience since they have one less password to remember. It also reduces IT helpdesk costs because users are less likely to forget how to sign in when they only need to remember one password. When people sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.|
-| [Deployment plan: Enabling Single Sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time.
+| [Deployment plan: Enabling single sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time.|
| [Deployment plan: Extending apps to Azure AD with Application Proxy](https://aka.ms/AppProxyDPDownload)| Providing access from employee laptops and other devices to on-premises applications has traditionally involved virtual private networks (VPNs) or demilitarized zones (DMZs). Not only are these solutions complex and hard to make secure, but they are costly to set up and manage. Azure AD Application Proxy makes it easier to access on-premises applications. |
-| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
-| [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step by step guidance on application migration and integration options with an example, that walks you through migrating applications from Symantec SiteMinder to Azure AD. |
+| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as Azure AD multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
+| [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step-by-step guidance on application migration and integration options with an example that walks you through migrating applications from Symantec SiteMinder to Azure AD. |
+| [Identity governance for applications](../governance/identity-governance-applications-prepare.md)| This guide outlines what you need to do if you're migrating identity governance for an application from a previous identity governance technology, to connect Azure AD to that application.|
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Title: Publish your application
-description: Learn how to publish your application in the Azure Active Directory application gallery.
+ Title: Submit a request to publish your application
+description: Learn how to publish your application in Azure Active Directory application gallery.
Previously updated : 1/18/2022 Last updated : 6/2/2022
-# Request to Publish your application in the Azure Active Directory application gallery
+# Submit a request to publish your application in Azure Active Directory application gallery
-You can publish your application in the Azure Active Directory (Azure AD) application gallery. When your application is published, it's made available as an option for users when they add applications to their tenant. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
+You can publish applications you develop in the *Azure Active Directory* (Azure AD) application gallery, which is a catalog of thousands of apps. When you publish your applications, they're made publicly available for users to add to their tenants. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
-To publish your application in the gallery, you need to complete the following tasks:
+To publish your application in the Azure AD gallery, you need to complete the following tasks:
- Make sure that you complete the prerequisites.
- Create and publish documentation.
To publish your application in the gallery, you need to complete the following tasks:
- Join the Microsoft partner network.

## Prerequisites

-- To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
-- Support for single sign-on (SSO). To learn more about the supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
- - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
- - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/) to be listed in the gallery. The enterprise gallery applications must support multiple user configurations and not any specific user.
- - For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application.
-- Supporting provisioning is optional, but highly recommended. To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
+- Implement support for *single sign-on* (SSO). To learn more about supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
+ - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
+ - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/). Enterprise gallery applications must support multiple user configurations and not any specific user.
+ - For OpenID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be correctly implemented.
+- Provisioning is optional yet highly recommended. To learn more about Azure AD SCIM, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md). A minimal sketch of such an endpoint follows this list.
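
To make the SCIM recommendation concrete, here's a minimal, illustrative sketch of the two calls Azure AD provisioning exercises first: a filtered GET on `/Users` to check whether a user exists, and a POST to create one. It assumes Flask, keeps users in memory, and omits the token validation, full schema, and PATCH support a real endpoint needs.

```python
# Illustrative SCIM 2.0 endpoint sketch, not a production implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)
USERS = {}  # in-memory store keyed by SCIM id; illustrative only

@app.route("/scim/v2/Users", methods=["GET"])
def query_users():
    # Azure AD sends a filter like: userName eq "someone@example.com"
    flt = request.args.get("filter", "")
    matches = [u for u in USERS.values() if f'"{u["userName"]}"' in flt]
    return jsonify({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(matches),
        "Resources": matches,
    })

@app.route("/scim/v2/Users", methods=["POST"])
def create_user():
    user = request.get_json()
    user["id"] = str(len(USERS) + 1)  # a real endpoint would issue stable ids
    USERS[user["id"]] = user
    return jsonify(user), 201

if __name__ == "__main__":
    app.run(port=5000)
```
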
-You can get a free test account with all the premium Azure AD features - 90 days free and can get extended as long as you do dev work with it: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
+You can sign up for a free test development account. It's free for 90 days and includes all of the premium Azure AD features. You can also extend the account as long as you use it for development work: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
## Create and publish documentation
-### Documentation on your site
+### Provide app documentation for your site
-Ease of adoption is a significant factor in enterprise software decisions. Clear easy-to-follow documentation supports your users in their adoption journey and reduces support costs.
+Ease of adoption is an important factor for those who make decisions about enterprise software. Documentation that is clear and easy to follow helps your users adopt technology, and it reduces support costs.
-Your documentation should at a minimum include the following items:
+Create documentation that includes the following information at minimum:
-- Introduction to your SSO functionality
- - Protocols supported
+- An introduction to your SSO functionality
+ - Protocols
- Version and SKU
- - Supported identity providers list with documentation links
+ - List of supported identity providers with documentation links
- Licensing information for your application
- Role-based access control for configuring SSO
- SSO Configuration Steps
- UI configuration elements for SAML with expected values from the provider
- Service provider information to be passed to identity providers
-- If OIDC/OAuth, list of permissions required for consent with business justifications
+- If you use OIDC/OAuth, a list of permissions required for consent, with business justifications
- Testing steps for pilot users
- Troubleshooting information, including error codes and messages
- Support mechanisms for users
-- Details about your SCIM endpoint, including the resources and attributes supported
+- Details about your SCIM endpoint, including supported resources and attributes
-### Documentation on the Microsoft site
+### App documentation on the Microsoft site
-When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery, and you can easily update it if you make changes to your application using your GitHub account.
+When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery. You can easily update the documentation if you make changes to your application by using your GitHub account.
## Submit your application
-After you've tested that your application integration works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign into the portal you are presented with one of two screens.
+After you've tested that your application works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal, you're presented with one of two screens.
- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.
- If you see a "Request Access" page, then fill in the business justification and select **Request Access**.
-After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the **Your sign-in was blocked** error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
+After your account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the "Your sign-in was blocked" error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
### Implementation-specific options
-On the Application Registration Form, select the feature that you want to enable. Select **OpenID Connect & OAuth 2.0**, **SAML 2.0/WS-Fed**, or **Password SSO(UserName & Password)** depending on the feature that your application supports.
+On the application **Registration** form, select the feature that you want to enable. Select **OpenID Connect & OAuth 2.0**, **SAML 2.0/WS-Fed**, or **Password SSO(UserName & Password)** depending on the feature that your application supports.
-If you're implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select **User Provisioning (SCIM 2.0)**. Download the schema to provide in the onboarding request. For more information, see [Export provisioning configuration and roll back to a known good state](../app-provisioning/export-import-provisioning-configuration.md). The schema that you configured is used when testing the non-gallery application to build the gallery application.
+If you're implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select **User Provisioning (SCIM 2.0)**. Download the schema to provide in the onboarding request. For more information, see [Export provisioning configuration and roll back to a known good state](../app-provisioning/export-import-provisioning-configuration.md). The schema that you configured is used when testing the non-gallery application to build the gallery application. A sketch of exporting the schema programmatically follows.
+
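If you'd rather export the schema programmatically than download it in the portal, a sketch using the Microsoft Graph synchronization API is shown below; the service principal ID, synchronization job ID, and token are placeholders you would look up in your own tenant.

```python
# Sketch: export a provisioning schema with the Microsoft Graph
# synchronization API and save it for the onboarding request.
import json
import requests

TOKEN = "<access-token>"             # placeholder
SP_ID = "<service-principal-id>"     # placeholder
JOB_ID = "<synchronization-job-id>"  # placeholder, from /synchronization/jobs

url = (f"https://graph.microsoft.com/v1.0/servicePrincipals/{SP_ID}"
       f"/synchronization/jobs/{JOB_ID}/schema")
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

with open("provisioning-schema.json", "w") as f:
    json.dump(resp.json(), f, indent=2)
```
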
+If you wish to register an MDM application in the Azure AD gallery, select **Register an MDM app**.
You can track application requests by customer name at the Microsoft Application Network portal. For more information, see [Application requests by Customers](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/AppRequestsByCustomers.aspx).

### Timelines
-The timeline for the process of listing a SAML 2.0 or WS-Fed application in the gallery is 7 to 10 business days.
+Listing a SAML 2.0 or WS-Fed application in the gallery takes 7 to 10 business days.
:::image type="content" source="./media/howto-app-gallery-listing/timeline.png" alt-text="Screenshot that shows the timeline for listing a SAML application.":::
-The timeline for the process of listing an OpenID Connect application in the gallery is 2 to 5 business days.
+Listing an OpenID Connect application in the gallery takes 2 to 5 business days.
:::image type="content" source="./media/howto-app-gallery-listing/timeline2.png" alt-text="Screenshot that shows the timeline for listing an OpenID Connect application.":::
-The timeline for the process of listing a SCIM provisioning application in the gallery is variable and depends on numerous factors.
+The time to list a SCIM provisioning application in the gallery varies, depending on numerous factors.
-Not all applications can be onboarded. Per the terms and conditions, the choice may be made to not list an application. Onboarding applications is at the sole discretion of the onboarding team. If your application is declined, you should use the non-gallery provisioning application to satisfy your provisioning needs.
+Not all applications are onboarded. Per the terms and conditions, a decision can be made not to list an application. Onboarding applications is at the sole discretion of the onboarding team.
Here's the flow of customer-requested applications.

:::image type="content" source="./media/howto-app-gallery-listing/customer-request-2.png" alt-text="Screenshot that shows the customer-requested apps flow.":::
-For any escalations, send email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com), and a response is sent as soon as possible.
+To escalate issues of any kind, send an email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). A response is typically sent as soon as possible.
+
+## Update or remove the application from the gallery
+
+You can submit your application update request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal, you're presented with one of two screens.
+
+- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.
+
+- If you see a "Request Access" page, then fill in the business justification and select **Request Access**.
+
+After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. Then select **Update my application's listing in the gallery** and choose one of the following options:
+
+* If you want to update the application's federated SSO feature, select **Update my application's Federated SSO feature**.
+
+* If you want to update the password SSO feature, select **Update my application's Password SSO feature**.
+
+* If you want to upgrade your listing from password SSO to federated SSO, select **Upgrade my application from Password SSO to Federated SSO**.
+
+* If you want to update the MDM listing, select **Update my MDM app**.
+
+* If you want to improve the user provisioning feature, select **Improve my application's User Provisioning feature**.
+
+* If you want to remove the application from the Azure AD gallery, select **Remove my application listing from the gallery**.
+
+If you see the **Your sign-in was blocked** error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
+ ## Join the Microsoft partner network
-The Microsoft Partner Network provides instant access to exclusive resources, programs, tools, and connections. To join the network and create your go to market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
+The Microsoft Partner Network provides instant access to exclusive programs, tools, connections, and resources. To join the network and create your go-to-market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
## Next steps

-- Learn more about managing enterprise applications in [What is application management in Azure Active Directory?](what-is-application-management.md)
+- Learn more about managing enterprise applications with [What is application management in Azure Active Directory?](what-is-application-management.md)
active-directory Anyone Home Crm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anyone-home-crm-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Anyone Home CRM | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Anyone Home CRM'
description: Learn how to configure single sign-on between Azure Active Directory and Anyone Home CRM.
Previously updated : 05/22/2020 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Anyone Home CRM
+# Tutorial: Azure AD SSO integration with Anyone Home CRM
In this tutorial, you'll learn how to integrate Anyone Home CRM with Azure Active Directory (Azure AD). When you integrate Anyone Home CRM with Azure AD, you can:
In this tutorial, you'll learn how to integrate Anyone Home CRM with Azure Active Directory (Azure AD).
* Enable your users to be automatically signed-in to Anyone Home CRM with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Anyone Home CRM single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Anyone Home CRM supports **IDP** initiated SSO
-* Once you configure Anyone Home CRM you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Anyone Home CRM supports **IDP** initiated SSO.
-## Adding Anyone Home CRM from the gallery
+## Add Anyone Home CRM from the gallery
To configure the integration of Anyone Home CRM into Azure AD, you need to add Anyone Home CRM from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Anyone Home CRM** in the search box.
1. Select **Anyone Home CRM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Anyone Home CRM
+## Configure and test Azure AD SSO for Anyone Home CRM
Configure and test Azure AD SSO with Anyone Home CRM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Anyone Home CRM.
-To configure and test Azure AD SSO with Anyone Home CRM, complete the following building blocks:
+To configure and test Azure AD SSO with Anyone Home CRM, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Anyone Home CRM, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Anyone Home CRM** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Anyone Home CRM** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://app.anyonehome.com/webroot/files/simplesamlphp/www/module.php/saml/sp/metadata.php/<Anyone_Home_Provided_Unique_Value>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Anyone Home CRM.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Anyone Home CRM**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Anyone Home CRM. Work
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Anyone Home CRM tile in the Access Panel, you should be automatically signed in to the Anyone Home CRM for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Anyone Home CRM for which you set up the SSO.
-- [Try Anyone Home CRM with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Anyone Home CRM tile in My Apps, you should be automatically signed in to the Anyone Home CRM for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Anyone Home CRM with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Anyone Home CRM, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Chronicx Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chronicx-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ChronicX® | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ChronicX®'
description: Learn how to configure single sign-on between Azure Active Directory and ChronicX®.
Previously updated : 02/20/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with ChronicX®
+# Tutorial: Azure AD SSO integration with ChronicX®
-In this tutorial, you learn how to integrate ChronicX® with Azure Active Directory (Azure AD).
-Integrating ChronicX® with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ChronicX® with Azure Active Directory (Azure AD). When you integrate ChronicX® with Azure AD, you can:
-* You can control in Azure AD who has access to ChronicX®.
-* You can enable your users to be automatically signed-in to ChronicX® (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ChronicX®.
+* Enable your users to be automatically signed-in to ChronicX® with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ChronicX®, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ChronicX® single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ChronicX® single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ChronicX® supports **SP** initiated SSO
-* ChronicX® supports **Just In Time** user provisioning
-
-## Adding ChronicX® from the gallery
-
-To configure the integration of ChronicX® into Azure AD, you need to add ChronicX® from the gallery to your list of managed SaaS apps.
-
-**To add ChronicX® from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* ChronicX® supports **SP** initiated SSO.
+* ChronicX® supports **Just In Time** user provisioning.
-4. In the search box, type **ChronicX®**, select **ChronicX®** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![ChronicX® in the results list](common/search-new-app.png)
+## Add ChronicX® from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ChronicX® based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ChronicX® needs to be established.
-
-To configure and test Azure AD single sign-on with ChronicX®, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ChronicX® Single Sign-On](#configure-chronicx-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ChronicX® test user](#create-chronicx-test-user)** - to have a counterpart of Britta Simon in ChronicX® that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of ChronicX® into Azure AD, you need to add ChronicX® from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ChronicX®** in the search box.
+1. Select **ChronicX®** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with ChronicX®, perform the following steps:
+## Configure and test Azure AD SSO for ChronicX®
-1. In the [Azure portal](https://portal.azure.com/), on the **ChronicX®** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with ChronicX® using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ChronicX®.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with ChronicX®, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ChronicX SSO](#configure-chronicx-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ChronicX test user](#create-chronicx-test-user)** - to have a counterpart of B.Simon in ChronicX® that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **ChronicX®** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![ChronicX® Domain and URLs single sign-on information](common/sp-identifier.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.chronicx.com/ups/processlogonSSO.jsp`
-
- b. In the **Identifier (Entity ID)** text box, type a URL:
+ a. In the **Identifier (Entity ID)** text box, type the value:
`ups.chronicx.com`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.chronicx.com/ups/processlogonSSO.jsp`
> [!NOTE]
> The Sign-on URL value is not real. Update the value with the actual Sign-On URL. Contact [ChronicX® Client support team](https://www.casebank.com/contact-us/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up ChronicX®** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- a. Login URL
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- b. Azure Ad Identifier
+1. On the **Set up ChronicX®** section, copy the appropriate URL(s) as per your requirement.
- c. Logout URL
-
-### Configure ChronicX Single Sign-On
-
-To configure single sign-on on **ChronicX®** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ChronicX® support team](https://www.casebank.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon. If you'd rather script this step, a sketch follows the steps below.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
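
If you'd rather script this step than click through the portal, here's a minimal sketch using Microsoft Graph; the domain, initial password, and the user-write permission are assumptions to replace with your own values.

```python
# Minimal sketch: create the B.Simon test user via Microsoft Graph.
import requests

TOKEN = "<access-token>"  # placeholder: token with a user-write permission

user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "bsimon",
    "userPrincipalName": "B.Simon@contoso.com",  # placeholder domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",  # placeholder
    },
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=user,
)
print(resp.status_code, resp.json().get("id"))
```
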
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ChronicX®.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ChronicX®**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **ChronicX®**.
-
- ![The ChronicX® link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ChronicX®.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ChronicX®**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure ChronicX SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **ChronicX®** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ChronicX® support team](https://www.casebank.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create ChronicX test user
In this section, a user called Britta Simon is created in ChronicX®. ChronicX®
> [!Note]
> If you need to create a user manually, contact [ChronicX® support team](https://www.casebank.com/contact-us/).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
-When you click the ChronicX® tile in the Access Panel, you should be automatically signed in to the ChronicX® for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the ChronicX® Sign-On URL where you can initiate the login flow.
-## Additional Resources
+* Go to the ChronicX® Sign-On URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ChronicX® tile in My Apps, this will redirect to the ChronicX® Sign-On URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ChronicX®, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cpqsync By Cincom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cpqsync-by-cincom-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with CPQSync by Cincom | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and CPQSync by Cincom.
+ Title: 'Tutorial: Azure AD SSO integration with Cincom CPQ'
+description: Learn how to configure single sign-on between Azure Active Directory and Cincom CPQ.
Previously updated : 08/08/2019 Last updated : 06/28/2022
-# Tutorial: Integrate CPQSync by Cincom with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Cincom CPQ
-In this tutorial, you'll learn how to integrate CPQSync by Cincom with Azure Active Directory (Azure AD). When you integrate CPQSync by Cincom with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Cincom CPQ with Azure Active Directory (Azure AD). When you integrate Cincom CPQ with Azure AD, you can:
-* Control in Azure AD who has access to CPQSync by Cincom.
-* Enable your users to be automatically signed-in to CPQSync by Cincom with their Azure AD accounts.
+* Control in Azure AD who has access to Cincom CPQ.
+* Enable your users to be automatically signed-in to Cincom CPQ with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* CPQSync by Cincom single sign-on (SSO) enabled subscription.
+* Cincom CPQ single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* CPQSync by Cincom supports **SP and IDP** initiated SSO
+* Cincom CPQ supports **SP and IDP** initiated SSO.
-## Adding CPQSync by Cincom from the gallery
+## Add Cincom CPQ from the gallery
-To configure the integration of CPQSync by Cincom into Azure AD, you need to add CPQSync by Cincom from the gallery to your list of managed SaaS apps.
+To configure the integration of Cincom CPQ into Azure AD, you need to add Cincom CPQ from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **CPQSync by Cincom** in the search box.
-1. Select **CPQSync by Cincom** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Cincom CPQ** in the search box.
+1. Select **Cincom CPQ** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for CPQSync by Cincom
+## Configure and test Azure AD SSO for Cincom CPQ
-Configure and test Azure AD SSO with CPQSync by Cincom using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CPQSync by Cincom.
+Configure and test Azure AD SSO with Cincom CPQ using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cincom CPQ.
-To configure and test Azure AD SSO with CPQSync by Cincom, complete the following building blocks:
+To configure and test Azure AD SSO with Cincom CPQ, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure CPQSync by Cincom SSO](#configure-cpqsync-by-cincom-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create CPQSync by Cincom test user](#create-cpqsync-by-cincom-test-user)** - to have a counterpart of B.Simon in CPQSync by Cincom that is linked to the Azure AD representation of user.
+2. **[Configure Cincom CPQ SSO](#configure-cincom-cpq-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Cincom CPQ test user](#create-cincom-cpq-test-user)** - to have a counterpart of B.Simon in Cincom CPQ that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **CPQSync by Cincom** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Cincom CPQ** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit the Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://cincom.oktapreview.com/sso/saml2/<CUSTOMURL>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://cincom.okta.com/` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [CPQSync by Cincom Client support team](https://supportweb.cincom.com/default.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Cincom CPQ Client support team](https://supportweb.cincom.com/default.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
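If you want to double-check the values you receive before pasting them into the portal, a quick sketch like the following can catch typos (Python, standard library only; the `identifier` and `sign_on_url` values below are hypothetical placeholders, not real tenant values):

```python
import re

# Hypothetical values standing in for the ones the support team provides.
identifier = "https://cincom.oktapreview.com/sso/saml2/abc123"
sign_on_url = "https://cincom.okta.com/"

# Pattern from the Basic SAML Configuration section above; the trailing
# segment is the customer-specific <CUSTOMURL> part.
identifier_pattern = r"https://cincom\.oktapreview\.com/sso/saml2/[^/\s]+"

print("Identifier OK: ", bool(re.fullmatch(identifier_pattern, identifier)))
print("Sign-on URL OK:", sign_on_url == "https://cincom.okta.com/")
```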
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateraw.png)
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
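To confirm you are handing over the right file, you can print the certificate's fingerprint and compare it with the thumbprint shown in the Azure portal. A minimal sketch, assuming the **Certificate (Raw)** download is DER-encoded (typical for the raw option) and that the file path is passed on the command line:

```python
import hashlib
import sys
from pathlib import Path

# Usage (hypothetical file name): python fingerprint.py "Cincom CPQ.cer"
der_bytes = Path(sys.argv[1]).read_bytes()

# SHA-1 is typically what portals show as the "Thumbprint"; SHA-256 is
# sturdier if the other side can compare it.
for algo in ("sha1", "sha256"):
    digest = hashlib.new(algo, der_bytes).hexdigest().upper()
    pairs = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
    print(f"{algo.upper()}: {pairs}")
```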
-6. On the **Set up CPQSync by Cincom** section, copy the appropriate URL(s) based on your requirement.
+6. On the **Set up Cincom CPQ** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows how to copy the appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CPQSync by Cincom.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cincom CPQ.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **CPQSync by Cincom**.
+1. In the applications list, select **Cincom CPQ**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure CPQSync by Cincom SSO
+## Configure Cincom CPQ SSO
-To configure single sign-on on **CPQSync by Cincom** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [CPQSync by Cincom support team](https://supportweb.cincom.com/default.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Cincom CPQ** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from the Azure portal to the [Cincom CPQ support team](https://supportweb.cincom.com/default.aspx). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create CPQSync by Cincom test user
+### Create Cincom CPQ test user
-In this section, you create a user called B.Simon in CPQSync by Cincom. Work with [CPQSync by Cincom support team](https://supportweb.cincom.com/default.aspx) to add the users in the CPQSync by Cincom platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Cincom CPQ. Work with [Cincom CPQ support team](https://supportweb.cincom.com/default.aspx) to add the users in the Cincom CPQ platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Cincom CPQ Sign-On URL where you can initiate the login flow.
+
+* Go to the Cincom CPQ Sign-On URL directly and initiate the login flow from there.
-When you click the CPQSync by Cincom tile in the Access Panel, you should be automatically signed in to the CPQSync by Cincom for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Cincom CPQ instance for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cincom CPQ tile in My Apps, if the application is configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the Cincom CPQ instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
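Under the covers, an SP-initiated test is just the application redirecting the browser to Azure AD with a deflated, base64-encoded `<AuthnRequest>`. The sketch below shows that wire format (Python, standard library only). The endpoint and issuer values are hypothetical placeholders; real requests are generated and signed by the application, not built by hand:

```python
import base64
import urllib.parse
import uuid
import zlib
from datetime import datetime, timezone

# Hypothetical values; substitute the Login URL copied from the Azure
# portal and the Identifier configured for the application.
idp_sso_url = "https://login.microsoftonline.com/<tenant-id>/saml2"
sp_entity_id = "https://cincom.oktapreview.com/sso/saml2/<CUSTOMURL>"

authn_request = f"""<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_{uuid.uuid4().hex}"
    Version="2.0"
    IssueInstant="{datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}">
  <saml:Issuer>{sp_entity_id}</saml:Issuer>
</samlp:AuthnRequest>"""

# HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
# zlib.compress emits a 2-byte header and 4-byte checksum; strip both.
deflated = zlib.compress(authn_request.encode())[2:-4]
saml_request = urllib.parse.quote_plus(base64.b64encode(deflated))
print(f"{idp_sso_url}?SAMLRequest={saml_request}")
```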
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Cincom CPQ, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Firmplay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firmplay-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with FirmPlay - Employee Advocacy for Recruiting | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FirmPlay - Employee Advocacy for Recruiting'
description: Learn how to configure single sign-on between Azure Active Directory and FirmPlay - Employee Advocacy for Recruiting.
Previously updated : 04/01/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with FirmPlay - Employee Advocacy for Recruiting
+# Tutorial: Azure AD SSO integration with FirmPlay - Employee Advocacy for Recruiting
-In this tutorial, you learn how to integrate FirmPlay - Employee Advocacy for Recruiting with Azure Active Directory (Azure AD).
-Integrating FirmPlay - Employee Advocacy for Recruiting with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate FirmPlay - Employee Advocacy for Recruiting with Azure Active Directory (Azure AD). When you integrate FirmPlay - Employee Advocacy for Recruiting with Azure AD, you can:
-* You can control in Azure AD who has access to FirmPlay - Employee Advocacy for Recruiting.
-* You can enable your users to be automatically signed-in to FirmPlay - Employee Advocacy for Recruiting (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to FirmPlay - Employee Advocacy for Recruiting.
+* Enable your users to be automatically signed-in to FirmPlay - Employee Advocacy for Recruiting with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
To configure Azure AD integration with FirmPlay - Employee Advocacy for Recruiti
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* FirmPlay - Employee Advocacy for Recruiting supports **SP** initiated SSO
+* FirmPlay - Employee Advocacy for Recruiting supports **SP** initiated SSO.
-## Adding FirmPlay - Employee Advocacy for Recruiting from the gallery
+## Add FirmPlay - Employee Advocacy for Recruiting from the gallery
To configure the integration of FirmPlay - Employee Advocacy for Recruiting into Azure AD, you need to add FirmPlay - Employee Advocacy for Recruiting from the gallery to your list of managed SaaS apps.
-**To add FirmPlay - Employee Advocacy for Recruiting from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **FirmPlay - Employee Advocacy for Recruiting**, select **FirmPlay - Employee Advocacy for Recruiting** from result panel then click **Add** button to add the application.
-
- ![FirmPlay - Employee Advocacy for Recruiting in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **FirmPlay - Employee Advocacy for Recruiting** in the search box.
+1. Select **FirmPlay - Employee Advocacy for Recruiting** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in FirmPlay - Employee Advocacy for Recruiting needs to be established.
+## Configure and test Azure AD SSO for FirmPlay - Employee Advocacy for Recruiting
-To configure and test Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting, you need to complete the following building blocks:
+Configure and test Azure AD SSO with FirmPlay - Employee Advocacy for Recruiting using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FirmPlay - Employee Advocacy for Recruiting.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure FirmPlay - Employee Advocacy for Recruiting Single Sign-On](#configure-firmplayemployee-advocacy-for-recruiting-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create FirmPlay - Employee Advocacy for Recruiting test user](#create-firmplayemployee-advocacy-for-recruiting-test-user)** - to have a counterpart of Britta Simon in FirmPlay - Employee Advocacy for Recruiting that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with FirmPlay - Employee Advocacy for Recruiting, perform the following steps:
-### Configure Azure AD single sign-on
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure FirmPlay - Employee Advocacy for Recruiting SSO](#configure-firmplayemployee-advocacy-for-recruiting-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create FirmPlay - Employee Advocacy for Recruiting test user](#create-firmplayemployee-advocacy-for-recruiting-test-user)** - to have a counterpart of Britta Simon in FirmPlay - Employee Advocacy for Recruiting that is linked to the Azure AD representation of user.
+6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **FirmPlay - Employee Advocacy for Recruiting** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **FirmPlay - Employee Advocacy for Recruiting** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![FirmPlay - Employee Advocacy for Recruiting Domain and URLs single sign-on information](common/sp-signonurl.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<your-subdomain>.firmplay.com/`
To configure Azure AD single sign-on with FirmPlay - Employee Advocacy for Recru
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure FirmPlay - Employee Advocacy for Recruiting Single Sign-On
-
-To configure single sign-on on **FirmPlay - Employee Advocacy for Recruiting** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com). They set this setting to have the SAML SSO connection set properly on both sides.
+### Create an Azure AD test user
-### Create an Azure AD test user
+In this section, you'll create a test user in the Azure portal called B.Simon.
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to FirmPlay - Employee Advocacy for Recruiting.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FirmPlay - Employee Advocacy for Recruiting.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **FirmPlay - Employee Advocacy for Recruiting**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FirmPlay - Employee Advocacy for Recruiting**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure FirmPlay - Employee Advocacy for Recruiting SSO
-2. In the applications list, select **FirmPlay - Employee Advocacy for Recruiting**.
-
- ![The FirmPlay - Employee Advocacy for Recruiting link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **FirmPlay - Employee Advocacy for Recruiting** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
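Before you mail the certificate, it can be worth checking its validity window so you can plan rollover. A minimal sketch, assuming the third-party `cryptography` package (version 42 or later for the `_utc` properties) and a hypothetical file name for the **Certificate (Base64)** download:

```python
from datetime import datetime, timezone
from pathlib import Path

from cryptography import x509  # assumption: pip install cryptography

# Hypothetical file name for the Certificate (Base64) download.
pem_bytes = Path("FirmPlay.cer").read_bytes()

cert = x509.load_pem_x509_certificate(pem_bytes)
print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after_utc)

remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
print(f"Days until the certificate must be rolled over: {remaining.days}")
```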
### Create FirmPlay - Employee Advocacy for Recruiting test user
-In this section, you create a user called Britta Simon in FirmPlay - Employee Advocacy for Recruiting. Work with [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com) to add the users in the FirmPlay - Employee Advocacy for Recruiting platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in FirmPlay - Employee Advocacy for Recruiting. Work with [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com) to add the users in the FirmPlay - Employee Advocacy for Recruiting platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the FirmPlay - Employee Advocacy for Recruiting tile in the Access Panel, you should be automatically signed in to the FirmPlay - Employee Advocacy for Recruiting for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the FirmPlay - Employee Advocacy for Recruiting Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the FirmPlay - Employee Advocacy for Recruiting Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the FirmPlay - Employee Advocacy for Recruiting tile in My Apps, you are redirected to the FirmPlay - Employee Advocacy for Recruiting Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
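When a test fails, it often helps to look at what actually came back. A SAML response is just base64-encoded XML posted to the Reply URL, and a browser SAML-tracing extension can capture it. The following sketch decodes one and reads the `NameID` (Python, standard library only; the embedded response is a made-up, unsigned example purely to show the shape of the data):

```python
import base64
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

# A made-up, unsigned response; a real SAMLResponse is the base64 blob
# POSTed by Azure AD to the application's Reply URL.
sample = f"""<samlp:Response xmlns:samlp="{SAMLP}" xmlns:saml="{SAML}" ID="_1" Version="2.0">
  <saml:Assertion ID="_2" Version="2.0">
    <saml:Subject>
      <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">B.Simon@contoso.com</saml:NameID>
    </saml:Subject>
  </saml:Assertion>
</samlp:Response>"""
saml_response = base64.b64encode(sample.encode()).decode()

root = ET.fromstring(base64.b64decode(saml_response))
name_id = root.find(f".//{{{SAML}}}NameID")
print("Signed-in user:", name_id.text)
print("NameID format: ", name_id.get("Format"))
```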
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure FirmPlay - Employee Advocacy for Recruiting, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Foreseecxsuite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ForeSee CX Suite | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ForeSee CX Suite'
description: Learn how to configure single sign-on between Azure Active Directory and ForeSee CX Suite.
Previously updated : 04/01/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with ForeSee CX Suite
+# Tutorial: Azure AD SSO integration with ForeSee CX Suite
-In this tutorial, you learn how to integrate ForeSee CX Suite with Azure Active Directory (Azure AD).
-Integrating ForeSee CX Suite with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ForeSee CX Suite with Azure Active Directory (Azure AD). When you integrate ForeSee CX Suite with Azure AD, you can:
-* You can control in Azure AD who has access to ForeSee CX Suite.
-* You can enable your users to be automatically signed-in to ForeSee CX Suite (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to ForeSee CX Suite.
+* Enable your users to be automatically signed-in to ForeSee CX Suite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites
To configure Azure AD integration with ForeSee CX Suite, you need the following
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ForeSee CX Suite supports **SP** initiated SSO
+* ForeSee CX Suite supports **SP** initiated SSO.
-* ForeSee CX Suite supports **Just In Time** user provisioning
+* ForeSee CX Suite supports **Just In Time** user provisioning.
-## Adding ForeSee CX Suite from the gallery
+## Add ForeSee CX Suite from the gallery
To configure the integration of ForeSee CX Suite into Azure AD, you need to add ForeSee CX Suite from the gallery to your list of managed SaaS apps.
-**To add ForeSee CX Suite from the gallery, perform the following steps:**
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ForeSee CX Suite** in the search box.
+1. Select **ForeSee CX Suite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
+## Configure and test Azure AD SSO for ForeSee CX Suite
- ![The Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with ForeSee CX Suite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ForeSee CX Suite.
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
+To configure and test Azure AD SSO with ForeSee CX Suite, perform the following steps:
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure ForeSee CX Suite SSO](#configure-foresee-cx-suite-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ForeSee CX Suite test user](#create-foresee-cx-suite-test-user)** - to have a counterpart of B.Simon in ForeSee CX Suite that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add new application, click **New application** button on the top of dialog.
+## Configure Azure AD SSO
- ![The New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, type **ForeSee CX Suite**, select **ForeSee CX Suite** from result panel then click **Add** button to add the application.
+1. In the Azure portal, on the **ForeSee CX Suite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![ForeSee CX Suite in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ForeSee CX Suite based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ForeSee CX Suite needs to be established.
-
-To configure and test Azure AD single sign-on with ForeSee CX Suite, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ForeSee CX Suite Single Sign-On](#configure-foresee-cx-suite-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ForeSee CX Suite test user](#create-foresee-cx-suite-test-user)** - to have a counterpart of Britta Simon in ForeSee CX Suite that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with ForeSee CX Suite, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **ForeSee CX Suite** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you have a **Service Provider metadata file**, perform the following steps:
To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin
c. After the metadata file is successfully uploaded, the **Identifier** value gets auto-populated in the **Basic SAML Configuration** section.
- ![ForeSee CX Suite Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign-on URL** text box, type a URL:
+ d. In the **Sign-on URL** text box, type a URL:
`https://cxsuite.foresee.com/`
- b. In the **Identifier** textbox, type a URL using the following pattern: https:\//www.okta.com/saml2/service-provider/\<UniqueID>
+ e. In the **Identifier** textbox, type a URL using the following pattern: https:\//www.okta.com/saml2/service-provider/\<UniqueID>
> [!Note]
> If the **Identifier** value does not get auto-populated, fill in the value manually according to the above pattern. The Identifier value is not real. Update this value with the actual Identifier. Contact the [ForeSee CX Suite Client support team](mailto:support@foresee.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure ForeSee CX Suite Single Sign-On
-
-To configure single sign-on on **ForeSee CX Suite** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ForeSee CX Suite support team](mailto:support@foresee.com). They set this setting to have the SAML SSO connection set properly on both sides.
+### Create an Azure AD test user
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ForeSee CX Suite.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ForeSee CX Suite**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ForeSee CX Suite.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **ForeSee CX Suite**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The ForeSee CX Suite link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure ForeSee CX Suite SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **ForeSee CX Suite** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [ForeSee CX Suite support team](mailto:support@foresee.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create ForeSee CX Suite test user
-In this section, you create a user called Britta Simon in ForeSee CX Suite. Work with [ForeSee CX Suite support team](mailto:support@foresee.com) to add the users or the domain that must be added to an allow list for the ForeSee CX Suite platform. If the domain is added by the team, users will get automatically provisioned to the ForeSee CX Suite platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in ForeSee CX Suite. Work with [ForeSee CX Suite support team](mailto:support@foresee.com) to add the users or the domain that must be added to an allowlist for the ForeSee CX Suite platform. If the domain is added by the team, users will get automatically provisioned to the ForeSee CX Suite platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ForeSee CX Suite tile in the Access Panel, you should be automatically signed in to the ForeSee CX Suite for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the ForeSee CX Suite Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the ForeSee CX Suite Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ForeSee CX Suite tile in My Apps, you are redirected to the ForeSee CX Suite Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ForeSee CX Suite, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in G Suite for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the G Suite API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes.
+
+> [!NOTE]
+> G Suite provisioning currently supports only primaryEmail as the matching attribute (illustrated in the sketch below).
+ |Attribute|Type| |||
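The practical effect of primaryEmail matching is easy to see with a toy sketch (Python; the user records below are made up): users whose email already exists in G Suite are updated, and everyone else is created.

```python
# Hypothetical exports for illustration: a few Azure AD users and the
# accounts that already exist in G Suite, keyed by primaryEmail.
azure_ad_users = [
    {"userPrincipalName": "B.Simon@contoso.com", "displayName": "B. Simon"},
    {"userPrincipalName": "new.hire@contoso.com", "displayName": "New Hire"},
]
gsuite_users = {"b.simon@contoso.com": {"primaryEmail": "b.simon@contoso.com"}}

for user in azure_ad_users:
    # primaryEmail is the only supported matching attribute, so the
    # comparison key is the (case-insensitive) email address.
    key = user["userPrincipalName"].lower()
    action = "update existing account" if key in gsuite_users else "create new account"
    print(f"{user['userPrincipalName']}: {action}")
```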
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Insigniasamlsso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insigniasamlsso-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Insignia SAML SSO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Insignia SAML SSO | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Insignia SAML SSO.
Previously updated : 03/26/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with Insignia SAML SSO
+# Tutorial: Azure AD SSO integration with Insignia SAML SSO
-In this tutorial, you learn how to integrate Insignia SAML SSO with Azure Active Directory (Azure AD).
-Integrating Insignia SAML SSO with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Insignia SAML SSO with Azure Active Directory (Azure AD). When you integrate Insignia SAML SSO with Azure AD, you can:
-* You can control in Azure AD who has access to Insignia SAML SSO.
-* You can enable your users to be automatically signed-in to Insignia SAML SSO (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to Insignia SAML SSO.
+* Enable your users to be automatically signed-in to Insignia SAML SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites To configure Azure AD integration with Insignia SAML SSO, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Insignia SAML SSO single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Insignia SAML SSO single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Insignia SAML SSO supports **SP** initiated SSO
+* Insignia SAML SSO supports **SP** initiated SSO.
-## Adding Insignia SAML SSO from the gallery
+## Add Insignia SAML SSO from the gallery
To configure the integration of Insignia SAML SSO into Azure AD, you need to add Insignia SAML SSO from the gallery to your list of managed SaaS apps.
-**To add Insignia SAML SSO from the gallery, perform the following steps:**
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Insignia SAML SSO** in the search box.
+1. Select **Insignia SAML SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
+## Configure and test Azure AD SSO for Insignia SAML SSO
- ![The Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with Insignia SAML SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Insignia SAML SSO.
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
+To configure and test Azure AD SSO with Insignia SAML SSO, perform the following steps:
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Insignia SAML SSO](#configure-insignia-saml-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Insignia SAML SSO test user](#create-insignia-saml-sso-test-user)** - to have a counterpart of B.Simon in Insignia SAML SSO that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add new application, click **New application** button on the top of dialog.
+## Configure Azure AD SSO
- ![The New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, type **Insignia SAML SSO**, select **Insignia SAML SSO** from result panel then click **Add** button to add the application.
+1. In the Azure portal, on the **Insignia SAML SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Insignia SAML SSO in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Insignia SAML SSO based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Insignia SAML SSO needs to be established.
-
-To configure and test Azure AD single sign-on with Insignia SAML SSO, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Insignia SAML SSO Single Sign-On](#configure-insignia-saml-sso-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Insignia SAML SSO test user](#create-insignia-saml-sso-test-user)** - to have a counterpart of Britta Simon in Insignia SAML SSO that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Insignia SAML SSO, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Insignia SAML SSO** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Insignia SAML SSO Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ a. In the **Sign on URL** text box, type a URL using one of the following patterns:
- ```http
- https://<customername>.insigniails.com/ils
- https://<customername>.insigniails.com/
- https://<customername>.insigniailsusa.com/
- ```
+ | Sign on URL |
+ |------|
+ | `https://<customername>.insigniails.com/ils` |
+ | `https://<customername>.insigniails.com/` |
+ | `https://<customername>.insigniailsusa.com/` |
b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<customername>.insigniailsusa.com/<uniqueid>`
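If you are unsure whether a customer URL fits one of the documented patterns, a quick host check like the following can help (Python, standard library only; the candidate URLs are hypothetical examples):

```python
from urllib.parse import urlparse

# Hypothetical candidate URLs; replace with your customer's values.
candidates = [
    "https://contoso.insigniails.com/ils",
    "https://contoso.insigniailsusa.com/",
    "https://contoso.example.com/",  # deliberately wrong, for contrast
]

for url in candidates:
    host = urlparse(url).hostname or ""
    ok = host.endswith(".insigniails.com") or host.endswith(".insigniailsusa.com")
    print(f"{url}: {'matches a documented pattern' if ok else 'does NOT match'}")
```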
To configure Azure AD single sign-on with Insignia SAML SSO, perform the followi
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Insignia SAML SSO Single Sign-On
-
-To configure single sign-on on **Insignia SAML SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Insignia SAML SSO.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Insignia SAML SSO**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Insignia SAML SSO.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **Insignia SAML SSO**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Insignia SAML SSO link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Insignia SAML SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Insignia SAML SSO** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx). They configure this setting so that the SAML SSO connection is set properly on both sides.
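Before sending the certificate file, you can optionally confirm that it downloaded intact and note its expiry date. The sketch below is illustrative only; it assumes the third-party Python `cryptography` package is installed and that the downloaded file is saved locally as `certificate.cer`.

```python
# Minimal sketch: load the Certificate (Base64) file downloaded from the
# Azure portal and print its subject and expiry. Illustrative only.
from cryptography import x509

with open("certificate.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after)
```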
### Create Insignia SAML SSO test user

In this section, you create a user called Britta Simon in Insignia SAML SSO. Work with the [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx) to add the users in the Insignia SAML SSO platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Insignia SAML SSO tile in the Access Panel, you should be automatically signed in to the Insignia SAML SSO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect you to the Insignia SAML SSO Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Insignia SAML SSO Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Insignia SAML SSO tile in My Apps, you'll be redirected to the Insignia SAML SSO Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Insignia SAML SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Iqualify Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iqualify-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with iQualify LMS | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with iQualify LMS'
description: Learn how to configure single sign-on between Azure Active Directory and iQualify LMS.
Previously updated : 03/14/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with iQualify LMS
+# Tutorial: Azure AD SSO integration with iQualify LMS
-In this tutorial, you learn how to integrate iQualify LMS with Azure Active Directory (Azure AD).
-Integrating iQualify LMS with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate iQualify LMS with Azure Active Directory (Azure AD). When you integrate iQualify LMS with Azure AD, you can:
-* You can control in Azure AD who has access to iQualify LMS.
-* You can enable your users to be automatically signed-in to iQualify LMS (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to iQualify LMS.
+* Enable your users to be automatically signed-in to iQualify LMS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with iQualify LMS, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* iQualify LMS single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* iQualify LMS single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* iQualify LMS supports **SP and IDP** initiated SSO
-* iQualify LMS supports **Just In Time** user provisioning
+* iQualify LMS supports **SP and IDP** initiated SSO.
+* iQualify LMS supports **Just In Time** user provisioning.
-## Adding iQualify LMS from the gallery
+## Add iQualify LMS from the gallery
To configure the integration of iQualify LMS into Azure AD, you need to add iQualify LMS from the gallery to your list of managed SaaS apps.
-**To add iQualify LMS from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **iQualify LMS**, select **iQualify LMS** from result panel then click **Add** button to add the application.
-
- ![iQualify LMS in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with iQualify LMS based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in iQualify LMS needs to be established.
-
-To configure and test Azure AD single sign-on with iQualify LMS, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **iQualify LMS** in the search box.
+1. Select **iQualify LMS** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure iQualify LMS Single Sign-On](#configure-iqualify-lms-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create iQualify LMS test user](#create-iqualify-lms-test-user)** - to have a counterpart of Britta Simon in iQualify LMS that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for iQualify LMS
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with iQualify LMS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in iQualify LMS.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with iQualify LMS, perform the following steps:
-To configure Azure AD single sign-on with iQualify LMS, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure iQualify LMS SSO](#configure-iqualify-lms-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create iQualify LMS test user](#create-iqualify-lms-test-user)** - to have a counterpart of B.Simon in iQualify LMS that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **iQualify LMS** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **iQualify LMS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
+ | **Identifier** |
+ ||
+ | Production Environment: `https://<yourorg>.iqualify.com/` |
+ | Test Environment: `https://<yourorg>.iqualify.io` |
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
-
- 1. In the **Identifier** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
- * Production Environment: `https://<yourorg>.iqualify.com/`
- * Test Environment: `https://<yourorg>.iqualify.io`
+ | **Reply URL** |
+ |--|
+ | Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback` |
+ | Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback` |
- 2. In the **Reply URL** text box, type a URL using the following pattern:
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- * Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback`
- * Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback`
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
-
- * Production Environment: `https://<yourorg>.iqualify.com/login`
- * Test Environment: `https://<yourorg>.iqualify.io/login`
+ | **Sign-on URL** |
+ |-|
+ | Production Environment: `https://<yourorg>.iqualify.com/login` |
+ | Test Environment: `https://<yourorg>.iqualify.io/login` |
> [!NOTE]
> These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact the [iQualify LMS Client support team](https://www.iqualify.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-6. Your iQualify LMS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open **User Attributes** dialog.
+1. Your iQualify LMS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the **User Attributes** dialog.
- ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
+ ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png "Attributes")
-7. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
+1. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using the **Edit** icon, or add the claims by using **Add new claim**, to configure the SAML token attributes as shown in the image above, and perform the following steps:
| Name | Source Attribute|
| | |
To configure Azure AD single sign-on with iQualify LMS, perform the following steps:
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
+ ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png "Claims")
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
+ ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png "Values")
b. In the **Name** textbox, type the attribute name shown for that row.
To configure Azure AD single sign-on with iQualify LMS, perform the following st
> [!Note]
> The **person_id** attribute is **Optional**.
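To make the expected assertion format concrete, here is a minimal, stdlib-only Python sketch of how a service provider could read these attribute values out of a decoded SAML response. The helper and the `saml_response.xml` file name are illustrative assumptions, not iQualify's actual implementation.

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def read_attributes(xml_text: str) -> dict:
    """Map SAML attribute names (email, first_name, ...) to their values."""
    root = ET.fromstring(xml_text)
    attrs = {}
    for attr in root.iter(f"{{{SAML_NS}}}Attribute"):
        values = [v.text for v in attr.findall(f"{{{SAML_NS}}}AttributeValue")]
        attrs[attr.get("Name")] = values[0] if len(values) == 1 else values
    return attrs

with open("saml_response.xml") as f:
    print(read_attributes(f.read()))  # e.g. {'email': 'B.Simon@contoso.com', ...}
```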
-8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up iQualify LMS** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
- ![The Certificate download link](common/certificatebase64.png)
+### Create an Azure AD test user
-9. On the **Set up iQualify LMS** section, copy the appropriate URL(s) as per your requirement.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- a. Login URL
+### Assign the Azure AD test user
- b. Azure AD Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to iQualify LMS.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **iQualify LMS**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure iQualify LMS Single Sign-On
+## Configure iQualify LMS SSO
1. Open a new browser window, and then sign in to your iQualify environment as an administrator.

1. Once you are logged in, click on your avatar at the top right, then click on **Account settings**.
- ![Account settings](./media/iqualify-tutorial/setting1.png)
+ ![Screenshot shows the Account settings.](./media/iqualify-tutorial/settings.png "Account")
1. In the account settings area, click on the ribbon menu on the left and click on **INTEGRATIONS**.
- ![INTEGRATIONS](./media/iqualify-tutorial/setting2.png)
+ ![Screenshot shows integration area of the application.](./media/iqualify-tutorial/menu.png "Profile")
1. Under INTEGRATIONS, click on the **SAML** icon.
- ![SAML icon](./media/iqualify-tutorial/setting3.png)
+ ![Screenshot shows the SAML icon under integrations.](./media/iqualify-tutorial/icon.png "Integration")
1. In the **SAML Authentication Settings** dialog box, perform the following steps:
- ![SAML Authentication Settings](./media/iqualify-tutorial/setting4.png)
+ ![Screenshot shows the SAML Authentication Settings](./media/iqualify-tutorial/details.png "Authentication")
a. In the **SAML SINGLE SIGN-ON SERVICE URL** box, paste the **Login URL** value copied from the Azure AD application configuration window.
To configure Azure AD single sign-on with iQualify LMS, perform the following steps:
f. Click **UPDATE**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to iQualify LMS.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **iQualify LMS**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **iQualify LMS**.
-
- ![The iQualify LMS link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create iQualify LMS test user

In this section, a user called Britta Simon is created in iQualify LMS. iQualify LMS supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in iQualify LMS, a new one is created after authentication.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration using My Apps.
-When you click the iQualify LMS tile in the Access Panel, you should get login page of your iQualify LMS application.
+When you click the iQualify LMS tile in My Apps, you should get the login page of your iQualify LMS application.
- ![login page](./media/iqualify-tutorial/login.png)
+ ![Screenshot shows the login page of the application.](./media/iqualify-tutorial/login.png "Configure")
Click the **Sign in with Azure AD** button, and you should be automatically signed in to your iQualify LMS application.
-For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional Resources
--- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure iQualify LMS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Novatus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/novatus-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Novatus | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Novatus'
description: Learn how to configure single sign-on between Azure Active Directory and Novatus.
Previously updated : 03/05/2019 Last updated : 06/29/2022
-# Tutorial: Azure Active Directory integration with Novatus
+# Tutorial: Azure AD SSO integration with Novatus
-In this tutorial, you learn how to integrate Novatus with Azure Active Directory (Azure AD).
-Integrating Novatus with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Novatus with Azure Active Directory (Azure AD). When you integrate Novatus with Azure AD, you can:
-* You can control in Azure AD who has access to Novatus.
-* You can enable your users to be automatically signed-in to Novatus (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Novatus.
+* Enable your users to be automatically signed-in to Novatus with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Novatus, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Novatus single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Novatus single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Novatus supports **SP** initiated SSO
+* Novatus supports **SP** initiated SSO.
-* Novatus supports **Just In Time** user provisioning
+* Novatus supports **Just In Time** user provisioning.
-## Adding Novatus from the gallery
+## Add Novatus from the gallery
To configure the integration of Novatus into Azure AD, you need to add Novatus from the gallery to your list of managed SaaS apps.
-**To add Novatus from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Novatus**, select **Novatus** from result panel then click **Add** button to add the application.
-
- ![Novatus in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Novatus** in the search box.
+1. Select **Novatus** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Novatus based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Novatus needs to be established.
+## Configure and test Azure AD SSO for Novatus
-To configure and test Azure AD single sign-on with Novatus, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Novatus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Novatus.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Novatus Single Sign-On](#configure-novatus-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Novatus test user](#create-novatus-test-user)** - to have a counterpart of Britta Simon in Novatus that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Novatus, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Novatus SSO](#configure-novatus-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Novatus test user](#create-novatus-test-user)** - to have a counterpart of B.Simon in Novatus that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Novatus, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Novatus** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Novatus** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Novatus Domain and URLs single sign-on information](common/sp-signonurl.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://sso.novatuscontracts.com/<companyname>`
To configure Azure AD single sign-on with Novatus, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Novatus Single Sign-On
-
-To configure single sign-on on **Novatus** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Novatus support team](mailto:jvinci@novatusinc.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+### Create an Azure AD test user
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Novatus.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Novatus**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Novatus.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **Novatus**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Novatus link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Novatus SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Novatus** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Novatus support team](mailto:jvinci@novatusinc.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Novatus test user
In this section, a user called Britta Simon is created in Novatus. Novatus supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section.
> If you need to create a user manually, you need to contact the [Novatus support team](mailto:jvinci@novatusinc.com).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Novatus tile in the Access Panel, you should be automatically signed in to the Novatus for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect you to the Novatus Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Novatus Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Novatus tile in My Apps, you'll be redirected to the Novatus Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Novatus, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Ns1 Sso Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ns1-sso-azure-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with NS1 SSO for Azure | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with NS1 SSO for Azure'
description: Learn how to configure single sign-on between Azure Active Directory and NS1 SSO for Azure.
Previously updated : 02/12/2020 Last updated : 06/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with NS1 SSO for Azure
+# Tutorial: Azure AD SSO integration with NS1 SSO for Azure
In this tutorial, you'll learn how to integrate NS1 SSO for Azure with Azure Active Directory (Azure AD). When you integrate NS1 SSO for Azure with Azure AD, you can:
* Control in Azure AD who has access to NS1 SSO for Azure.
* Enable your users to be automatically signed in to NS1 SSO for Azure with their Azure AD accounts.
* Manage your accounts in one central location, the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* NS1 SSO for Azure single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.

* NS1 SSO for Azure supports SP and IDP initiated SSO.
-* After you configure NS1 SSO for Azure, you can enforce session control. This protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from conditional access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
## Add NS1 SSO for Azure from the gallery

To configure the integration of NS1 SSO for Azure into Azure AD, you need to add NS1 SSO for Azure from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Go to **Enterprise Applications**, and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **NS1 SSO for Azure** in the search box.
1. Select **NS1 SSO for Azure** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for NS1 SSO for Azure
+## Configure and test Azure AD SSO for NS1 SSO for Azure
Configure and test Azure AD SSO with NS1 SSO for Azure by using a test user called **B.Simon**. For SSO to work, establish a linked relationship between an Azure AD user and the related user in NS1 SSO for Azure.
Here are the general steps to configure and test Azure AD SSO with NS1 SSO for Azure:
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **NS1 SSO for Azure** application integration page, find the **Manage** section. Select **single sign-on**.
+1. In the Azure portal, on the **NS1 SSO for Azure** application integration page, find the **Manage** section. Select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Screenshot of Set up single sign-on with SAML page, with pencil icon highlighted](common/edit-urls.png)
+ ![Screenshot of set up single sign-on with SAML page, with pencil icon highlighted.](common/edit-urls.png)
-1. In the **Basic SAML Configuration** section, if you want to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. In the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type the following URL: `https://api.nsone.net/saml/metadata`
- b. In the **Reply URL** text box, type a URL that uses the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://api.nsone.net/saml/sso/<ssoid>`

1. Select **Set additional URLs**, and perform the following step if you want to configure the application in **SP** initiated mode:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. The NS1 SSO for Azure application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes & Claims** section on the application integration page. On the **Set up Single Sign-On with SAML** page, select the pencil icon to open the **User Attributes** dialog box.
- ![Screenshot of User Attributes & Claims section, with pencil icon highlighted](./media/ns1-sso-for-azure-tutorial/attribute-edit-option.png)
+ ![Screenshot of User Attributes & Claims section, with pencil icon highlighted.](./media/ns1-sso-for-azure-tutorial/attribute-edit-option.png)
1. Select the attribute name to edit the claim.
- ![Screenshot of User Attributes & Claims section, with attribute name highlighted](./media/ns1-sso-for-azure-tutorial/attribute-claim-edit.png)
+ ![Screenshot of User Attributes & Claims section, with attribute name highlighted.](./media/ns1-sso-for-azure-tutorial/attribute-claim-edit.png)
1. Select **Transformation**.
- ![Screenshot of Manage claim section, with Transformation highlighted](./media/ns1-sso-for-azure-tutorial/prefix-edit.png)
+ ![Screenshot of Manage claim section, with Transformation highlighted.](./media/ns1-sso-for-azure-tutorial/prefix-edit.png)
1. In the **Manage transformation** section, perform the following steps:
- ![Screenshot of Manage transformation section, with various fields highlighted](./media/ns1-sso-for-azure-tutorial/prefix-added.png)
+ ![Screenshot of Manage transformation section, with various fields highlighted.](./media/ns1-sso-for-azure-tutorial/prefix-added.png)
1. Select **ExtractMailPrefix()** as **Transformation**.
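For context, a mail-prefix transformation keeps only the part of the user's email address before the `@`, so `B.Simon@contoso.com` becomes `B.Simon`. The one-function Python sketch below mirrors that behavior; it is an illustration, not Azure AD's implementation.

```python
def mail_prefix(user_principal_name: str) -> str:
    """Return the local part of an address, mimicking a mail-prefix transform."""
    return user_principal_name.split("@", 1)[0]

print(mail_prefix("B.Simon@contoso.com"))  # B.Simon
```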
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
- ![Screenshot of the SAML Signing Certificate, with the copy button highlighted](common/copy-metadataurl.png)
+ ![Screenshot of the SAML Signing Certificate, with the copy button highlighted.](common/copy-metadataurl.png)
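If you want to verify the metadata before handing it to NS1, the stdlib-only Python sketch below downloads the copied **App Federation Metadata Url** and prints the Base64 signing certificate embedded in it. The `<tenant-id>` and `<app-id>` values are placeholders you would substitute from your own copied URL; the script itself is an illustrative assumption, not part of the tutorial.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL - paste the App Federation Metadata Url you copied above.
METADATA_URL = (
    "https://login.microsoftonline.com/<tenant-id>/federationmetadata/"
    "2007-06/federationmetadata.xml?appid=<app-id>"
)

with urllib.request.urlopen(METADATA_URL) as resp:
    root = ET.fromstring(resp.read())

CERT_TAG = "{http://www.w3.org/2000/09/xmldsig#}X509Certificate"
for cert in root.iter(CERT_TAG):
    print(cert.text.strip()[:60], "...")  # start of the Base64 signing cert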
### Create an Azure AD test user
In this section, you enable B.Simon to use Azure single sign-on by granting access to NS1 SSO for Azure.
1. In the Azure portal, select **Enterprise Applications** > **All applications**.
1. In the applications list, select **NS1 SSO for Azure**.
1. In the app's overview page, find the **Manage** section, and select **Users and groups**.
-
- ![Screenshot of the Manage section, with Users and groups highlighted](common/users-groups-blade.png)
-
1. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
-
- ![Screenshot of Users and groups page, with Add user highlighted](common/add-assign-user.png)
-
1. In the **Users and groups** dialog box, select **B.Simon** from the users list. Then choose the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then choose the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog box, select **Assign**.
In this section, you create a user called B.Simon in NS1 SSO for Azure. Work with the NS1 SSO for Azure support team to add the users in the NS1 SSO for Azure platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
-When you select the NS1 SSO for Azure tile in Access Panel, you should be automatically signed in to the NS1 SSO for Azure for which you set up SSO. For more information, see [Introduction to Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect you to the NS1 SSO for Azure Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the NS1 SSO for Azure Sign-on URL directly and initiate the login flow from there.
-- [Tutorials for integrating SaaS applications with Azure Active Directory](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the NS1 SSO for Azure for which you set up SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the NS1 SSO for Azure tile in My Apps, if configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the NS1 SSO for Azure for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try NS1 SSO for Azure with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure NS1 SSO for Azure, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Sevone Network Monitoring System Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sevone-network-monitoring-system-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with SevOne Network Monitoring System (NMS)'
+description: Learn how to configure single sign-on between Azure Active Directory and SevOne Network Monitoring System (NMS).
+ Last updated : 06/28/2022
+# Tutorial: Azure AD SSO integration with SevOne Network Monitoring System (NMS)
+
+In this tutorial, you'll learn how to integrate SevOne Network Monitoring System (NMS) with Azure Active Directory (Azure AD). When you integrate SevOne Network Monitoring System (NMS) with Azure AD, you can:
+
+* Control in Azure AD who has access to SevOne Network Monitoring System (NMS).
+* Enable your users to be automatically signed-in to SevOne Network Monitoring System (NMS) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SevOne Network Monitoring System (NMS) single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* SevOne Network Monitoring System (NMS) supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add SevOne Network Monitoring System (NMS) from the gallery
+
+To configure the integration of SevOne Network Monitoring System (NMS) into Azure AD, you need to add SevOne Network Monitoring System (NMS) from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SevOne Network Monitoring System (NMS)** in the search box.
+1. Select **SevOne Network Monitoring System (NMS)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for SevOne Network Monitoring System (NMS)
+
+Configure and test Azure AD SSO with SevOne Network Monitoring System (NMS) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at SevOne Network Monitoring System (NMS).
+
+To configure and test Azure AD SSO with SevOne Network Monitoring System (NMS), perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SevOne Network Monitoring System (NMS) SSO](#configure-sevone-network-monitoring-system-nms-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SevOne Network Monitoring System (NMS) test user](#create-sevone-network-monitoring-system-nms-test-user)** - to have a counterpart of B.Simon in SevOne Network Monitoring System (NMS) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **SevOne Network Monitoring System (NMS)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ d. In the **Relay State** text box, type the value:
+ `sevonenms`
+
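For background on how the **Relay State** value above is used: in the SAML HTTP-Redirect binding, RelayState travels as a query parameter alongside the SAMLRequest, and the identity provider returns it with the response so the service provider knows where to land the user. The Python sketch below only illustrates the shape of such a request; the tenant ID and SAMLRequest values are placeholders, not real data.

```python
from urllib.parse import urlencode

# Illustrative only: shape of an SP-initiated redirect to Azure AD's SAML
# endpoint, carrying the RelayState configured in this tutorial.
idp_sso_url = "https://login.microsoftonline.com/<tenant-id>/saml2"
params = {
    "SAMLRequest": "<base64-deflated-authn-request>",  # placeholder
    "RelayState": "sevonenms",
}
print(f"{idp_sso_url}?{urlencode(params)}")
```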
+1. The SevOne Network Monitoring System (NMS) application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the SevOne Network Monitoring System (NMS) application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | displayname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up SevOne Network Monitoring System (NMS)** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SevOne Network Monitoring System (NMS).
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SevOne Network Monitoring System (NMS)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
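+
+The assignment can also be scripted against Microsoft Graph. The sketch below assumes you have already looked up the object IDs; the placeholders are hypothetical, and the all-zeros `appRoleId` denotes the Default Access role:
+
+```azurecli-interactive
+# Sketch: assign a user to the enterprise application's default role via Microsoft Graph.
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignedTo" \
+  --body '{"principalId": "<user-object-id>", "resourceId": "<sp-object-id>", "appRoleId": "00000000-0000-0000-0000-000000000000"}'
+```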
+
+## Configure SevOne Network Monitoring System (NMS) SSO
+
+To configure single sign-on on the **SevOne Network Monitoring System (NMS)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [SevOne Network Monitoring System (NMS) support team](mailto:support@sevone.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create SevOne Network Monitoring System (NMS) test user
+
+In this section, you create a user called Britta Simon at SevOne Network Monitoring System (NMS). Work with [SevOne Network Monitoring System (NMS) support team](mailto:support@sevone.com) to add the users in the SevOne Network Monitoring System (NMS) platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the SevOne Network Monitoring System (NMS) Sign-On URL where you can initiate the login flow.
+
+* Go to the SevOne Network Monitoring System (NMS) Sign-On URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the SevOne Network Monitoring System (NMS) tile in My Apps, this will redirect to the SevOne Network Monitoring System (NMS) Sign-On URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure SevOne Network Monitoring System (NMS), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Weekdone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/weekdone-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Weekdone | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Weekdone'
description: Learn how to configure single sign-on between Azure Active Directory and Weekdone.
Previously updated : 03/28/2019 Last updated : 06/28/2022
-# Tutorial: Azure Active Directory integration with Weekdone
+# Tutorial: Azure AD SSO integration with Weekdone
-In this tutorial, you learn how to integrate Weekdone with Azure Active Directory (Azure AD).
-Integrating Weekdone with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Weekdone with Azure Active Directory (Azure AD). When you integrate Weekdone with Azure AD, you can:
-* You can control in Azure AD who has access to Weekdone.
-* You can enable your users to be automatically signed-in to Weekdone (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Weekdone.
+* Enable your users to be automatically signed-in to Weekdone with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Weekdone, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Weekdone single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Weekdone single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Weekdone supports **SP** and **IDP** initiated SSO
+* Weekdone supports **SP** and **IDP** initiated SSO.
-* Weekdone supports **Just In Time** user provisioning
+* Weekdone supports **Just In Time** user provisioning.
-## Adding Weekdone from the gallery
+## Add Weekdone from the gallery
To configure the integration of Weekdone into Azure AD, you need to add Weekdone from the gallery to your list of managed SaaS apps.
-**To add Weekdone from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Weekdone**, select **Weekdone** from result panel then click **Add** button to add the application.
-
- ![Weekdone in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Weekdone based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Weekdone needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Weekdone** in the search box.
+1. Select **Weekdone** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Weekdone, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Weekdone
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Weekdone Single Sign-On](#configure-weekdone-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Weekdone test user](#create-weekdone-test-user)** - to have a counterpart of Britta Simon in Weekdone that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Weekdone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Weekdone.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Weekdone, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Weekdone SSO](#configure-weekdone-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Weekdone test user](#create-weekdone-test-user)** - to have a counterpart of B.Simon in Weekdone that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Weekdone, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Weekdone** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Weekdone** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
-
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenant>/metadata`
To configure Azure AD single sign-on with Weekdone, perform the following steps:
b. In the **Reply URL** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenantname>`
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenantname>`
To configure Azure AD single sign-on with Weekdone, perform the following steps:
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
7. On the **Set up Weekdone** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Weekdone Single Sign-On
-
-To configure single sign-on on **Weekdone** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Weekdone support team](mailto:hello@weekdone.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Weekdone.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Weekdone**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Weekdone.
-2. In the applications list, select **Weekdone**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Weekdone**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Weekdone link in the Applications list](common/all-applications.png)
+## Configure Weekdone SSO
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Weekdone** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [Weekdone support team](mailto:hello@weekdone.com). They configure this setting to have the SAML SSO connection set properly on both sides.
### Create Weekdone test user
In this section, a user called Britta Simon is created in Weekdone. Weekdone sup
>[!NOTE] >If you need to create a user manually, you need to contact the [Weekdone Client support team](mailto:hello@weekdone.com).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Weekdone Sign-On URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Weekdone Sign-On URL directly and initiate the login flow from there.
-When you click the Weekdone tile in the Access Panel, you should be automatically signed in to the Weekdone for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Weekdone for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Weekdone tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Weekdone for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Weekdone, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:

```azurecli-interactive
az provider register --namespace Microsoft.ContainerService
```
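
To confirm the resource provider has finished registering before you continue, a status check such as the following sketch can help:

```azurecli-interactive
# Sketch: the state should read "Registered" before you create the cluster.
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
```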
-## Create an AKS cluster with API Server VNet Integration using Managed VNet
+## Create an AKS Private cluster with API Server VNet Integration using Managed VNet
AKS clusters with API Server VNet Integration can be configured in either managed VNet or bring-your-own VNet mode.
az aks create -n <cluster-name> \
Where `--enable-private-cluster` is a mandatory flag for a private cluster, and `--enable-apiserver-vnet-integration` configures API Server VNet integration for Managed VNet mode.
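
Put together, a minimal create command might look like the following sketch; the resource group, cluster name, and location are placeholders:

```azurecli-interactive
# Sketch: create a private AKS cluster with API Server VNet Integration in managed VNet mode.
az aks create --resource-group <resource-group> \
  --name <cluster-name> \
  --location <location> \
  --enable-private-cluster \
  --enable-apiserver-vnet-integration
```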
-## Create an AKS cluster with API Server VNet Integration using bring-your-own VNet
+## Create an AKS Private cluster with API Server VNet Integration using bring-your-own VNet
When using bring-your-own VNet, an API server subnet must be created and delegated to `Microsoft.ContainerService/managedClusters`. This grants the AKS service permissions to inject the API server pods and internal load balancer into that subnet. The subnet may not be used for any other workloads, but may be used for multiple AKS clusters located in the same virtual network. An AKS cluster requires between 2 and 7 IP addresses, depending on cluster scale. The minimum supported API server subnet size is a /28.
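
For example, creating and delegating an API server subnet with the Azure CLI might look like the following sketch (resource names and the address prefix are placeholders; note the /28 minimum mentioned above):

```azurecli-interactive
# Sketch: create a dedicated API server subnet delegated to AKS.
az network vnet subnet create --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <apiserver-subnet-name> \
  --address-prefixes 10.225.0.0/28 \
  --delegations Microsoft.ContainerService/managedClusters
```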
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Title: Scale an Azure Kubernetes Service (AKS) cluster
description: Learn how to scale the number of nodes in an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/16/2020 Last updated : 06/29/2022 # Scale the node count in an Azure Kubernetes Service (AKS) cluster
-If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked `Ready` by the Kubernetes cluster before pods are scheduled on them.
+If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
## Scale the cluster nodes
+> [!NOTE]
+> Removing nodes from a node pool using the kubectl command is not supported. Doing so can create scaling issues with your AKS cluster.
+ ### [Azure CLI](#tab/azure-cli) First, get the *name* of your node pool using the [az aks show][az-aks-show] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
You can also autoscale `User` node pools to 0 nodes, by setting the `--min-count
To scale a user pool to 0, you can use the [Update-AzAksNodePool][update-azaksnodepool] command as an alternative to the above `Set-AzAksCluster` command, and set 0 as your node count.
-```azurepowershell-interactive
+```azurepowershell-interactive
Update-AzAksNodePool -Name <your node pool name> -ClusterName myAKSCluster -ResourceGroupName myResourceGroup -NodeCount 0 ```
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
If you're using Azure Firewall like on this [example](limit-egress-traffic.md#re
If you are using cluster autoscaler, when you start your cluster back up, your current node count may not be within the min and max range values you set. This behavior is expected. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
+## Windows containers have connectivity issues after a cluster upgrade operation
+
+For older clusters with Calico network policies applied before Windows Calico support, Windows Calico will be enabled by default after a cluster upgrade. After Windows Calico is enabled, you may have connectivity issues if the Calico network policies deny ingress/egress. You can mitigate this issue by creating a new Calico policy on the cluster that allows all ingress/egress for Windows using either PodSelector or IPBlock, as in the sketch below.
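+
+As a sketch of such a mitigation, the following allow-all policy assumes your Windows pods carry a hypothetical `os: windows` label in the `default` namespace; adjust the selector and namespace to your environment:
+
+```console
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-all-windows
+  namespace: default
+spec:
+  podSelector:
+    matchLabels:
+      os: windows       # hypothetical label on your Windows pods
+  policyTypes:
+  - Ingress
+  - Egress
+  ingress:
+  - {}                  # empty rule allows all ingress
+  egress:
+  - {}                  # empty rule allows all egress
+EOF
+```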
+ ## Azure Storage and AKS Troubleshooting ### Failure when setting uid and `GID` in mountOptions for Azure Disk
As a result, to mitigate this issue you can:
AKS is investigating the capability to mutate active labels on a node pool to improve this mitigation. - <!-- LINKS - internal --> [view-master-logs]: monitor-aks-reference.md#resource-logs [cluster-autoscaler]: cluster-autoscaler.md
aks Uptime Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/uptime-sla.md
Title: Azure Kubernetes Service (AKS) with Uptime SLA
description: Learn about the optional Uptime SLA offering for the Azure Kubernetes Service (AKS) API Server. Previously updated : 01/08/2021 Last updated : 06/29/2022 # Azure Kubernetes Service (AKS) Uptime SLA
-Uptime SLA is a tier to enable a financially backed, higher SLA for an AKS cluster. Clusters with Uptime SLA, also regarded as Paid tier in AKS REST APIs, come with greater amount of control plane resources and automatically scale to meet the load of your cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zones][availability-zones] and 99.9% of availability for clusters that don't use Availability Zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
+Uptime SLA is a tier to enable a financially backed, higher SLA for an AKS cluster. Clusters with Uptime SLA, also referred to as the [Paid SKU tier][paid-sku-tier] in AKS REST APIs, come with a greater amount of control plane resources and automatically scale to meet the load of your cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zones][availability-zones], and 99.9% availability for clusters that don't use Availability Zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
-AKS recommends use of Uptime SLA in production workloads to ensure availability of control plane components. Clusters on free tier by contrast come with fewer replicas and limited resources for the control plane and are not suitable for production workloads.
+AKS recommends using Uptime SLA for production workloads to ensure availability of control plane components. By contrast, clusters on the **Free SKU tier** support fewer replicas and limited resources for the control plane and are not suitable for production workloads.
-Customers can still create unlimited number of free clusters with a service level objective (SLO) of 99.5% and opt for the preferred SLO.
+You can still create an unlimited number of free clusters with a service level objective (SLO) of 99.5% and opt for the preferred SLO.
> [!IMPORTANT] > For clusters with egress lockdown, see [limit egress traffic](limit-egress-traffic.md) to open appropriate ports.
Uptime SLA is a paid feature and is enabled per cluster. Uptime SLA pricing is d
## Before you begin
-* Install the [Azure CLI](/cli/azure/install-azure-cli) version 2.8.0 or later
+[Azure CLI](/cli/azure/install-azure-cli) version 2.8.0 or later, installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Creating a new cluster with Uptime SLA
-To create a new cluster with the Uptime SLA, you use the Azure CLI.
+To create a new cluster with the Uptime SLA, you use the Azure CLI. Create a new cluster in an existing resource group or create a new one. To learn more about resource groups and working with them, see [managing resource groups using the Azure CLI][manage-resource-group-cli].
-The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node and enables the Uptime SLA. This operation takes several minutes to complete:
```azurecli-interactive
-# Create a resource group
-az group create --name myResourceGroup --location eastus
-```
-
-Use the [`az aks create`][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This operation takes several minutes to complete:
-
-```azurecli-interactive
-# Create an AKS cluster with uptime SLA
az aks create --resource-group myResourceGroup --name myAKSCluster --uptime-sla --node-count 1 ```
-After a few minutes, the command completes and returns JSON-formatted information about the cluster. The following JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
+After a few minutes, the command completes and returns JSON-formatted information about the cluster. The following example JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
```output },
After a few minutes, the command completes and returns JSON-formatted informatio
## Modify an existing cluster to use Uptime SLA
-You can optionally update your existing clusters to use Uptime SLA.
-
-If you created an AKS cluster with the previous steps, delete the resource group:
-
-```azurecli-interactive
-# Delete the existing cluster by deleting the resource group
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-Create a new resource group:
-
-```azurecli-interactive
-# Create a resource group
-az group create --name myResourceGroup --location eastus
-```
-
-Create a new cluster, and don't use Uptime SLA:
+You can update your existing clusters to use Uptime SLA.
-```azurecli-interactive
-# Create a new cluster without uptime SLA
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
-```
+> [!NOTE]
+> Updating your cluster to enable the Uptime SLA does not disrupt its normal operation or impact its availability.
-Use the [`az aks update`][az-aks-update] command to update the existing cluster:
+The following command uses the [az aks update][az-aks-update] command to update the existing cluster:
```azurecli-interactive # Update an existing cluster to use Uptime SLA az aks update --resource-group myResourceGroup --name myAKSCluster --uptime-sla ```
-The following JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
+This process takes several minutes to complete. When finished, the following example JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
```output },
The following JSON snippet shows the paid tier for the SKU, indicating your clus
## Opt out of Uptime SLA
-You can update your cluster to change to the free tier and opt out of Uptime SLA.
+At any time you can opt out of using the Uptime SLA by updating your cluster to change it back to the free tier.
-```azurecli-interactive
-# Update an existing cluster to opt out of Uptime SLA
- az aks update --resource-group myResourceGroup --name myAKSCluster --no-uptime-sla
-```
-
-## Clean up
+> [!NOTE]
+> Updating your cluster to stop using the Uptime SLA does not disrupt its normal operation or impact its availability.
-To avoid charges, clean up any resources you created. To delete the cluster, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
+The following command uses the [az aks update][az-aks-update] command to update the existing cluster:
```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
+ az aks update --resource-group myResourceGroup --name myAKSCluster --no-uptime-sla
```
-## Next steps
+This process takes several minutes to complete.
-Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.
+## Next steps
-Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
+- Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.
+- Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
<!-- LINKS - External --> [azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
<!-- LINKS - Internal --> [vm-skus]: ../virtual-machines/sizes.md
+[paid-sku-tier]: /rest/api/aks/managed-clusters/create-or-update#managedclusterskutier
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[manage-resource-group-cli]: /azure-resource-manager/management/manage-resource-groups-cli
[faq]: ./faq.md [availability-zones]: ./availability-zones.md [az-aks-create]: /cli/azure/aks?#az_aks_create
Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
[az-aks-update]: /cli/azure/aks#az_aks_update [az-group-delete]: /cli/azure/group#az_group_delete [private-clusters]: private-clusters.md
+[install-azure-cli]: /cli/azure/install-azure-cli
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Both implementations use Linux *IPTables* to enforce the specified policies. Pol
| Capability | Azure | Calico | ||-|--|
-| Supported platforms | Linux | Linux, Windows Server 2019 (preview) |
-| Supported networking options | Azure CNI | Azure CNI (Windows Server 2019 and Linux) and kubenet (Linux) |
+| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
+| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) |
| Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. |
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAM
Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools.
-If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with that meet the [Windows Server password requirements][windows-server-password]. To use Calico with Windows node pools, you also need to register the `Microsoft.ContainerService/EnableAKSWindowsCalico`.
-
-Register the `EnableAKSWindowsCalico` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"
-```
-
- You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password].
> [!IMPORTANT] > At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default. > > For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2.
-Calico networking policies with Windows nodes is currently in preview.
-- Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it to WINDOWS_USERNAME for use in a later command (remember that the commands in this article are entered into a BASH shell). ```azurecli-interactive
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md
Title: Migrate .NET apps to Azure App Service
-description: Discover .NET migration resources available to Azure App Service.
+description: A collection of .NET migration resources available to Azure App Service.
Previously updated : 03/29/2021 Last updated : 06/28/2022 ms.devlang: csharp
Azure App Service provides easy-to-use tools to quickly discover on-premises .NE
These tools are developed to support different kinds of scenarios, focused on discovery, assessment, and migration. Following is list of .NET migration tools and use cases.
-## Migrate from multiple servers at-scale (preview)
+## Migrate from multiple servers at-scale
-<!-- Intent: discover how to assess and migrate at scale. -->
+> [!NOTE]
+> [Learn how to migrate .NET apps to App Service using the .NET migration tutorial.](../migrate/tutorial-migrate-webapps.md)
+>
Azure Migrate recently announced at-scale, agentless discovery, and assessment of ASP.NET web apps. You can now easily discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine the web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources are found below.
+Once you have successfully assessed readiness, you should proceed with migration of ASP.NET web apps to Azure App Service.
+
+There are existing tools that enable migration of a standalone ASP.NET web app, or multiple ASP.NET web apps hosted on a single IIS server, as explained in [Migrate .NET apps to Azure App Service](../migrate/tutorial-migrate-webapps.md). With the introduction of the at-scale (bulk) migration feature integrated with Azure Migrate, you can now migrate multiple ASP.NET applications hosted on multiple on-premises IIS servers.
+
+Azure Migrate provides at-scale, agentless discovery and assessment of ASP.NET web apps. You can discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine the web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources are found below.
+
+Bulk migration provides the following key capabilities:
+
+- Bulk migration of ASP.NET web apps to Azure App Service multitenant or App Service Environment
+- Migrate ASP.NET web apps assessed as "Ready" and "Ready with Conditions"
+- Migrate up to five App Service plans (and associated web apps) as part of a single E2E migration flow
+- Ability to change the suggested SKU for the target App Service plan (for example, change the suggested Pv3 SKU to a Standard Pv2 SKU)
+- Ability to change the suggested web app packing density for the target App Service plan (add or remove web apps associated with an App Service plan)
+- Change the target name for App Service plans and/or web apps
+- Bulk edit migration settings/attributes
+- Download a CSV with details of the target web app and App Service plan names
+- Track migration progress using the ARM template deployment experience
+ ### At-scale migration resources | How-tos |
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
| [Create an Azure App Service assessment](../migrate/how-to-create-azure-app-service-assessment.md) | | [Tutorial to assess web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md) | | [Discover software inventory on on-premises servers with Azure Migrate](../migrate/how-to-discover-applications.md) |
+| [Migrate .NET apps to App Service](../migrate/tutorial-migrate-webapps.md) |
| **Blog** | | [Discover and assess ASP.NET apps at-scale with Azure Migrate](https://azure.microsoft.com/blog/discover-and-assess-aspnet-apps-atscale-with-azure-migrate/) | | **FAQ** |
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
## Migrate from an IIS server
-<!-- Intent: discover how to assess and migrate from a single IIS server -->
- You can migrate ASP.NET web apps from single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service). ## ASP.NET web app migration
-<!-- Intent: migrate a single web app -->
Using App Service Migration Assistant, you can [migrate your standalone on-premises ASP.NET web app onto Azure App Service](https://www.youtube.com/watch?v=9LBUmkUhmXU). App Service Migration Assistant is designed to simplify your journey to the cloud through a free, simple, and fast solution to migrate applications from on-premises to the cloud. For more information about the migration assistant tool, see the [FAQ](https://github.com/Azure/App-Service-Migration-Assistant/wiki).
app-service App Service Migration Assess Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-assess-net.md
+
+ Title: Assess .NET apps
+description: Assess .NET web apps before migrating to Azure App Service
+++ Last updated : 06/28/2022+
+ms.devlang: csharp
+++
+# At-scale assessment of .NET web apps
+
+Once you've discovered ASP.NET web apps, you should proceed to the next step: assessing these web apps. Assessment provides migration readiness and sizing recommendations based on properties you define. Below is a list of key assessment capabilities:
+
+- Modify assessment properties as per your requirements, such as target Azure region, application isolation requirements, and reserved instance pricing.
+- Provide an App Service SKU recommendation and display monthly cost estimates.
+- Provide per-web-app migration readiness information, with detailed information on blockers and errors.
+
+You can create multiple assessments for the same web apps with different sets of assessment properties.
+
+For more information on web apps assessment, see:
+- [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate)
+- [Create an Azure App Service assessment](../migrate/how-to-create-azure-app-service-assessment.md)
+- [Tutorial to assess web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md)
+- [Azure App Service assessments in Azure Migrate Discovery and assessment tool](../migrate/concepts-azure-webapps-assessment-calculation.md)
+- [Assessment best practices in Azure Migrate Discovery and assessment tool](../migrate/best-practices-assessment.md)
++
+Next steps:
+[At-scale migration of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
app-service App Service Migration Discover Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-discover-net.md
+
+ Title: Discover .NET apps to Azure App Service
+description: Discover .NET migration resources available to Azure App Service.
+++ Last updated : 03/29/2021+
+ms.devlang: csharp
+++
+# At-scale discovery of .NET web apps
+
+For ASP.NET web app discovery, you need to either install a new Azure Migrate appliance or upgrade an existing Azure Migrate appliance.
+
+Once the appliance is configured, Azure Migrate initiates the discovery of web apps deployed on IIS web servers hosted within your on-premises VMware environment. Discovery of ASP.NET web apps provides the following key capabilities:
+
+- Agentless discovery of up to 20,000 web apps with a single Azure Migrate appliance
+- A rich, interactive dashboard with a list of IIS web servers and underlying VM infrastructure details. Web app discovery surfaces information such as:
+ - web app name
+ - web server type and version
+ - URLs
+ - binding port
+ - application pool
+- If web app discovery fails, the discovery dashboard allows easy navigation to review relevant error messages, possible causes of failure, and suggested remediation actions
+
+For more information about web app discovery, see:
+
+- [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate)
+- [Discover and assess ASP.NET apps at-scale with Azure Migrate](https://azure.microsoft.com/blog/discover-and-assess-aspnet-apps-atscale-with-azure-migrate/)
+- [Discover software inventory on on-premises servers with Azure Migrate](../migrate/how-to-discover-applications.md)
+- [Discover web apps and SQL Server instances](../migrate/how-to-discover-sql-existing-project.md)
++
+Next steps:
+[At-scale assessment of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
A few features that were available in earlier versions of App Service Environmen
- Send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25. - Monitor your traffic with Network Watcher or network security group (NSG) flow logs.-- Configure an IP-based Transport Layer Security (TLS) or Secure Sockets Layer (SSL) binding with your apps.
+- Configure individual custom domain [IP SSL bindings](../configure-ssl-bindings.md#create-binding) with your apps.
- Configure a custom domain suffix. - Perform a backup and restore operation on a storage account behind a firewall.
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
1. Select **Create**. 1. On the **Basics** tab, configure a resource group, name, and region for the Private Endpoint. Select **Next**. 1. On the **Resource** tab, select **Next**.
-1. On the **Virtual Network** tab, configure a virtual network and subnet where the private endpoint network interface should be provisioned to. Configure whether the private endpoint should have a dynamic or static IP address. Last, configure if you want a new private link zone to be created to automatically manage IP addressing. Select **Next**.
+1. On the **Virtual Network** tab, configure a virtual network and subnet where the private endpoint network interface should be provisioned. Configure whether the private endpoint should have a dynamic or static IP address. Select **Next**.
1. On the **Tags** tab, optionally configure resource tags. Select **Next**. 1. Select **Create**.
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
recommendations: false
# Form Recognizer custom template model
-Custom templateΓÇöformerly custom form-are easy-to-train models that accurately extract labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
+Custom template models (formerly custom form) are easy-to-train models that accurately extract labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable for extracting fields from highly structured documents with defined visual templates.
Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
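
As an illustration of the training flow, the sketch below builds a custom template model with the v3 REST API, assuming your labeled training documents sit in an Azure Blob container; the endpoint, key, model ID, and SAS URL are placeholders:

```console
# Sketch: build a custom template model (v3 preview REST API).
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels:build?api-version=2022-06-30-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"modelId": "my-template-model", "buildMode": "template", "azureBlobSource": {"containerUrl": "<sas-url-to-training-container>"}}'
```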
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The [Read API](concept-read.md) supports detecting the following languages in yo
> extracted for a given language, see previous sections.
+> [!NOTE]
+> **Detected languages vs extracted languages**
+>
+> This section lists the languages the Read model can detect in your documents, if present. Note that this list differs from the list of languages we support for text extraction, which is specified in the sections above for each model.
+ | Language | Code | ||| | Afrikaans | `af` |
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 06/13/2022 Last updated : 06/29/2022 <!-- markdownlint-disable MD024 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## June 2022
+### [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June Update
+
+The June release is the latest update to the Form Recognizer Studio. There are considerable UX and accessibility improvements addressed in this update:
+
+* 🆕 **Code sample for JavaScript and C#**. The Studio code tab now includes sample code written in JavaScript and C# in addition to the existing Python code.
+* 🆕 **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload UI.
+* 🆕 **New feature for custom projects**. Custom projects now support creating a storage account and file directories when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
+ ### Form Recognizer v3.0 preview release
-The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities. There are considerable updates across the feature APIs:
+The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities and presents extensive updates across the feature APIs:
* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction). * [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
azure-arc Create Data Controller Indirect Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md
Follow the steps below to create an Azure Arc data controller using the Azure po
Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: > [!NOTE]
-> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name.
+> The example commands below assume that you created a data controller named `arc-dc` and a Kubernetes namespace named `arc`. If you used different values, update the script accordingly.
```console
-kubectl get datacontroller/arc --namespace arc
+kubectl get datacontroller/arc-dc --namespace arc
``` ```console
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
Once you have run the command, continue on to [Monitoring the creation status](#
Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: > [!NOTE]
-> The example commands below assume that you created a data controller and Kubernetes namespace with the name `arc`. If you used a different namespace/data controller name, you can replace `arc` with your name.
+> The example commands below assume that you created a data controller named `arc-dc` and a Kubernetes namespace named `arc`. If you used different values, update the script accordingly.
```console
-kubectl get datacontroller/arc --namespace arc
+kubectl get datacontroller/arc-dc --namespace arc
``` ```console
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Title: Create a data controller using Kubernetes tools
-description: Create a data controller using Kubernetes tools
+ Title: Create a Data Controller using Kubernetes tools
+description: Create a Data Controller using Kubernetes tools
Last updated 11/03/2021
-# Create Azure Arc-enabled data controller using Kubernetes tools
+# Create Azure Arc data controller using Kubernetes tools
-A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller.
## Prerequisites Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
-To create the data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create the Azure Arc data controller using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) > [!NOTE]
-> Some of the steps to create the data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
+> Some of the steps to create the Azure Arc data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
### Cleanup from past installations
-If you installed the data controller in the past on the same cluster and deleted the data controller, there may be some cluster level objects that would still need to be deleted.
+If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get clusterrolebinding`.
-Run the following commands to delete the data controller cluster level objects:
+Run the following commands to delete the Azure Arc data controller cluster level objects:
```console # Cleanup azure arc data service artifacts
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
## Overview
-Creating the data controller has the following high level steps:
+Creating the Azure Arc data controller has the following high level steps:
-1. Create a namespace in which the data controller will be created.
-1. Create the deployer service account.
+ > [!IMPORTANT]
+ > Some of the steps below require Kubernetes cluster administrator permissions.
+
+1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
+1. Create a namespace in which the data controller will be created.
1. Create the bootstrapper service including the replica set, service account, role, and role binding. 1. Create a secret for the data controller administrator username and password.
+1. Create the webhook deployment job, cluster role and cluster role binding.
1. Create the data controller.
+## Create the custom resource definitions
+
+Run the following command to create the custom resource definitions.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
+
+```console
+kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
+```
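To confirm that the definitions were registered, you can list them. A quick check (the exact set of CRD names can vary by release):

```console
kubectl get crd | grep arcdata.microsoft.com
```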
+ ## Create a namespace in which the data controller will be created

Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. In this example and the remainder of the examples in this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout.
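For example, a minimal command assuming the namespace name `arc`:

```console
kubectl create namespace arc
```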
openshift.io/sa.scc.supplemental-groups: 1000700001/10000
openshift.io/sa.scc.uid-range: 1000700001/10000
```
-If other people who are not cluster administrators will be using this namespace, create a namespace admin role and grant that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
--
-## Create the deployer service account
-
- > [!IMPORTANT]
- > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
-
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
-
-```console
-kubectl apply --namespace arc -f arcdata-deployer.yaml
-```
-
+If other people who are not cluster administrators will be using this namespace, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
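As a sketch of that recommendation, a role binding that grants the built-in `admin` cluster role within the namespace could be created like this (the binding name and user are placeholders; your organization may prefer one of the custom roles from the linked repository):

```console
kubectl create rolebinding arc-ns-admin --clusterrole=admin --user=<username> --namespace=arc
```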
## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
-Run the following command to create a "bootstrap" job to install the bootstrapper along with related cluster-scope and namespaced objects, such as custom resource definitions (CRDs), the service account and bootstrapper role.
+Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/bootstrap.yaml
+kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
```
-The [uninstall.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/uninstall.yaml) is for uninstalling the bootstrapper and related Kubernetes objects, except the CRDs.
-
-Verify that the bootstrapper pod is running using the following command.
+Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
```console
-kubectl get pod --namespace arc -l app=bootstrapper
+kubectl get pod --namespace arc
```
-If the status is not _Running_, run the command a few times until the status is _Running_.
-
-The bootstrap.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
+The bootstrapper.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment does not have direct access to the Microsoft Container Registry, you can do the following:
- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).
-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry.
-- Change the image URL for the bootstrapper image in the bootstrap.yaml file.
-- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.
+- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) for your private container registry.
+- Add an image pull secret to the bootstrapper container. See example below.
+- Change the image location for the bootstrapper image. See example below.
+
+The example below assumes that you created an image pull secret named `arc-private-registry`.
+
+```yaml
+# Only the relevant part of the bootstrapper.yaml template file is shown here
+ spec:
+ serviceAccountName: sa-bootstrapper
+ nodeSelector:
+ kubernetes.io/os: linux
+ imagePullSecrets:
+ - name: arc-private-registry #Create this image pull secret if you are using a private container registry
+ containers:
+ - name: bootstrapper
+ image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.1.0_2021-11-02 #Change this registry location if you are using a private container registry.
+ imagePullPolicy: Always
+```
## Create secrets for the metrics and logs dashboards
kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.y
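The login secret can also be created imperatively rather than from a file. A minimal sketch, assuming the secret name `controller-login-secret` from the command above and conventional `username`/`password` keys (verify the exact key names the data controller expects):

```console
kubectl create secret generic controller-login-secret --namespace arc \
  --from-literal=username=<admin-username> \
  --from-literal=password=<admin-password>
```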
Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify during Kubernetes native tools deployment](monitor-certificates.md).
+## Create the webhook deployment job, cluster role and cluster role binding
+
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
+
+Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
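If you'd rather script the substitution than edit by hand, something like the following works on Linux (assuming GNU sed, the namespace `arc`, and that the template was saved as `web-hook.yaml`):

```console
sed -i 's/{{namespace}}/arc/g' web-hook.yaml
```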
+
+Run the following command to create the cluster role and cluster role bindings.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
+
+```console
+kubectl create -n arc -f <path to the edited template file on your computer>
+```
## Create the data controller

Now you are ready to create the data controller itself.
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
Edit the following as needed:
- **name**: The default name of the data controller is `arc`, but you can change it if you want.
- **displayName**: Set this to the same value as the name attribute at the top of the file.
- **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.
-- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.
+- **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.
- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.
- **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.
- **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
If you encounter any troubles with creation, please see the [troubleshooting gui
## Next steps

- [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md)
-- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
+- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
Run the notebook by clicking **Run All**.
Follow the instructions to [Arc-enable the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
-Open the Azure portal by using this special URL: [https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash](https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash).
+Open the Azure portal by using this special URL: [https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home).
Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal.
At this time, pre-release testing is supported for certain customers and partner
## Next steps
-[Release notes - Azure Arc-enabled data services](release-notes.md)
+[Release notes - Azure Arc-enabled data services](release-notes.md)
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools
+ Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
Last updated 05/27/2022
-# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
During a data controller upgrade, portions of the data control plane such as Cus
In this article, you'll apply a .yaml file to:
-1. Create the service account for running upgrade.
-1. Upgrade the bootstrapper.
-1. Upgrade the data controller.
+1. Specify a service account.
+1. Set the cluster roles.
+1. Set the cluster role bindings.
+1. Set the job.
> [!NOTE]
> Some of the data services tiers and modes are generally available and some are in preview.
In this article, you'll apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the data controller, you'll need:
+Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster
- An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag: v1.0.0_2021
## Install tools
-To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
+To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you're familiar with those tools and Kubernetes yaml/json.
Found 2 valid versions. The current datacontroller version is <current-version>
...
```
+## Create or download .yaml file
+
+To upgrade the data controller, you'll apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available on GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
+
+You can download the file, along with other Azure Arc-related demonstration files, by cloning the repository. For example:
+
+```console
+git clone https://github.com/microsoft/azure_arc
+```
+
+For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub docs.
+
+The following steps use files from the repository.
+
+In the yaml file, you'll replace `{{namespace}}` with your namespace.
+ ## Upgrade data controller

This section shows how to upgrade an indirectly connected data controller.
This section shows how to upgrade an indirectly connected data controller.
### Upgrade
-You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
+### Specify the service account
-### Create the service account for running upgrade
+The upgrade requires an elevated service account for the upgrade job.
- > [!IMPORTANT]
- > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
+To specify the service account:
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+1. Describe the service account in a .yaml file. The following example names the `ServiceAccount` `sa-arc-upgrade-worker`:
-```console
-kubectl apply --namespace arc -f arcdata-deployer.yaml
-```
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
+1. Edit the file as needed.
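For orientation, a service account of that shape is only a few lines of YAML; this sketch uses the name from the example above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-arc-upgrade-worker
```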
-### Upgrade the bootstrapper
+### Set the cluster roles
-The following command creates a job for upgrading the bootstrapper and related Kubernetes objects.
+A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
-```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml
-```
+1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role named `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
+
+1. Edit the file as needed.
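A cluster role matching that description ("all API groups, resources, and verbs") would look like the following sketch; the name comes from the example above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: arc:cr-upgrade-worker
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```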
+
+### Set the cluster role binding
+
+A cluster role binding (`ClusterRoleBinding`) links the service account and the cluster role.
+
+1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
+
+1. Edit the file as needed.
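A binding that ties the service account to the cluster role might look like this sketch (the binding name here is a placeholder; replace the namespace with yours):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: arc:crb-upgrade-worker  # placeholder name
subjects:
- kind: ServiceAccount
  name: sa-arc-upgrade-worker
  namespace: arc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: arc:cr-upgrade-worker
```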
+
+### Specify the job
+
+A job creates a pod to execute the upgrade.
+
+1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
+
+1. Edit the file for your environment.
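For orientation only, a job of that shape looks roughly like the sketch below. The image reference is an assumption; take the real registry, repository, tag, and arguments from the upgrade-indirect-k8s.yaml template referenced above.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: arc-bootstrapper-upgrade-job
spec:
  template:
    spec:
      serviceAccountName: sa-arc-upgrade-worker
      restartPolicy: Never
      containers:
      - name: bootstrapper-upgrade
        # Illustrative image reference; the template file carries the real values.
        image: mcr.microsoft.com/arcdata/arc-bootstrapper:<target-image-tag>
```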
### Upgrade the data controller
-The following command patches the image tag to upgrade the data controller.
+Specify the image tag to upgrade the data controller to.
-```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml
-```
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::
+### Apply the resources
+
+Run the following kubectl command to apply the resources to your cluster.
+
+```bash
+kubectl apply -n <namespace> -f upgrade-indirect-k8s.yaml
+```
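To confirm the job was created and watch it run, a command like the following can help (the job name comes from the example above; replace the namespace with yours):

```console
kubectl get job arc-bootstrapper-upgrade-job -n <namespace> --watch
```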
## Monitor the upgrade status
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Last updated 03/03/2021 -- description: "Control agent upgrades for Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, upgrade"
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Last updated 04/05/2021 -- description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Last updated 06/03/2022 -- description: "Use Cluster Connect to securely connect to Azure Arc-enabled Kubernetes clusters"
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:

  ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
  ```

- If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:

  ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
  ```

- If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):

   ```console
- kubectl create serviceaccount admin-user
+ kubectl create serviceaccount demo-user
   ```

1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:

   ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
```
-1. Get the service account's token using the following commands:
+1. Create a service account token:
```console
- SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ kubectl apply -f - <<EOF
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: demo-user-secret
+ annotations:
+ kubernetes.io/service-account.name: demo-user
+ type: kubernetes.io/service-account-token
+ EOF
   ```

   ```console
- TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
   ```

### [Azure PowerShell](#tab/azure-powershell)
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):

   ```console
- kubectl create serviceaccount admin-user
+ kubectl create serviceaccount demo-user
   ```

1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:

   ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
```
-1. Get the service account's token using the following commands:
+1. Create a service account token by applying a secret manifest:
```console
- $SECRET_NAME = (kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ kubectl apply -f demo-user-secret.yaml
+ ```
+
+ Contents of `demo-user-secret.yaml`:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: demo-user-secret
+ annotations:
+ kubernetes.io/service-account.name: demo-user
+ type: kubernetes.io/service-account-token
   ```

   ```console
- $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret $SECRET_NAME -o jsonpath='{$.data.token}'))))
+ $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret demo-user-secret -o jsonpath='{$.data.token}'))))
```
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
Last updated 03/03/2021 -- description: "This article provides an architectural overview of Azure Arc-enabled Kubernetes agents" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
Last updated 04/05/2021 -- description: "This article provides a conceptual overview of Azure RBAC capability on Azure Arc-enabled Kubernetes"
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Last updated 04/05/2021 -- description: "This article provides a conceptual overview of Cluster Connect capability of Azure Arc-enabled Kubernetes"
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Last updated 05/24/2022 -- description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes." keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
azure-arc Conceptual Connectivity Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md
Last updated 11/23/2021 -- description: "This article provides an overview of the connectivity modes supported by Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md
Last updated 05/25/2021 -- description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc-enabled Kubernetes"
azure-arc Conceptual Data Exchange https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-data-exchange.md
Last updated 11/23/2021 -- description: "This article provides information on data exchanged between Azure Arc-enabled Kubernetes cluster and Azure" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md
Last updated 11/24/2021 -- description: "This article provides a conceptual overview of cluster extensions capability of Azure Arc-enabled Kubernetes"
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes"
Last updated 10/19/2021 -- description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters"
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Last updated 05/24/2022 -- description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
azure-arc Kubernetes Resource View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md
Last updated 10/31/2021 -- description: Learn how to interact with Kubernetes resources to manage an Azure Arc-enabled Kubernetes cluster from the Azure portal.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Title: "Overview of Azure Arc-enabled Kubernetes"
-- Last updated 05/03/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes."
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/plan-at-scale-deployment.md
Last updated 04/12/2021 -- description: Onboard large number of clusters to Azure Arc-enabled Kubernetes for configuration management
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Last updated 03/03/2021 -- description: "Describes Arc validation program for Kubernetes distributions" keywords: "Kubernetes, Arc, Azure, K8s, validation"
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 05/11/2022 Last updated : 06/29/2022
The azcmagent tool is used to configure the Azure Connected Machine agent during
* **disconnect** - Disconnect the machine from Azure Arc.
* **show** - View agent status and its configuration properties (Resource Group name, Subscription ID, version, etc.), which can help when troubleshooting an issue with the agent. Include the `-j` parameter to output the results in JSON format.
* **config** - View and change settings to enable features and control agent behavior.
-* **check** - Validate network connectivity.
* **logs** - Create a .zip file in the current directory containing logs to assist you while troubleshooting.
* **version** - Show the Connected Machine agent version.
* **-useStderr** - Direct error and verbose output to stderr. Include the `-json` parameter to output the results in JSON format.
azure-fluid-relay Azure Function Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md
fluid.url: https://fluidframework.com/docs/build/tokenproviders/
In the [Fluid Framework](https://fluidframework.com/), TokenProviders are responsible for creating and signing tokens that the `@fluidframework/azure-client` uses to make requests to the Azure Fluid Relay service. The Fluid Framework provides a simple, insecure TokenProvider for development purposes, aptly named **InsecureTokenProvider**. Each Fluid service must implement a custom TokenProvider based on the particular service's authentication and security considerations.
-Each Azure Fluid Relay service tenant you create is assigned a **tenant ID** and its own unique **tenant secret key**. The secret key is a **shared secret**. Your app/service knows it, and the Azure Fluid Relay service knows it. TokenProviders must know the secret key to sign requests, but the secret key cannot be included in client code.
+Each Azure Fluid Relay resource you create is assigned a **tenant ID** and its own unique **tenant secret key**. The secret key is a **shared secret**. Your app/service knows it, and the Azure Fluid Relay service knows it. TokenProviders must know the secret key to sign requests, but the secret key cannot be included in client code.
## Implement an Azure Function to sign tokens
-One option for building a secure token provider is to create HTTPS endpoint and create a TokenProvider implementation that makes authenticated HTTPS requests to that endpoint to retrieve tokens. This enables you to store the *tenant secret key* in a secure location, such as [Azure Key Vault](../../key-vault/general/overview.md).
+One option for building a secure token provider is to create an HTTPS endpoint and a TokenProvider implementation that makes authenticated HTTPS requests to that endpoint to retrieve tokens. This path enables you to store the *tenant secret key* in a secure location, such as [Azure Key Vault](../../key-vault/general/overview.md).
The complete solution has two pieces:
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRe
export default httpTrigger; ```
-The `generateToken` function, found in the `@fluidframework/azure-service-utils` package, generates a token for the given user that is signed using the tenant's secret key. This enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve tokens.
+The `generateToken` function, found in the `@fluidframework/azure-service-utils` package, generates a token for the given user that is signed using the tenant's secret key. This method enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve tokens.
### Deploy the Azure Function
Azure Functions can be deployed in several ways. See the **Deploy** section of t
### Implement the TokenProvider
-TokenProviders can be implemented in many ways, but must implement two separate API calls: `fetchOrdererToken` and `fetchStorageToken`. These APIs are responsible for fetching tokens for the Fluid orderer and storage services respectively. Both functions return `TokenResponse` objects representing the token value. The Fluid Framework runtime calls these two APIs as needed to retrieve tokens.
-
+TokenProviders can be implemented in many ways, but must implement two separate API calls: `fetchOrdererToken` and `fetchStorageToken`. These APIs are responsible for fetching tokens for the Fluid orderer and storage services respectively. Both functions return `TokenResponse` objects representing the token value. The Fluid Framework runtime calls these two APIs as needed to retrieve tokens. Note that while your application code uses only one service endpoint to establish connectivity with the Azure Fluid Relay service, the azure-client, working with the service, internally translates that one endpoint into an orderer and storage endpoint pair. Those two endpoints are used from that point on for that session. That is why you need to implement two separate functions for fetching tokens, one for each.
To ensure that the tenant secret key is kept secure, it is stored in a secure backend location and is only accessible from within the Azure Function. To retrieve tokens, you need to make a `GET` or `POST` request to your deployed Azure Function, providing the `tenantID`, `documentId`, and `userID`/`userName`. The Azure Function is responsible for the mapping between the tenant ID and a tenant key secret to appropriately generate and sign the token.
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
import { AzureClient, AzureFunctionTokenProvider } from "@fluidframework/azure-c
const config = { tenantId: "myTenantId", tokenProvider: new AzureFunctionTokenProvider("https://myAzureAppUrl"+"/api/GetAzureToken", { userId: "test-user",userName: "Test User" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
import { AzureClient } from "@fluidframework/azure-client";
const config = { tenantId: "myTenantId", tokenProvider: new AzureFunctionTokenProvider("https://myStaticWebAppUrl/api/GetAzureToken", { userId: "test-user",userName: "Test User" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The following table explains the binding configuration properties that you set i
The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding.
+
::: zone-end

## Next steps
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python" The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The output binding uses the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement, which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+
::: zone-end

## Next steps
azure-government Compliance Tic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/compliance-tic.md
Previously updated : 12/01/2020
+recommendations: false
Last updated : 06/28/2022

# Trusted Internet Connections guidance
-This article explains how U.S. government agencies can use security features in Azure cloud services to help achieve compliance with the Trusted Internet Connections (TIC) initiative. It applies to both Azure and Azure Government cloud service environments and covers TIC implications for Azure infrastructure as a service (IaaS) and Azure platform as a service (PaaS) cloud service models.
+This article explains how you can use security features in Azure cloud services to help achieve compliance with the Trusted Internet Connections (TIC) initiative. It applies to both Azure and Azure Government cloud service environments, and covers TIC implications for Azure infrastructure as a service (IaaS) and Azure platform as a service (PaaS) cloud service models.
## Trusted Internet Connections overview
-The purpose of the TIC initiative is to enhance network security across the U.S. federal government. This objective was initially realized by consolidating external connections and routing all network traffic through approved devices at TIC access points. In the intervening years, cloud computing became well established, paving the way for modern security architectures and a shift away from the primary focus on perimeter security. Accordingly, the TIC initiative evolved to provide federal agencies with increased flexibility to use modern security capabilities.
+The purpose of the TIC initiative is to enhance network security across the US federal government. This objective was initially realized by consolidating external connections and routing all network traffic through approved devices at TIC access points. In the intervening years, cloud computing became well established, paving the way for modern security architectures and a shift away from the primary focus on perimeter security. Accordingly, the TIC initiative evolved to provide federal agencies with increased flexibility to use modern security capabilities.
### TIC 2.0 guidance
-The TIC initiative was originally outlined in the Office of Management and Budget (OMB) [Memorandum M-08-05](https://georgewbush-whitehouse.archives.gov/omb/memoranda/fy2008/m08-05.pdf) released in November 2007, and referred to in this article as TIC 2.0 guidance. The TIC program was envisioned to improve federal network perimeter security and incident response functions. TIC was originally designed to perform network analysis of all inbound and outbound .gov traffic to identify specific patterns in network data flows and uncover behavioral anomalies, such as botnet activity. Agencies were mandated to consolidate their external network connections and route all traffic through intrusion detection and prevention devices known as EINSTEIN. The devices are hosted at a limited number of network endpoints, which are referred to as *trusted internet connections*.
+The TIC initiative was originally outlined in the Office of Management and Budget (OMB) [Memorandum M-08-05](https://georgewbush-whitehouse.archives.gov/omb/memoranda/fy2008/m08-05.pdf) released in November 2007, and referred to in this article as TIC 2.0 guidance. The TIC program was envisioned to improve federal network perimeter security and incident response functions. TIC was originally designed to perform network analysis of all inbound and outbound .gov traffic. The goal was to identify specific patterns in network data flows and uncover behavioral anomalies, such as botnet activity. Agencies were mandated to consolidate their external network connections and route all traffic through intrusion detection and prevention devices known as EINSTEIN. The devices are hosted at a limited number of network endpoints, which are referred to as *trusted internet connections*.
The objective of TIC is for agencies to know:
Under TIC 2.0, all agency external connections must route through an OMB-approved TIC. Federal agencies are required to participate in the TIC program as a TIC Access Provider (TICAP), or by contracting services with one of the major Tier 1 internet service providers. These providers are referred to as Managed Trusted Internet Protocol Service (MTIPS) providers. TIC 2.0 includes mandatory critical capabilities that are performed by the agency and MTIPS provider. In TIC 2.0, the EINSTEIN version 2 intrusion detection and EINSTEIN version 3 accelerated (3A) intrusion prevention devices are deployed at each TICAP and MTIPS. The agency establishes a *Memorandum of Understanding* with the Department of Homeland Security (DHS) to deploy EINSTEIN capabilities to federal systems.
-As part of its responsibility to protect the .gov network, DHS requires the raw data feeds of agency net flow data to correlate incidents across the federal enterprise and perform analyses by using specialized tools. DHS routers provide the ability to collect IP network traffic as it enters or exits an interface. Network administrators can analyze the net flow data to determine the source and destination of traffic, the class of service, and other parameters. Net flow data is considered to be "non-content data" similar to the header, source IP, destination IP, and so on. Non-content data allows DHS to learn about the content: who was doing what and for how long.
+As part of its responsibility to protect the .gov network, DHS requires the raw data feeds of agency net flow data to correlate incidents across the federal enterprise and perform analyses by using specialized tools. DHS routers enable collection of IP network traffic as it enters or exits an interface. Network administrators can analyze the net flow data to determine the source and destination of traffic, the class of service, and other parameters. Net flow data is considered to be "non-content data" similar to the header, source IP, destination IP, and so on. Non-content data allows DHS to learn about the content: who was doing what and for how long.
-The TIC 2.0 initiative also includes security policies, guidelines, and frameworks that assume an on-premises infrastructure. As government agencies move to the cloud to achieve cost savings, operational efficiency, and innovation, the implementation requirements of TIC 2.0 can slow down network traffic. The speed and agility with which government users can access their cloud-based data is limited as a result.
+The TIC 2.0 initiative also includes security policies, guidelines, and frameworks that assume an on-premises infrastructure. Government agencies move to the cloud to achieve cost savings, operational efficiency, and innovation. However, the implementation requirements of TIC 2.0 can slow down network traffic. The speed and agility with which government users can access their cloud-based data is limited as a result.
### TIC 3.0 guidance
-In September 2019, OMB released [Memorandum M-19-26](https://www.whitehouse.gov/wp-content/uploads/2019/09/M-19-26.pdf) that rescinded prior TIC-related memorandums and introduced [TIC 3.0 guidance](https://www.cisa.gov/trusted-internet-connections). The previous OMB memorandums required agency traffic to flow through a physical TIC access point, which has proven to be an obstacle to the adoption of cloud-based infrastructure. For example, TIC 2.0 focused exclusively on perimeter security by channeling all incoming and outgoing agency data through a TIC access point. In contrast, TIC 3.0 recognizes the need to account for multiple and diverse security architectures rather than a single perimeter security approach. This flexibility allows agencies to choose how to implement security capabilities in a way that fits best into their overall network architecture, risk management approach, and more.
+In September 2019, OMB released [Memorandum M-19-26](https://www.whitehouse.gov/wp-content/uploads/2019/09/M-19-26.pdf) that rescinded prior TIC-related memorandums and introduced [TIC 3.0 guidance](https://www.cisa.gov/trusted-internet-connections). The previous OMB memorandums required agency traffic to flow through a physical TIC access point, which has proven to be an obstacle to the adoption of cloud-based infrastructure. For example, TIC 2.0 focused exclusively on perimeter security by channeling all incoming and outgoing agency data through a TIC access point. In contrast, TIC 3.0 recognizes the need to account for multiple and diverse security architectures rather than a single perimeter security approach. This flexibility allows agencies to choose how to implement security capabilities in a way that fits best into their overall network architecture, risk management approach, and more.
-To enable this flexibility, the Cybersecurity & Infrastructure Security Agency (CISA) works with federal agencies to conduct pilots in diverse agency environments, which results in the development of TIC 3.0 use cases. For TIC 3.0 implementations, CISA encourages agencies to leverage [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents) in conjunction with the National Institute of Standards and Technology (NIST) [Cybersecurity Framework](https://www.nist.gov/cyberframework) (CSF) and [NIST SP 800-53](https://nvd.nist.gov/800-53/Rev4) *Security and Privacy Controls for Federal Information Systems and Organizations*. These documents can help agencies design a secure network architecture and determine appropriate requirements from cloud service providers.
+To enable this flexibility, the Cybersecurity & Infrastructure Security Agency (CISA) works with federal agencies to conduct pilots in diverse agency environments, which result in the development of TIC 3.0 use cases. For TIC 3.0 implementations, CISA encourages agencies to use [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents) with the National Institute of Standards and Technology (NIST) [Cybersecurity Framework](https://www.nist.gov/cyberframework) (CSF) and [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final) *Security and Privacy Controls for Federal Information Systems and Organizations*. These documents can help agencies design a secure network architecture and determine appropriate requirements from cloud service providers.
-TIC 3.0 complements other federal initiatives focused on cloud adoption such as the Federal Risk and Authorization Management Program (FedRAMP), which is based on the NIST SP 800-53 standard augmented by FedRAMP controls and control enhancements. Agencies can leverage existing Azure and Azure Government FedRAMP High provisional authorizations (P-ATO) issued by the FedRAMP Joint Authorization Board, as well as Azure and Azure Government support for the NIST CSF, as described in [Azure compliance documentation](../../compliance/index.yml). To assist agencies with TIC 3.0 implementation when selecting cloud-based security capabilities, CISA has mapped TIC capabilities to the NIST CSF and NIST SP 800-53. For example, TIC 3.0 security objectives can be mapped to the five functions of the NIST CSF, including Identify, Protect, Detect, Respond, and Recover. The TIC security capabilities are mapped to the NIST CSF in the TIC 3.0 Security Capabilities Catalog available from [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
+TIC 3.0 complements other federal initiatives focused on cloud adoption such as the Federal Risk and Authorization Management Program (FedRAMP), which is based on the NIST SP 800-53 standard augmented by FedRAMP controls and control enhancements. Agencies can use existing Azure and Azure Government [FedRAMP High](/azure/compliance/offerings/offering-fedramp) provisional authorizations to operate (P-ATO) issued by the FedRAMP Joint Authorization Board. They can also use Azure and Azure Government support for the [NIST CSF](/azure/compliance/offerings/offering-nist-csf). To assist agencies with TIC 3.0 implementation when selecting cloud-based security capabilities, CISA has mapped TIC capabilities to the NIST CSF and NIST SP 800-53. For example, TIC 3.0 security objectives can be mapped to the five functions of the NIST CSF, including Identify, Protect, Detect, Respond, and Recover. The TIC security capabilities are mapped to the NIST CSF in the TIC 3.0 Security Capabilities Catalog available from [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
-TIC 3.0 is non-prescriptive cybersecurity guidance developed to provide agencies with flexibility to implement security capabilities that match their specific risk tolerance levels. While the guidance requires agencies to comply with all applicable telemetry requirements such as the National Cybersecurity Protection System (NCPS) and Continuous Diagnosis and Mitigation (CDM), TIC 3.0 currently only requires agencies to self-attest on their adherence to the TIC guidance.
+TIC 3.0 is non-prescriptive cybersecurity guidance developed to provide agencies with flexibility to implement security capabilities that match their specific risk tolerance levels. While the guidance requires agencies to comply with all applicable telemetry requirements such as the National Cybersecurity Protection System (NCPS) and Continuous Diagnosis and Mitigation (CDM), TIC 3.0 currently only requires agencies to self-attest on their adherence to the TIC guidance.
-With TIC 3.0, agencies have the option to maintain the legacy TIC 2.0 implementation that uses TIC access points while adopting TIC 3.0 capabilities. CISA provided guidance on how to implement the traditional TIC model in TIC 3.0, known as the [Traditional TIC Use Case](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
+With TIC 3.0, agencies can maintain the legacy TIC 2.0 implementation that uses TIC access points while adopting TIC 3.0 capabilities. CISA provided guidance on how to implement the traditional TIC model in TIC 3.0, known as the [Traditional TIC Use Case](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
-The rest of this article provides customer guidance that is pertinent to Azure capabilities needed for legacy TIC 2.0 implementations; however, some of this guidance is also useful for TIC 3.0 requirements.
+The rest of this article provides guidance that is pertinent to Azure capabilities needed for legacy TIC 2.0 implementations; however, some of this guidance is also useful for TIC 3.0 requirements.
## Azure networking options

There are four main options to connect to Azure:

-- **Direct internet connection:** Connect to Azure services directly through an open internet connection. The medium and the connection are public. Application and transport-level encryption are relied on to ensure privacy. Bandwidth is limited by a site's connectivity to the internet. Use more than one active provider to ensure resiliency.
-- **Virtual Private Network (VPN):** Connect to your Azure virtual network privately by using a VPN gateway. The medium is public because it traverses a site's standard internet connection, but the connection is encrypted in a tunnel to ensure privacy. Bandwidth is limited depending on the VPN devices and the configuration you choose. Azure point-to-site connections usually are limited to 100 Mbps. Site-to-site connections range from 100 Mbps to 10 Gbps.
-- **Azure ExpressRoute:** ExpressRoute is a direct connection to Microsoft services. ExpressRoute uses a provider at a peering location to connect to Microsoft Enterprise edge routers. ExpressRoute uses different peering types for IaaS and PaaS/SaaS services, private peering and Microsoft peering. Bandwidth ranges from 50 Mbps to 10 Gbps.
-- **Azure ExpressRoute Direct:** ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering location. ExpressRoute Direct removes a third-party connectivity provider from the required hops. Bandwidth ranges from 10 Gbps to 100 Gbps.
+- **Direct internet connection** – Connect to Azure services directly through an open internet connection. The medium and the connection are public. Application and transport-level encryption are relied on to ensure data protection. Bandwidth is limited by a site's connectivity to the internet. Use more than one active provider to ensure resiliency.
+- **Virtual Private Network (VPN)** – Connect to your Azure virtual network privately by using a VPN gateway. The medium is public because it traverses a site's standard internet connection, but the connection is encrypted in a tunnel to ensure data protection. Bandwidth is limited depending on the VPN devices and the configuration you choose. Azure point-to-site connections usually are limited to 100 Mbps. Site-to-site connections range from 100 Mbps to 10 Gbps.
+- **Azure ExpressRoute** – ExpressRoute is a direct connection to Microsoft services. ExpressRoute uses a provider at a peering location to connect to Microsoft Enterprise edge routers. ExpressRoute uses different peering types for IaaS and PaaS/SaaS services, private peering and Microsoft peering. Bandwidth ranges from 50 Mbps to 10 Gbps.
+- **Azure ExpressRoute Direct** – ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering location. ExpressRoute Direct removes a third-party connectivity provider from the required hops. Bandwidth ranges from 10 Gbps to 100 Gbps.
-To enable the connection from the *agency* to Azure or Microsoft 365, without routing traffic through the agency TIC, the agency must use an encrypted tunnel or a dedicated connection to the cloud service provider (CSP). The CSP services can ensure that connectivity to the agency cloud assets isn't offered via the public Internet for direct agency personnel access.
+To enable the connection from the *agency* to Azure or Microsoft 365, without routing traffic through the agency TIC, the agency must use:
-For Azure only, the second option (VPN) and third option (ExpressRoute) can meet these requirements when they're used in conjunction with services that limit access to the Internet.
+- An encrypted tunnel, or
+- A dedicated connection to the cloud service provider (CSP).
+
+The CSP services can ensure that connectivity to the agency cloud assets isn't offered via the public Internet for direct agency personnel access.
+
+For Azure only, the second option (VPN) and third option (ExpressRoute) can meet these requirements when they're used with services that limit access to the Internet.
Microsoft 365 is compliant with TIC guidance by using either [ExpressRoute with Microsoft Peering](../../expressroute/expressroute-circuit-peerings.md) enabled or an Internet connection that encrypts all traffic by using Transport Layer Security (TLS) 1.2. Agency end users on the agency network can connect via their agency network and TIC infrastructure through the Internet. All remote Internet access to Microsoft 365 is blocked and routes through the agency.
Microsoft 365 is compliant with TIC guidance by using either [ExpressRoute with
Compliance with TIC policy by using Azure IaaS is relatively simple because Azure customers manage their own virtual network routing.
-The main requirement to help assure compliance with the TIC 2.0 reference architecture is to ensure your virtual network is a private extension of the agency network. To be a *private* extension, the policy requires that no traffic leave your network except via the on-premises TIC network connection. This process is known as *forced tunneling*. For TIC 2.0 compliance, the process routes all traffic from any system in the CSP environment through an on-premises gateway on an organization's network out to the Internet through the TIC.
+The main requirement to help assure compliance with the TIC 2.0 reference architecture is to ensure your virtual network is a private extension of the agency network. To be a *private* extension, the policy requires that no traffic is allowed to leave your network except via the on-premises TIC network connection. This process is known as *forced tunneling*. For TIC 2.0 compliance, the process routes all traffic from any system in the CSP environment through an on-premises gateway on an organization's network out to the Internet through the TIC.
Azure IaaS TIC compliance is divided into two major steps:
### Azure IaaS TIC compliance: Configuration
-To configure a TIC-compliant architecture with Azure, you must first prevent direct Internet access to your virtual network and then force Internet traffic through the on-premises network.
+To configure a TIC-compliant architecture with Azure, you must first prevent direct Internet access to your virtual network, and then force Internet traffic through the on-premises network.
#### Prevent direct Internet access
Azure automatically creates system routes and assigns the routes to each subnet
:::image type="content" source="./media/tic-diagram-c.png" alt-text="TIC force tunneling" border="false":::
-All traffic that leaves the virtual network needs to route through the on-premises connection, to ensure that all traffic traverses the agency TIC. You create custom routes by creating user-defined routes, or by exchanging Border Gateway Protocol (BGP) routes between your on-premises network gateway and an Azure VPN gateway. For more information about user-defined routes, see [Virtual network traffic routing: User-defined routes](../../virtual-network/virtual-networks-udr-overview.md#user-defined). For more information about the BGP, see [Virtual network traffic routing: Border Gateway Protocol](../../virtual-network/virtual-networks-udr-overview.md#border-gateway-protocol).
+All traffic that leaves the virtual network needs to route through the on-premises connection, to ensure that all traffic traverses the agency TIC. You create custom routes by creating user-defined routes, or by exchanging Border Gateway Protocol (BGP) routes between your on-premises network gateway and an Azure VPN gateway.
+
+- For more information about user-defined routes, see [Virtual network traffic routing: User-defined routes](../../virtual-network/virtual-networks-udr-overview.md#user-defined).
+- For more information about the BGP, see [Virtual network traffic routing: Border Gateway Protocol](../../virtual-network/virtual-networks-udr-overview.md#border-gateway-protocol).
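As a sketch of the user-defined route approach, the following Resource Manager fragment defines a route table whose default route sends all traffic to the virtual network gateway; the names and API version are placeholders.

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2021-05-01",
  "name": "tic-forced-tunnel-routes",
  "location": "[resourceGroup().location]",
  "comments": "A 0.0.0.0/0 route to the virtual network gateway forces Internet-bound traffic back on-premises through the TIC.",
  "properties": {
    "routes": [
      {
        "name": "default-route-to-onprem",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualNetworkGateway"
        }
      }
    ]
  }
}
```

The route table still needs to be associated with each subnet for the default route to take effect.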
#### Add user-defined routes
Azure offers several ways to audit TIC compliance.
#### View effective routes
-Confirm that your default route is propagated by observing the *effective routes* for a particular virtual machine, a specific NIC, or a user-defined route table in the [Azure portal](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-azure-portal) or in [Azure PowerShell](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-powershell). The **Effective Routes** show the relevant user-defined routes, BGP advertised routes, and system routes that apply to the relevant entity, as shown in the following figure:
+Confirm your default route propagation by observing the *effective routes* for a particular virtual machine, a specific NIC, or a user-defined route table in the [Azure portal](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-azure-portal) or in [Azure PowerShell](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-powershell). The **Effective Routes** show the relevant user-defined routes, BGP advertised routes, and system routes that apply to the relevant entity, as shown in the following figure:
:::image type="content" source="./media/tic-screen-1.png" alt-text="Effective routes" border="false":::
Azure PaaS services, such as Azure Storage, are accessible through an internet-r
When Azure PaaS services are integrated with a virtual network, the service is privately accessible from that virtual network. You can apply custom routing for 0.0.0.0/0 via user-defined routes or BGP. Custom routing ensures that all Internet-bound traffic routes on-premises to traverse the TIC. Integrate Azure services into virtual networks by using the following patterns: -- **Deploy a dedicated instance of a service:** An increasing number of PaaS services are deployable as dedicated instances with virtual network-attached endpoints, sometimes called *VNet injection*. You can deploy an App Service Environment in *isolated mode* to allow the network endpoint to be constrained to a virtual network. The App Service Environment can then host many Azure PaaS services, such as Azure Web Apps, Azure API Management, and Azure Functions. For more information, see [Deploy dedicated Azure services into virtual networks](../../virtual-network/virtual-network-for-azure-services.md).-- **Use virtual network service endpoints:** An increasing number of PaaS services allow the option to move their endpoint to a virtual network private IP instead of a public address. For more information, see [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).-- **Use Azure Private Link:** Provide a shared service with a private endpoint in your virtual network. Traffic between your virtual network and the service travels across the Microsoft backbone network and does not traverse the public Internet. For more information, see [Azure Private Link](../../private-link/private-link-overview.md).
+- **Deploy a dedicated instance of a service** – An increasing number of PaaS services are deployable as dedicated instances with virtual network-attached endpoints, sometimes called *VNet injection*. You can deploy an App Service Environment in *isolated mode* to allow the network endpoint to be constrained to a virtual network. The App Service Environment can then host many Azure PaaS services, such as Web Apps, API Management, and Functions. For more information, see [Deploy dedicated Azure services into virtual networks](../../virtual-network/virtual-network-for-azure-services.md).
+- **Use virtual network service endpoints** – An increasing number of PaaS services allow the option to move their endpoint to a virtual network private IP instead of a public address. For more information, see [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+- **Use Azure Private Link** – Provide a shared service with a private endpoint in your virtual network. Traffic between your virtual network and the service travels across the Microsoft backbone network and doesn't traverse the public Internet. For more information, see [Azure Private Link](../../private-link/private-link-overview.md).
### Virtual network integration details
The following diagram shows the general network flow for access to Azure PaaS se
:::image type="content" source="./media/tic-diagram-e.png" alt-text="PaaS connectivity options for TIC" border="false"::: 1. A private connection is made to Azure by using ExpressRoute. ExpressRoute private peering with forced tunneling is used to force all customer virtual network traffic over ExpressRoute and back to on-premises. Microsoft Peering isn't required.
-2. Azure VPN Gateway, when used in conjunction with ExpressRoute and Microsoft Peering, can overlay end-to-end IPSec encryption between the customer virtual network and the on-premises edge.
+2. Azure VPN Gateway, when used with ExpressRoute and Microsoft Peering, can overlay end-to-end IPSec encryption between the customer virtual network and the on-premises edge.
3. Network connectivity to the customer virtual network is controlled by using network security groups that allow customers to permit/deny traffic based on IP, port, and protocol. 4. Traffic to and from the customer private virtual network is monitored through Azure Network Watcher and data is analyzed using Log Analytics and Microsoft Defender for Cloud. 5. The customer virtual network extends to the PaaS service by creating a service endpoint for the customer's service.
-6. The PaaS service endpoint is secured to **default deny all** and to only allow access from specified subnets within the customer virtual network. Securing service resources to a virtual network provides improved security by fully removing public Internet access to resources and allowing traffic only from your virtual network.
+6. The PaaS service endpoint is secured to **default deny all** and to only allow access from specified subnets within the customer virtual network. Securing service resources to a virtual network provides improved security by fully removing public Internet access to resources and allowing traffic only from your virtual network.
7. Other Azure services that need to access resources within the customer virtual network should either be: - Deployed directly into the virtual network, or - Allowed selectively based on the guidance from the respective Azure service.
Virtual network injection enables customers to selectively deploy dedicated inst
#### Option B: Use virtual network service endpoints (service tunnel)
-An increasing number of Azure multitenant services offer *service endpoints*. Service endpoints are an alternate method for integrating to Azure virtual networks. Virtual network service endpoints extend your virtual network IP address space and the identity of your virtual network to the service over a direct connection. Traffic from the virtual network to the Azure service always stays within the Azure backbone network.
+An increasing number of Azure multi-tenant services offer *service endpoints*. Service endpoints are an alternate method for integrating to Azure virtual networks. Virtual network service endpoints extend your virtual network IP address space and the identity of your virtual network to the service over a direct connection. Traffic from the virtual network to the Azure service always stays within the Azure backbone network.
After you enable a service endpoint for a service, use policies exposed by the service to restrict connections for the service to that virtual network. Access checks are enforced in the platform by the Azure service. Access to a locked resource is granted only if the request originates from the allowed virtual network or subnet, or from the two IPs that are used to identify your on-premises traffic if you use ExpressRoute. Use this method to effectively prevent inbound/outbound traffic from directly leaving the PaaS service.
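A minimal sketch of enabling a service endpoint on a subnet, assuming an Azure Storage scenario; the virtual network, subnet, and address values are placeholders.

```json
{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2021-05-01",
  "name": "agency-vnet/workload-subnet",
  "comments": "Enables the Microsoft.Storage service endpoint so the subnet identity is extended to the storage service.",
  "properties": {
    "addressPrefix": "10.1.0.0/24",
    "serviceEndpoints": [
      { "service": "Microsoft.Storage" }
    ]
  }
}
```

The storage account's network rules are then configured separately to accept traffic only from this subnet, so the platform enforces the access check described above.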
After you enable a service endpoint for a service, use policies exposed by the s
#### Option C: Use Azure Private Link
-Customers can use [Azure Private Link](../../private-link/private-link-overview.md) to access Azure PaaS services and Azure-hosted customer/partner services over a [private endpoint](../../private-link/private-endpoint-overview.md) in their virtual network, ensuring that traffic between their virtual network and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. Customers can also create their own [private link service](../../private-link/private-link-service-overview.md) in their own virtual network and deliver it to their customers.
+You can use [Azure Private Link](../../private-link/private-link-overview.md) to access Azure PaaS services and Azure-hosted customer or partner services over a [private endpoint](../../private-link/private-endpoint-overview.md) in your virtual network, ensuring that traffic between your virtual network and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. You can also create your own [private link service](../../private-link/private-link-service-overview.md) in your own virtual network and deliver it to your customers.
-Azure private endpoint is a network interface that connects customers privately and securely to a service powered by Azure Private Link. Private endpoint uses a private IP address from customer's virtual network, effectively bringing the service into customer's virtual network.
+Azure private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network.
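A minimal sketch of a private endpoint for a storage account's blob service; the resource names are placeholders, and DNS integration (private DNS zones) is handled separately.

```json
{
  "type": "Microsoft.Network/privateEndpoints",
  "apiVersion": "2021-05-01",
  "name": "storage-private-endpoint",
  "location": "[resourceGroup().location]",
  "comments": "Creates a NIC with a private IP in the workload subnet that maps to the storage account's blob endpoint.",
  "properties": {
    "subnet": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'agency-vnet', 'workload-subnet')]"
    },
    "privateLinkServiceConnections": [
      {
        "name": "blob-connection",
        "properties": {
          "privateLinkServiceId": "[resourceId('Microsoft.Storage/storageAccounts', 'agencystorage')]",
          "groupIds": [ "blob" ]
        }
      }
    ]
  }
}
```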
## Tools for network situational awareness
Azure provides cloud-native tools to help ensure that you have the situational a
### Azure Policy
-[Azure Policy](../../governance/policy/overview.md) is an Azure service that provides your organization with better ability to audit and enforce compliance initiatives. Customers can plan and test their Azure Policy rules now to assure future TIC compliance.
+[Azure Policy](../../governance/policy/overview.md) is an Azure service that helps your organization audit and enforce compliance initiatives. You can plan and test your Azure Policy rules now to assure future TIC compliance.
Azure Policy is targeted at the subscription level. The service provides a centralized interface where you can perform compliance tasks, including:+ - Manage initiatives - Configure policy definitions - Audit compliance
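For example, a minimal custom policy definition that audits public IP address resources, which would surface Internet-facing endpoints that bypass the TIC path; this is a sketch, not one of the built-in definitions.

```json
{
  "properties": {
    "displayName": "Audit public IP addresses",
    "description": "Flags any public IP address resource for review against TIC routing requirements.",
    "mode": "All",
    "policyRule": {
      "if": {
        "field": "type",
        "equals": "Microsoft.Network/publicIPAddresses"
      },
      "then": {
        "effect": "audit"
      }
    }
  }
}
```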
Networks in regions that are monitored by Network Watcher can conduct next hop t
## Conclusions
-You can easily configure network access to help comply with TIC 2.0 guidance, as well as leverage Azure support for the NIST CSF and NIST SP 800-53 to address TIC 3.0 requirements.
+You can easily configure network access to help comply with TIC 2.0 guidance and use Azure support for the NIST CSF and NIST SP 800-53 to address TIC 3.0 requirements.
+
+## Next steps
+
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Azure Government overview](../documentation-government-welcome.md)
+- [Azure Government security](../documentation-government-plan-security.md)
+- [Azure Government compliance](../documentation-government-plan-compliance.md)
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 4](/azure/compliance/offerings/offering-dod-il4)
+- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md)
+- [Secure Azure Computing Architecture](./secure-azure-computing-architecture.md)
+- [Azure guidance for secure isolation](../azure-secure-isolation-guidance.md)
+- [Azure Policy overview](../../governance/policy/overview.md)
+- [Azure Policy regulatory compliance built-in initiatives](../../governance/policy/samples/index.md#regulatory-compliance)
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
Last updated 05/10/2022-+
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
description: Use data collection endpoints to uniquely configure ingestion setti
Previously updated : 3/16/2022 Last updated : 3/16/2022
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Last updated 02/09/2022 ++
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Last updated 5/19/2022 + # Azure Monitor agent overview
To configure the agent to use private links for network communications with Azur
## Next steps - [Install the Azure Monitor agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To collect data from virtual machines using the Azure Monitor agent, you'll:
1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations. 1. Associate the data collection rule to specific virtual machines.
-## How data collection rule associations work
-
-You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
-
-For example, consider an environment with a set of virtual machines running a line of business application and other virtual machines running SQL Server. You might have:
--- One default data collection rule that applies to all virtual machines.-- Separate data collection rules that collect data specifically for the line of business application and for SQL Server.
-
-The following diagram illustrates the associations for the virtual machines to the data collection rules.
-
-![A diagram showing one virtual machine hosting a line of business application and one virtual machine hosting SQL Server. Both virtual machine are associated with data collection rule named central-i t-default. The virtual machine hosting the line of business application is also associated with a data collection rule called lob-app. The virtual machine hosting SQL Server is associated with a data collection rule called s q l.](media/data-collection-rule-azure-monitor-agent/associations.png)
-
+ You can associate virtual machines with multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
## Create data collection rule and association
-To send data to Log Analytics, create the data collection rule in the **same region** where your Log Analytics workspace resides. You can still associate the rule to machines in other supported regions.
+To send data to Log Analytics, create the data collection rule in the **same region** as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
### [Portal](#tab/portal)
To send data to Log Analytics, create the data collection rule in the **same reg
### [API](#tab/api)
-1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
+1. Create a DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Last updated 02/07/2022
+ms.reviewer: shseth
# Collecting Event Tracing for Windows (ETW) Events for analysis in Azure Monitor Logs
azure-monitor Diagnostics Extension To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-to-application-insights.md
Title: Send Azure Diagnostics data to Application Insights
description: Update the Azure Diagnostics public configuration to send data to Application Insights. Last updated 03/31/2022+
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights (Preview)
+ Title: Azure AD authentication for Application Insights
description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Last updated 08/02/2021
ms.devlang: csharp, java, javascript, python
-# Azure AD authentication for Application Insights (Preview)
+# Azure AD authentication for Application Insights
-Application Insights now supports Azure Active Directory (Azure AD) authentication. By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
+Application Insights now supports [Azure Active Directory (Azure AD) authentication](../../active-directory/authentication/overview-authentication.md#what-is-azure-active-directory-authentication). By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
-Typically, using various authentication systems can be cumbersome and pose risk since it's difficult to manage credentials at a large scale. You can now choose to opt-out of local authentication and ensure only telemetry that is exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your Application Insights resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational (alerting/autoscale etc.) and business decisions.
+Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt-out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts), [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-microsoft-azure), etc.) and business decisions.
-> [!IMPORTANT]
-> Azure AD authentication is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## Prerequisites
-Below are SDKs/scenarios not supported in the Public Preview:
-- [Application Insights Java 2.x SDK](java-2x-agent.md) ΓÇô Azure AD authentication is only available for Application Insights Java Agent >=3.2.0. -- [ApplicationInsights JavaScript Web SDK](javascript.md). -- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.-- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead. -- On by default Codeless monitoring (for languages) for App Service, VM/Virtual machine scale sets, Azure Functions etc.-- [Availability tests](availability-overview.md).-- [Profiler](profiler-overview.md).--
-## Prerequisites to enable Azure AD authentication ingestion
+The following are prerequisites to enable Azure AD authenticated ingestion.
- Familiarity with: - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md). - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md). - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md). - You have an "Owner" role to the resource group to grant access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+- Understand the [unsupported scenarios](#unsupported-scenarios).
## Configuring and enabling Azure AD based authentication
var config = new TelemetryConfiguration
var credential = new DefaultAzureCredential(); config.SetAzureTokenCredential(credential); + ``` Below is an example of configuring the `TelemetryConfiguration` using .NET Core:
services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" }); ```++ ### [Node.js](#tab/nodejs) > [!NOTE]
appInsights.defaultClient.config.aadTokenCredential = credential;
``` + ### [Java](#tab/java) > [!NOTE]
appInsights.defaultClient.config.aadTokenCredential = credential;
#### System-assigned Managed Identity
-Below is an example on how to configure Java agent to use system-assigned managed identity for authentication with Azure AD.
+Below is an example of how to configure the Java agent to use system-assigned managed identity for authentication with Azure AD.
```JSON {
Below is an example on how to configure Java agent to use system-assigned manage
#### User-assigned managed identity
-Below is an example on how to configure Java agent to use user-assigned managed identity for authentication with Azure AD.
+Below is an example of how to configure the Java agent to use user-assigned managed identity for authentication with Azure AD.
```JSON {
Below is an example on how to configure Java agent to use user-assigned managed
#### Client secret
-Below is an example on how to configure Java agent to use service principal for authentication with Azure AD. We recommend users to use this type of authentication only during development. The ultimate goal of adding authentication feature is to eliminate secrets.
+Below is an example of how to configure the Java agent to use a service principal for authentication with Azure AD. We recommend using this type of authentication only during development. The ultimate goal of adding the authentication feature is to eliminate secrets.
```JSON {
Below is an example on how to configure Java agent to use service principal for
:::image type="content" source="media/azure-ad-authentication/client-secret-cs.png" alt-text="Screenshot of Client secret with client secret." lightbox="media/azure-ad-authentication/client-secret-cs.png"::: + ### [Python](#tab/python) > [!NOTE]
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi
Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass it into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
-Below are the following types of authentication that are supported by the Opencensus Azure Monitor exporters. Managed identities are recommended to be used in production environments.
+The following types of authentication are supported by the OpenCensus Azure Monitor exporters. Managed identities are recommended in production environments.
#### System-assigned managed identity
tracer = Tracer(
) ... ```+ ## Disable local authentication
-After the Azure AD authentication is enabled, you can choose to disable local authentication. This will allow you to ingest telemetry authenticated exclusively by Azure AD and impacts data access (for example, through API Keys).
+After Azure AD authentication is enabled, you can choose to disable local authentication. This configuration allows you to ingest telemetry authenticated exclusively by Azure AD, and it impacts data access (for example, through API keys).
You can disable local authentication by using the Azure portal, Azure Policy, or programmatically.
You can disable local authentication by using the Azure portal, Azure Policy, or
1. From your Application Insights resource, select **Properties** under the *Configure* heading in the left-hand menu. Then select **Enabled (click to change)** if the local authentication is enabled.
- :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (click to change) local authentication button.":::
+ :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (select to change) local authentication button.":::
1. Select **Disabled** and apply changes.
You can disable local authentication by using the Azure portal, Azure Policy, or
1. Once your resource has disabled local authentication, you'll see the corresponding info in the **Overview** pane.
- :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled(click to change) highlighted.":::
+ :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled (select to change) highlighted.":::
### Azure Policy
Below is an example Azure Resource Manager template that you can use to create a
```
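A minimal sketch of such a template resource, assuming the `DisableLocalAuth` property on the `Microsoft.Insights/components` resource type (API version 2020-02-02); the resource name is a placeholder, and a workspace-based resource would also set `WorkspaceResourceId`.

```json
{
  "type": "Microsoft.Insights/components",
  "apiVersion": "2020-02-02",
  "name": "my-application-insights",
  "location": "[resourceGroup().location]",
  "kind": "web",
  "comments": "DisableLocalAuth rejects instrumentation-key-only ingestion so that only Azure AD authenticated telemetry is accepted.",
  "properties": {
    "Application_Type": "web",
    "DisableLocalAuth": true
  }
}
```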
+## Unsupported scenarios
+
+The following SDKs and features are unsupported for use with Azure AD authenticated ingestion.
+
+- [Application Insights Java 2.x SDK](java-2x-agent.md)<br>
+ Azure AD authentication is only available for Application Insights Java Agent >=3.2.0.
+- [ApplicationInsights JavaScript Web SDK](javascript.md).
+- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
+
+- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead.
+- On-by-default codeless monitoring (for languages) for App Service, VM/virtual machine scale sets, Azure Functions, etc.
+- [Availability tests](availability-overview.md).
+- [Profiler](profiler-overview.md).
+ ## Troubleshooting This section provides distinct troubleshooting scenarios and steps that users can take to resolve any issue before they raise a support ticket.
This section provides distinct troubleshooting scenarios and steps that users ca
The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected using a tool such as Fiddler. You should filter traffic to the IngestionEndpoint set in the Connection String.
-#### HTTP/1.1 400 Authentication not support
+#### HTTP/1.1 400 Authentication not supported
-This indicates that the Application Insights resource has been configured for Azure AD only, but the SDK hasn't been correctly configured and is sending to the incorrect API.
+This error indicates that the resource has been configured for Azure AD only. The SDK hasn't been correctly configured and is sending to the incorrect API.
> [!NOTE] > "v2/track" does not support Azure AD. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
Next steps should be to review the SDK configuration.
#### HTTP/1.1 401 Authorization required
-This indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This may indicate an issue with Azure Active Directory.
+This error indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This error may indicate an issue with Azure Active Directory.
Next steps should be to identify exceptions in the SDK logs or network errors from Azure Identity. #### HTTP/1.1 403 Unauthorized
-This indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
+This error indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
Next steps should be to review the Application Insights resource's access control. The SDK must be configured with a credential that has been granted the "Monitoring Metrics Publisher" role.
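As a sketch, granting that role through a Resource Manager template might look like the following; `principalId` is a parameter you supply, and the role definition GUID shown is a placeholder for the built-in "Monitoring Metrics Publisher" role ID.

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2020-04-01-preview",
  "name": "[guid(resourceGroup().id, parameters('principalId'), 'metrics-publisher')]",
  "comments": "Assigns Monitoring Metrics Publisher to the identity that the SDK or agent authenticates as.",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '<monitoring-metrics-publisher-role-id>')]",
    "principalId": "[parameters('principalId')]"
  }
}
```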
Next steps should be to review the Application Insights resource's access contro
The Application Insights .NET SDK emits error logs using event source. To learn more about collecting event source logs visit, [Troubleshooting no data- collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView). If the SDK fails to get a token, the exception message is logged as:
-"Failed to get AAD Token. Error message: "
+`Failed to get AAD Token. Error message: `
### [Node.js](#tab/nodejs)
-Internal logs could be turned on using the following setup. Once this is enabled, error logs will be shown in the console, including any error related to Azure AD integration. For example, failure to generate the token when wrong credentials are supplied or errors when ingestion endpoint fails to authenticate using the provided credentials.
+Internal logs can be turned on using the following setup. Once enabled, error logs are shown in the console, including any errors related to Azure AD integration. For example: failure to generate the token when wrong credentials are supplied, or errors when the ingestion endpoint fails to authenticate using the provided credentials.
```javascript let appInsights = require("applicationinsights");
If using fiddler, you might see the following response header: `HTTP/1.1 401 Una
#### CredentialUnavailableException
-If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid `clientId` in your User Assigned Managed Identity configuration
+If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be that you've provided an invalid `clientId` in your user-assigned managed identity configuration.
#### Failed to send telemetry
-If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This might be because of the provided credentials don't grant the access to ingest the telemetry into the component
+If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This warning might occur because the provided credentials don't grant access to ingest telemetry into the component.
If using fiddler, you might see the following response header: `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
Root cause might be one of the following reasons:
#### Invalid TenantId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong `tenantId` in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be that you've provided an invalid `tenantId` in your client secret configuration.
#### Invalid client secret
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid `clientSecret` in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be that you've provided an invalid `clientSecret` in your client secret configuration.
#### Invalid ClientId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong "clientId" in your client secret configuration
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be that you've provided an invalid `clientId` in your client secret configuration.
- This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
+ This scenario can occur if the application hasn't been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
### [Python](#tab/python) #### Error starts with "credential error" (with no status code)
-Something is incorrect about the credential you're using and the client isn't able to obtain a token for authorization. It's usually due to lacking the required data for the state. An example would be passing in a system ManagedIdentityCredential but the resource isn't configured to use system-managed identity.
+Something is incorrect about the credential you're using, and the client isn't able to obtain a token for authorization. It's usually due to missing data that's required for the state. An example would be passing in a system `ManagedIdentityCredential` when the resource isn't configured to use system-assigned managed identity.
#### Error starts with "authentication error" (with no status code)
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
By default, IP addresses are temporarily collected but not stored in Application
When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup by using [GeoLite2 from MaxMind](https://dev.maxmind.com/geoip/geoip2/geolite2/). Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
+Geolocation data can be removed in the following ways:
+
+* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md)
+* [Use a custom initializer](../app/api-filtering-sampling.md)
+ > [!NOTE] > Application Insights uses an older version of the GeoLite2 database. If you experience accuracy issues with IP to geolocation mappings, then as a workaround you can disable IP masking and utilize another geomapping service to convert the client_IP field of the underlying telemetry to a more accurate geolocation. We are currently working on an update to improve the geolocation accuracy.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Download the [applicationinsights-agent-3.3.0.jar](https://github.com/microsoft/
> If you're upgrading from 3.2.x to 3.3.0: > > - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - Exception records are no longer recorded for failed dependencies; they're only recorded for failed requests.
> > If you're upgrading from 3.1.x: >
Java 3.x includes the following instrumentation libraries.
* JMS consumers * Kafka consumers * Netty/WebFlux
+* Quartz
* Servlets * Spring scheduling
Autocollected dependencies without downstream distributed trace propagation:
### Autocollected logs
+* Log4j (including MDC/Thread Context properties)
+* Logback (including MDC properties)
+* JBoss Logging (including MDC properties)
* java.util.logging
-* Log4j, which includes MDC properties
-* SLF4J/Logback, which includes MDC properties
### Autocollected metrics
Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+ * [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 1.0.0+ * [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+
-* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.13.0+
+* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+
* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+ * [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+ * [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+ * [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
-[//]: # "the above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
+[//]: # "Cosmos 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
+
+[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0" [//]: # "" [//]: # "var table = document.querySelector('#tg-sb-content > div > table')"
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-standalone-troubleshoot).
+See the dedicated [troubleshooting article](java-standalone-troubleshoot.md).
## Release notes
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
# Configuration options - Azure Monitor Application Insights for Java
-> [!WARNING]
-> **If you are upgrading from 3.0 Preview**
->
-> Please review all the configuration options below carefully, as the json structure has completely changed,
-> in addition to the file name itself which went all lowercase.
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Connection string and role name
If you specify a relative path, it will be resolved relative to the directory wh
The file should contain only the connection string, for example: ```
-InstrumentationKey=...
+InstrumentationKey=...;IngestionEndpoint=...;LiveEndpoint=...
``` Not setting the connection string will disable the Java agent.
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different instrumentation
+keys, see [Instrumentation key overrides (preview)](#instrumentation-key-overrides-preview).
+ ## Cloud role name Cloud role name is used to label the component on the application map.
If cloud role name is not set, the Application Insights resource's name will be
You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME` (which will then take precedence over cloud role name specified in the json configuration).
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different cloud role
+names, see [Cloud role name overrides (preview)](#cloud-role-name-overrides-preview).
+ ## Cloud role instance Cloud role instance defaults to the machine name.
Starting from version 3.2.0, if you want to set a custom dimension programmatica
} ```
-## Instrumentation keys overrides (preview)
+## Instrumentation key overrides (preview)
This feature is in preview, starting from 3.2.3.
Instrumentation key overrides allow you to override the [default instrumentation
} ```
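A minimal sketch of this configuration, mirroring the structure of the cloud role name overrides shown below; the path prefixes and instrumentation keys are placeholders.

```json
{
  "preview": {
    "instrumentationKeyOverrides": [
      {
        "httpPathPrefix": "/myapp1",
        "instrumentationKey": "12345678-0000-0000-0000-0FEEDDADBEEF"
      },
      {
        "httpPathPrefix": "/myapp2",
        "instrumentationKey": "87654321-0000-0000-0000-0FEEDDADBEEF"
      }
    ]
  }
}
```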
+## Cloud role name overrides (preview)
+
+This feature is in preview, starting from 3.3.0.
+
+Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name), for example:
+* Set one cloud role name for one http path prefix `/myapp1`.
+* Set another cloud role name for another http path prefix `/myapp2`.
+
+```json
+{
+  "preview": {
+    "roleNameOverrides": [
+      {
+        "httpPathPrefix": "/myapp1",
+        "roleName": "app1"
+      },
+      {
+        "httpPathPrefix": "/myapp2",
+        "roleName": "app2"
+      }
+    ]
+  }
+}
+```
+ ## Autocollect InProc dependencies (preview)
-Starting from 3.2.0, if you want to capture controller "InProc" dependencies, please use the following configuration:
+Starting from version 3.2.0, if you want to capture controller "InProc" dependencies, please use the following configuration:
```json {
For more information, check out the [telemetry processor](./java-standalone-tele
## Auto-collected logging
-Log4j, Logback, and java.util.logging are auto-instrumented, and logging performed via these logging frameworks
-is auto-collected.
+Log4j, Logback, JBoss Logging, and java.util.logging are auto-instrumented,
+and logging performed via these logging frameworks is auto-collected.
Logging is only captured if it first meets the level that is configured for the logging framework, and second, also meets the level that is configured for Application Insights.
You can also set the level using the environment variable `APPLICATIONINSIGHTS_I
These are the valid `level` values that you can specify in the `applicationinsights.json` file, and how they correspond to logging levels in different logging frameworks:
-| level | Log4j | Logback | JUL |
-|-|--|||
-| OFF | OFF | OFF | OFF |
-| FATAL | FATAL | ERROR | SEVERE |
-| ERROR (or SEVERE) | ERROR | ERROR | SEVERE |
-| WARN (or WARNING) | WARN | WARN | WARNING |
-| INFO | INFO | INFO | INFO |
-| CONFIG | DEBUG | DEBUG | CONFIG |
-| DEBUG (or FINE) | DEBUG | DEBUG | FINE |
-| FINER | DEBUG | DEBUG | FINER |
-| TRACE (or FINEST) | TRACE | TRACE | FINEST |
-| ALL | ALL | ALL | ALL |
+| level | Log4j | Logback | JBoss | JUL |
+|-|--||--||
+| OFF | OFF | OFF | OFF | OFF |
+| FATAL | FATAL | ERROR | FATAL | SEVERE |
+| ERROR (or SEVERE) | ERROR | ERROR | ERROR | SEVERE |
+| WARN (or WARNING) | WARN | WARN | WARN | WARNING |
+| INFO | INFO | INFO | INFO | INFO |
+| CONFIG | DEBUG | DEBUG | DEBUG | CONFIG |
+| DEBUG (or FINE) | DEBUG | DEBUG | DEBUG | FINE |
+| FINER | DEBUG | DEBUG | DEBUG | FINER |
+| TRACE (or FINEST) | TRACE | TRACE | TRACE | FINEST |
+| ALL | ALL | ALL | ALL | ALL |
> [!NOTE] > If an exception object is passed to the logger, then the log message (and exception object details)
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.3.0, you can capture request and response headers on your server (request) telemetry:
+Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
```json {
Starting from version 3.0.3, specific auto-collected telemetry can be suppressed
"mongo": { "enabled": false },
+ "quartz": {
+ "enabled": false
+ },
"rabbitmq": { "enabled": false },
Starting from version 3.2.0, the following preview instrumentations can be enabl
"grizzly": { "enabled": true },
- "quartz": {
- "enabled": true
- },
"springIntegration": { "enabled": true },
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
+## Recovery from ingestion failures
+
+When sending telemetry to the Application Insights service fails, Application Insights Java 3.x will store the telemetry
+to disk and continue retrying from disk.
+
+The default limit for disk persistence is 50 MB. If you have high telemetry volume, or need to be able to recover from
+longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
+
+```json
+{
+ "preview": {
+ "diskPersistenceMaxSizeMb": 50
+ }
+}
+```
+ ## Self-diagnostics "Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
or configuring [telemetry processors](./java-standalone-telemetry-processors.md)
## Multiple applications in a single JVM
-This use case is supported in Application Insights Java 3.x using [Instrumentation keys overrides (preview)](./java-standalone-config.md#instrumentation-keys-overrides-preview).
+This use case is supported in Application Insights Java 3.x using [Instrumentation key overrides (preview)](./java-standalone-config.md#instrumentation-key-overrides-preview).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
Last updated 04/03/2022+ # Monitoring Azure Monitor data reference
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
Title: Azure Monitor for existing Operations Manager customers description: Guidance for existing users of Operations Manager to transition monitoring of certain workloads to Azure Monitor as part of a transition to the cloud.- Last updated 04/05/2022+
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Last updated 10/18/2021+
Since you'll typically want to alert on issues for all of your critical Azure ap
## Next steps -- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
+- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Last updated 10/18/2021+
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Last updated 03/31/2022+
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Last updated 10/18/2021+
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
Last updated 10/18/2021-+ # Azure Monitor best practices - Planning your monitoring strategy and configuration
azure-monitor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices.md
Last updated 10/18/2021+ # Azure Monitor best practices
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Starting with agent version *ciprod03022019*, Container insights integrated agent now supports monitoring GPU (graphical processing units) usage on GPU-aware Kubernetes cluster nodes, and monitor pods/containers requesting and using GPU resources.
+>[!NOTE]
+> As per the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-ga/#nvidia-gpu-metrics-deprecated), Kubernetes is deprecating GPU metrics that are being reported by the kubelet, for Kubernetes ver. 1.20+. This means Container Insights will no longer be able to collect the following metrics out of the box:
+> * containerGpuDutyCycle
+> * containerGpumemoryTotalBytes
+> * containerGpumemoryUsedBytes
+>
+> To continue collecting GPU metrics through Container Insights, migrate by December 31, 2022 to your GPU vendor-specific metrics exporter and configure [Prometheus scraping](./container-insights-prometheus-integration.md) to scrape metrics from the deployed vendor-specific exporter.
+ ## Supported GPU vendors Container insights supports monitoring GPU clusters from following GPU vendors:
Container insights automatically starts monitoring GPU usage on nodes, and GPU r
|Metric name |Metric dimension (tags) |Description | ||||
-|containerGpuDutyCycle |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor|Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
+|containerGpuDutyCycle* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor|Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
|containerGpuLimits |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName |Each container can specify limits as one or more GPUs. It is not possible to request or limit a fraction of a GPU. | |containerGpuRequests |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName |Each container can request one or more GPUs. It is not possible to request or limit a fraction of a GPU.|
-|containerGpumemoryTotalBytes |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes available to use for a specific container. |
-|containerGpumemoryUsedBytes |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes used by a specific container. |
+|containerGpumemoryTotalBytes* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes available to use for a specific container. |
+|containerGpumemoryUsedBytes* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes used by a specific container. |
|nodeGpuAllocatable |container.azm.ms/clusterId, container.azm.ms/clusterName, gpuVendor |Number of GPUs in a node that can be used by Kubernetes. | |nodeGpuCapacity |container.azm.ms/clusterId, container.azm.ms/clusterName, gpuVendor |Total Number of GPUs in a node. |
\* Based on Kubernetes upstream changes, these metrics are no longer collected out of the box. As a temporary hotfix for AKS, upgrade your GPU node pool to the latest version or \*-2022.06.08 or higher. For Arc-enabled Kubernetes, enable the feature gate `DisableAcceleratorUsageMetrics=false` in the kubelet configuration of the node and restart the kubelet. Once the upstream changes reach GA, this fix will no longer work; plan to migrate to your GPU vendor-specific metrics exporter by December 31, 2022.
+ ## GPU performance charts Container insights includes pre-configured charts for the metrics listed earlier in the table as a GPU workbook for every cluster. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4.x
->[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
- ## Supported Kubernetes versions The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights is a feature designed to monitor the performance of container
Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Docker, Moby, and any CRI compatible runtime such as CRI-O and ContainerD.
->[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
- Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
To create a custom workbook based on any of these workbooks, select the **View W
- **GPU**: Interactive GPU usage charts for each GPU-aware Kubernetes cluster node.
+>[!NOTE]
+> As per the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-g)
+ ## Resource Monitoring workbooks - **Deployments**: Status of your deployments & Horizontal Pod Autoscaler (HPA), including custom HPAs.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
Last updated 06/07/2022+
Ensuring that your development and operations have access to the same telemetry
## Next steps - Learn about the different components of [Azure Monitor](overview.md).-- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
+- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
description: Monitoring data collected by Azure Monitor is separated into metric
documentationcenter: '' -- na Last updated 04/05/2022 + # Azure Monitor data platform
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Last updated 09/09/2021 + # Azure Monitor activity log
azure-monitor App Insights Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/app-insights-metrics.md
Title: Azure Application Insights log-based metrics | Microsoft Docs description: This article lists Azure Application Insights metrics with supported aggregations and dimensions. The details about log-based metrics include the underlying Kusto query statements. --+ Previously updated : 07/03/2019 Last updated : 07/03/2019
azure-monitor Classic Api Retirement Metrics Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/classic-api-retirement-metrics-autoscale.md
Title: Retire deployment APIs for Azure Monitor metrics and autoscale
description: Metrics and autoscale classic APIs, also called Azure Service Management (ASM) or RDFE deployment model being retired Last updated 11/19/2018+
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
Title: Collect Windows VM metrics in Azure Monitor with template
description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine -+ Last updated 05/04/2020
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Title: Collect Windows scale set metrics in Azure Monitor with template
description: Send guest OS metrics to the Azure Monitor metric store by using a Resource Manager template for a Windows virtual machine scale set -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
Title: Send classic Windows VM metrics to Azure Monitor metrics database
description: Send Guest OS metrics to the Azure Monitor data store for a Windows virtual machine (classic) -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
Title: Send classic Cloud Services metrics to Azure Monitor metrics database
description: Describes the process for sending Guest OS performance metrics for Azure classic Cloud Services to the Azure Monitor metric store. -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent
description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor. -+ Last updated 06/16/2022
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
description: Overview of data collection endpoints (DCEs) in Azure Monitor inclu
Last updated 03/16/2022
+ms.reviewer: nikeist
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Title: Data Collection Rules in Azure Monitor
description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them. Last updated 04/26/2022+
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
Last updated 02/22/2022
+ms.reviewer: nikeist
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
Title: Data collection rule transformations
description: Use transformations in a data collection rule in Azure Monitor to filter and modify incoming data. Last updated 02/21/2022
+ms.reviewer: nikeist
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Title: Metrics in Azure Monitor | Microsoft Docs description: Learn about metrics in Azure Monitor, which are lightweight monitoring data capable of supporting near real-time scenarios. documentationcenter: ''-+ --+ na
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
Last updated 05/09/2022+ # Create diagnostic settings at scale using Azure Policy
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Last updated 03/07/2022+ # Diagnostic settings in Azure Monitor
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
Title: Azure Monitor metric chart example
description: Learn about visualizing your Azure Monitor data. -+ Last updated 01/29/2019
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Last updated 08/31/2021+ # Azure Monitor Metrics aggregation and display explained
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of Metrics Explorer
description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources. - Last updated 06/09/2022
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
Last updated 06/01/2021+ # Custom metrics in Azure Monitor (preview)
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-dynamic-scope.md
Title: View multiple resources in the Azure metrics explorer
description: Learn how to visualize multiple resources by using the Azure metrics explorer. -+ Last updated 12/14/2020
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Getting started with Azure metrics explorer
description: Learn how to create your first metric chart with Azure metrics explorer. - Last updated 02/21/2022 + # Getting started with Azure Metrics Explorer
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Title: Send metrics to the Azure Monitor metric database using REST API
description: Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API -+ Last updated 09/24/2018
azure-monitor Metrics Supported Export Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported-export-diagnostic-settings.md
description: Discussion of NULL vs. zero values when exporting metrics and a poi
Last updated 07/22/2020+ # Azure Monitor platform metrics exportable via Diagnostic Settings
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Last updated 06/01/2022 + # Supported metrics with Azure Monitor
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
Title: Troubleshooting Azure Monitor metric charts
description: Troubleshoot the issues with creating, customizing, or interpreting metric charts -+ Last updated 06/09/2022- # Troubleshooting metrics charts
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
Last updated 09/15/2021+
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Title: Overview of Azure platform logs | Microsoft Docs
description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource. - Last updated 12/19/2019-+ # Overview of Azure platform logs
azure-monitor Portal Disk Metrics Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/portal-disk-metrics-deprecation.md
Last updated 03/12/2020+ # Disk metrics deprecation in the Azure portal
azure-monitor Resource Logs Blob Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-blob-format.md
Title: Prepare for format change to Azure Monitor resource logs
description: Azure resource logs moved to use append blobs on November 1, 2018. -+ Last updated 07/06/2018
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Last updated 06/01/2022+
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
Title: Azure resource logs supported services and schemas
description: Understand the supported services and event schemas for Azure resource logs. Last updated 05/10/2021+ # Common and service-specific schemas for Azure resource logs
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Title: Azure resource logs
description: Learn how to stream Azure resource logs to a Log Analytics workspace in Azure Monitor. - Last updated 05/09/2022 ++ # Azure resource logs
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
Last updated 09/11/2020+
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
description: How to authenticate requests and use the Azure Monitor REST API to
Last updated 05/09/2022 + # Azure Monitoring REST API walkthrough
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Last updated 07/15/2020+ # Stream Azure monitoring data to an event hub or external partner
azure-monitor Tutorial Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-metrics.md
Last updated 11/08/2021+ # Tutorial: Analyze metrics for an Azure resource
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
Last updated 11/08/2021+ # Tutorial: Collect and analyze resource logs from an Azure resource
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
Last updated 09/10/2019+
azure-monitor Ad Replication Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-replication-status.md
Last updated 01/24/2018+
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-key-vault-deprecated.md
Last updated 03/27/2019 +
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
Last updated 06/21/2018 +
azure-monitor Azure Web Apps Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-web-apps-analytics.md
Last updated 07/02/2018+
azure-monitor Capacity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/capacity-performance.md
Last updated 07/13/2017+
azure-monitor Cosmosdb Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/cosmosdb-insights-overview.md
Title: Monitor Azure Cosmos DB with Azure Monitor Cosmos DB insights| Microsoft
description: This article describes the Cosmos DB insights feature of Azure Monitor that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their Cosmos DB accounts. Last updated 05/11/2020+
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
Last updated 03/20/2018+
azure-monitor Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-insights-overview.md
Last updated 11/25/2020+
azure-monitor Network Performance Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-expressroute.md
Last updated 11/27/2018+
azure-monitor Network Performance Monitor Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-performance-monitor.md
Last updated 02/20/2018+
azure-monitor Network Performance Monitor Service Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-service-connectivity.md
Last updated 02/20/2018+
azure-monitor Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor.md
Last updated 02/20/2018+
azure-monitor Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/redis-cache-insights-overview.md
Title: Azure Monitor for Azure Cache for Redis | Microsoft Docs
description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems. Last updated 09/10/2020+
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
Last updated 06/25/2018+
azure-monitor Solution Agenthealth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-agenthealth.md
Last updated 02/06/2020+
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-office-365.md
Last updated 03/30/2020+
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md
Last updated 06/16/2022 + # Monitoring solutions in Azure Monitor
azure-monitor Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-assessment.md
Last updated 05/05/2020+
azure-monitor Surface Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/surface-hubs.md
Last updated 01/16/2018+
azure-monitor Troubleshoot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/troubleshoot-workbooks.md
description: Provides troubleshooting guidance for Azure Monitor workbook-based
Last updated 06/17/2020+ # Troubleshooting workbook-based insights
azure-monitor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/vmware.md
Last updated 05/04/2018+
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
Last updated 03/26/2021-+ # Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Title: Overview of Log Analytics in Azure Monitor description: This overview describes Log Analytics, which is a tool in the Azure portal used to edit and run log queries for analyzing data in Azure Monitor logs. Previously updated : 10/04/2020 Last updated : 06/28/2022
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
Last updated 04/07/2022-+ <!-- VERSION 2.2-->
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
Last updated 04/05/2022+ # What is monitored by Azure Monitor?
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Last updated 04/27/2022+
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
Last updated 10/27/2021+ # Azure Monitor partner integrations
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
+ # Azure Policy built-in definitions for Azure Monitor
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
Last updated 04/05/2022 + # Resource Manager template samples for Azure Monitor
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
Last updated 11/27/2017 ++ # Roles, permissions, and security in Azure Monitor
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
+ # Azure Policy Regulatory Compliance controls for Azure Monitor
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
Last updated 06/07/2022+
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation"
description: "What's new in Azure Monitor documentation" Last updated 04/04/2022+ # What's new in Azure Monitor documentation
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
The template spec is a resource type named `Microsoft.Resources/templateSpecs`.
```azurecli az deployment group create \
- --name templateSpecRG \
+ --resource-group templateSpecRG \
--template-file "c:\Templates\azuredeploy.json" ```
To deploy a template spec, use the same deployment commands as you would use to
```azurecli az deployment group create \
- --name storageRG \
+ --resource-group storageRG \
--template-file "c:\Templates\storage.json" ```
Rather than creating a new template spec for the revised template, add a new ver
```azurecli az deployment group create \
- --name templateSpecRG \
+ --resource-group templateSpecRG \
--template-file "c:\Templates\azuredeploy.json" ```
Rather than creating a new template spec for the revised template, add a new ver
```azurecli az deployment group create \
- --name storageRG \
+ --resource-group storageRG \
--template-file "c:\Templates\storage.json" ```
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
Previously updated : 04/11/2022 Last updated : 06/28/2022
The code in this guide uses remote images referenced by URL. You may want to try
#### [REST](#tab/rest)
-When analyzing a local image, you put the binary image data in the HTTP request body. For a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+
+To analyze a local image, you'd put the binary image data in the HTTP request body.
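For illustration, here's a minimal Python sketch (not part of the original quickstart; the endpoint, key, and local file name are placeholder assumptions) that sends both body formats to the v3.2 Analyze Image endpoint:

```python
# A sketch of the two request-body formats described above, using the
# `requests` package. Endpoint, key, and file name are assumptions.
import requests

endpoint = "https://your-resource.cognitiveservices.azure.com"  # assumption
key = "YOUR_SUBSCRIPTION_KEY"                                   # assumption
analyze_url = f"{endpoint}/vision/v3.2/analyze"
params = {"visualFeatures": "Description"}

# Remote image: a JSON body carrying the image URL.
remote = requests.post(
    analyze_url,
    params=params,
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "http://example.com/images/test.jpg"},
)

# Local image: raw binary image data in the request body.
with open("test.jpg", "rb") as f:  # assumption: a local test image
    local = requests.post(
        analyze_url,
        params=params,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/octet-stream",
        },
        data=f.read(),
    )

print(remote.json())
print(local.json())
```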
#### [C#](#tab/csharp)
In your main class, save a reference to the URL of the image you want to analyze
[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclient) methods, such as **AnalyzeImageInStreamAsync**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ImageAnalysisQuickstart.cs) for scenarios involving local images.
++ #### [Java](#tab/java) In your main class, save a reference to the URL of the image you want to analyze. [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVision](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision) methods, such as **AnalyzeImage**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java) for scenarios involving local images.
+ #### [JavaScript](#tab/javascript) In your main function, save a reference to the URL of the image you want to analyze. [!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/javascript/api/@azure/cognitiveservices-computervision/computervisionclient) methods, such as **describeImageInStream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/ComputerVision/ImageAnalysisQuickstart.js) for scenarios involving local images.
+ #### [Python](#tab/python) Save a reference to the URL of the image you want to analyze. [!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClientOperationsMixin](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin) methods, such as **analyze_image_in_stream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/ComputerVision/ImageAnalysisQuickstart.py) for scenarios involving local images.
+
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
If you want to start consuming the output generated by the container, see the fo
* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md). * Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
-## Running Spatial Analysis with a recorded video file
-
-You can use Spatial Analysis with both recorded or live video. To use Spatial Analysis for recorded video, try recording a video file and save it as a mp4 file. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
- 1. Change **Secure transfer required** to **Disabled**
- 2. Change **Allow Blob public access** to **Enabled**
-
-Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
-
-Select on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
-
-Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
-
-The Spatial Analysis module will start consuming video file and will continuously auto replay as well.
--
-```json
-"zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "personcountgraph",
- "VIDEO_IS_LIVE": false,
- "VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
- }
- },
-
-```
- ## Troubleshooting If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps to resolve common issues. This article also contains information on generating and collecting logs and collecting system health information.
In this article, you learned concepts and workflow for downloading, installing,
* Spatial Analysis is a Linux container for Docker. * Container images are downloaded from the Microsoft Container Registry. * Container images run as IoT Modules in Azure IoT Edge.
-* How to configure the container and deploy it on a host machine.
+* Configure the container and deploy it on a host machine.
## Next steps
cognitive-services Spatial Analysis Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-local.md
+
+ Title: Run Spatial Analysis on a local video file
+
+description: Use this guide to learn how to run Spatial Analysis on a recorded local video.
++++++ Last updated : 06/28/2022+++
+# Run Spatial Analysis on a local video file
+
+You can use Spatial Analysis with either recorded or live video. Use this guide to learn how to run Spatial Analysis on a recorded local video.
+
+## Prerequisites
+
+* Set up a Spatial Analysis container by following the steps in [Set up the host machine and run the container](spatial-analysis-container.md).
+
+## Analyze a video file
+
+To use Spatial Analysis for recorded video, record a video file and save it as a .mp4 file. Then take the following steps:
+
+1. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
+ 1. Change **Secure transfer required** to **Disabled**
+ 1. Change **Allow Blob public access** to **Enabled**
+
+1. Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
+
+1. Select **Generate SAS Token and URL** and copy the Blob SAS URL (for a scripted way to generate this URL, see the sketch after the deployment manifest example below). Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
+
+1. Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
+
+The Spatial Analysis module will start consuming the video file and will continuously replay it.
++
+```json
+"zonecrossing": {
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "personcountgraph",
+ "VIDEO_IS_LIVE": false,
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
+ }
+ },
+
+```
+
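If you prefer to script the SAS URL generation rather than use the portal, the following sketch (assuming the `azure-storage-blob` package, and placeholder account, container, and blob names) produces a read-only SAS URL that allows HTTP and converts it to the `http` form required for `VIDEO_URL`:

```python
# A sketch of generating a read-only, HTTP-capable SAS URL for the uploaded
# video. Account, key, container, and blob names below are assumptions.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_name = "yourstorageaccount"   # assumption: your storage account name
account_key = "YOUR_ACCOUNT_KEY"      # assumption: from the portal's Access keys
container_name = "videos"             # assumption
blob_name = "test-video.mp4"          # assumption

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    # Set the expiry long enough to cover the testing period.
    expiry=datetime.now(timezone.utc) + timedelta(days=30),
    protocol="https,http",  # HTTP must be allowed; HTTPS-only is not supported
)

# Spatial Analysis requires the http form of the URL.
video_url = f"http://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
print(video_url)
```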
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Vision Studio is a UI tool that lets you explore, build, and integrate features fr
Language Studio provides you with a platform to try several service features, and see what they return in a visual manner. It also provides you with an easy-to-use experience to create custom projects and models to work on your data. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
-### Face transparency documentation
+### Responsible AI for Face
+
+#### Face transparency documentation
* The [transparency documentation](https://aka.ms/faceraidocs) provides guidance to assist our customers in improving the accuracy and fairness of their systems by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, providing support to people who believe their results were incorrect, and identifying and addressing fluctuations in accuracy due to variations in operational conditions.
-### Retirement of sensitive attributes
+#### Retirement of sensitive attributes
* We have retired facial analysis capabilities that purport to infer emotional states and identity attributes, such as gender, age, smile, facial hair, hair, and makeup. * Facial detection capabilities (including detecting blur, exposure, glasses, headpose, landmarks, noise, occlusion, facial bounding box) will remain generally available and do not require an application.
-### Fairlearn package and Microsoft's Fairness Dashboard
+#### Fairlearn package and Microsoft's Fairness Dashboard
* [The open-source Fairlearn package and Microsoft's Fairness Dashboard](https://github.com/microsoft/responsible-ai-toolbox/tree/main/notebooks/cognitive-services-examples/face-verification) aim to support customers in measuring the fairness of Microsoft's facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.
-### Limited Access policy
+#### Limited Access policy
* As a part of aligning Face to the updated Responsible AI Standard, a new [Limited Access policy](https://aka.ms/AAh91ff) has been implemented for the Face API and Computer Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. See details on Limited Access for Face [here](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context) and for Computer Vision [here](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context).
+### Computer Vision 3.2-preview deprecation
+
+The preview versions of the 3.2 API are scheduled to be retired in December of 2022. Customers are encouraged to use the generally available (GA) version of the API instead. Note the following changes when migrating from the 3.2-preview versions (see the sketch after this list):
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
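As an illustration, here's a minimal Python sketch of an Analyze Image call that passes the optional _model-version_ parameter; the endpoint, key, and image URL are placeholder assumptions, and the exact response schema is documented in the API reference linked above:

```python
# A sketch of calling the GA Analyze Image API with the optional
# model-version parameter. Endpoint, key, and image URL are assumptions.
import requests

endpoint = "https://your-resource.cognitiveservices.azure.com"  # assumption
key = "YOUR_SUBSCRIPTION_KEY"                                   # assumption

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={
        "visualFeatures": "Description,Tags",
        "model-version": "latest",  # optional; the latest model is the default
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "http://example.com/images/test.jpg"},
)
response.raise_for_status()

# Successful responses include a field reporting which model was used.
print(response.json())
```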
+ ## May 2022 ### OCR (Read) API model is generally available (GA)
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
To create a project, use the `spx csr project create` command. Construct the req
Here's an example Speech CLI command that creates a project:
-```azurecli-interactive
+```azurecli
spx csr project create --name "My Project" --description "My Project Description" --language "en-US" ```
The top-level `self` property in the response body is the project's URI. Use thi
For Speech CLI help with projects, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr project ```
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
To create an endpoint and deploy a model, use the `spx csr endpoint create` comm
Here's an example Speech CLI command to create an endpoint and deploy a model:
-```azurecli-interactive
+```azurecli
spx csr endpoint create --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US" ```
The top-level `self` property in the response body is the endpoint's URI. Use th
For Speech CLI help with endpoints, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr endpoint ```
To redeploy the custom endpoint with a new model, use the `spx csr model update`
Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
-```azurecli-interactive
+```azurecli
spx csr endpoint update --endpoint YourEndpointId --model YourModelId ```
You should receive a response body in the following format:
For Speech CLI help with endpoints, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr endpoint ```
To gets logs for an endpoint, use the `spx csr endpoint list` command. Construct
Here's an example Speech CLI command that gets logs for an endpoint:
-```azurecli-interactive
+```azurecli
spx csr endpoint list --endpoint YourEndpointId ```
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test:
-```azurecli-interactive
+```azurecli
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description" ```
The top-level `self` property in the response body is the evaluation's URI. Use
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation ```
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results:
-```azurecli-interactive
+```azurecli
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation ```
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test:
-```azurecli-interactive
+```azurecli
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description" ```
The top-level `self` property in the response body is the evaluation's URI. Use
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation ```
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results:
-```azurecli-interactive
+```azurecli
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation ```
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
To get the training and transcription expiration dates for a base model, use the
Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
-```azurecli-interactive
+```azurecli
spx csr model status --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model ```
To get the transcription expiration date for your custom model, use the `spx csr
Here's an example Speech CLI command to get the transcription expiration date for your custom model:
-```azurecli-interactive
+```azurecli
spx csr model status --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model ```
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
To create a model with datasets for training, use the `spx csr model create` com
Here's an example Speech CLI command that creates a model with datasets for training:
-```azurecli-interactive
+```azurecli
spx csr model create --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US" ``` > [!NOTE]
The top-level `self` property in the response body is the model's URI. Use this
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model ```
To connect a model to a project, use the `spx csr model update` command. Constru
Here's an example Speech CLI command that connects a model to a project:
-```azurecli-interactive
+```azurecli
spx csr model update --model YourModelId --project YourProjectId ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model ```
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To create a dataset and connect it to an existing project, use the `spx csr data
Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
-```azurecli-interactive
+```azurecli
spx csr dataset create --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US" ```
The top-level `self` property in the response body is the dataset's URI. Use thi
For Speech CLI help with datasets, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr dataset ```
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
# Authenticate requests to Azure Cognitive Services
-Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or access token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
+Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or authentication token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
* Authenticate with a [single-service](#authenticate-with-a-single-service-subscription-key) or [multi-service](#authenticate-with-a-multi-service-subscription-key) subscription key * Authenticate with a [token](#authenticate-with-an-access-token)
Some Azure Cognitive Services accept, and in some cases require, an access token
>[!WARNING] > The services that support access tokens may change over time. Please check the API reference for a service before using this authentication method.
-Both single service and multi-service subscription keys can be exchanged for access tokens in JSON Web Token (JWT) format. Access tokens are valid for 10 minutes.
+Both single-service and multi-service subscription keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes. They're stored in JSON Web Token (JWT) format and can be decoded programmatically using [JWT libraries](https://jwt.io/libraries).
Access tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
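For illustration, here's a minimal Python sketch of the exchange; the regional `issueToken` endpoint shown here follows the common Cognitive Services pattern, and the region and key are placeholder assumptions, so verify the path against your service's API reference:

```python
# A sketch of exchanging a subscription key for a short-lived authentication
# token and using it as a Bearer token. Region and key are assumptions.
import requests

region = "westus2"             # assumption: your resource's region
key = "YOUR_SUBSCRIPTION_KEY"  # assumption

# Assumed token-issuing endpoint pattern; verify against your service's docs.
token = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": key},
).text  # the token is returned as plain text (a JWT)

# Tokens expire after 10 minutes; refresh before then.
headers = {"Authorization": f"Bearer {token}"}
# ...pass `headers` on subsequent requests to services that accept tokens.
```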
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 05/27/2022 Last updated : 06/28/2022
Currently, the following features are available to be used asynchronously:
When you send asynchronous requests, you will incur charges based on the number of text records you include in your request, for each feature used. For example, if you send a text record for sentiment analysis and NER, it will be counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-## Send asynchronous API requests using the REST API
+## Submit an asynchronous job using the REST API
-To create an asynchronous API request, review the [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Analyze) for the JSON body you'll send in your request.
+To submit an asynchronous job, review the [reference documentation](/rest/api/language/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
1. Add your documents to the `analysisInput` object.
-1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisTasks` object.
+1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object.
1. You can optionally:
- 1. Choose a specific version of the model used on your data with the `model-version` value.
+ 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
1. Include additional Language Service features in the `tasks` object, to be performed on your data at the same time.
-Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the `/analyze` endpoint:
+Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the job creation endpoint. For example:
```http
-https://your-endpoint/text/analytics/v3.1/analyze
+POST https://your-endpoint.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01
``` A successful call will return a 202 response code. The `operation-location` in the response header will be the URL you will use to retrieve the API results. The value will look similar to the following URL: ```http
-https://your-endpoint.cognitiveservices.azure.com/text/analytics/v3.2-preview.1/analyze/jobs/12345678-1234-1234-1234-12345678
+GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01
```
-To [retrieve the results](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/AnalyzeStatus) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
+To [get the status and retrieve the results](/rest/api/language/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key` header. The response will include the results of your API call.
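Putting the pieces together, here's a minimal Python sketch of the submit-and-poll flow; the endpoint and key are placeholder assumptions, and the task `kind` value is an assumed JSON discriminator for the `SentimentAnalysisLROTask` object described above (see the reference documentation for the exact schema):

```python
# A sketch of submitting an asynchronous sentiment analysis job and polling
# the operation-location URL for results. Endpoint and key are assumptions.
import time

import requests

endpoint = "https://your-endpoint.cognitiveservices.azure.com"  # assumption
key = "YOUR_SUBSCRIPTION_KEY"                                   # assumption
headers = {"Ocp-Apim-Subscription-Key": key}

body = {
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "The service was great."}
        ]
    },
    "tasks": [
        # Assumed discriminator for the SentimentAnalysisLROTask object.
        {"kind": "SentimentAnalysis", "taskName": "sentiment"}
    ],
}

submit = requests.post(
    f"{endpoint}/language/analyze-text/jobs",
    params={"api-version": "2022-05-01"},
    headers=headers,
    json=body,
)
submit.raise_for_status()  # a successful call returns 202
job_url = submit.headers["operation-location"]

# Poll until the job leaves the in-progress states.
while True:
    job = requests.get(job_url, headers=headers).json()
    if job.get("status") not in ("notStarted", "running"):
        break
    time.sleep(2)

print(job)
```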
## Send asynchronous API requests using the client library
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/27/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/27/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations) * v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for: * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
+* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and [reference documentation](/rest/api/language/) for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+ * [Entity linking](./entity-linking/quickstart.md?pivots=rest-api)
+ * [Language detection](./language-detection/quickstart.md?pivots=rest-api)
+ * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
+ * [Named entity recognition](./named-entity-recognition/quickstart.md?pivots=rest-api)
+ * [PII detection](./personally-identifiable-information/quickstart.md?pivots=rest-api)
+ * [Sentiment analysis and opinion mining](./sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+ * [Text analytics for health](./text-analytics-for-health/quickstart.md?pivots=rest-api)
## May 2022
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no one, not even Microsoft, is "above" the ledger.
-As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
+As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://ccf.dev) to provide a high-integrity solution that is tamper-protected and tamper-evident. One ledger spans three or more identical instances, each of which runs in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
Azure confidential ledger offers unique data integrity advantages, including immutability, tamper-proofing, and append-only operations. These features, which ensure that all records are kept intact, are ideal when critical metadata records must not be modified, such as for regulatory compliance and archival purposes.
The confidential ledger is exposed through REST APIs which can be integrated int
## Ledger security
-This section defines the security protections for the ledger. The ledger APIs use client certificate-based authentication. Currently, the ledger supports certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and also role-based access (for example, owner, reader, and contributor).
+The ledger APIs support a certificate-based authentication process with owner roles, as well as Azure Active Directory (AAD) based authentication and role-based access (for example, owner, reader, and contributor).
-The data to the ledger is sent through TLS 1.2 connection and the TLS 1.2 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
+Data to the ledger is sent through a TLS 1.3 connection, and the TLS 1.3 connection terminates inside the hardware-backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
### Ledger storage
The Functional APIs allow direct interaction with your instantiated confidential
## Constraints
-- Once a confidential ledger is created, you cannot change the ledger type.
-- Azure confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes.
+- Once a confidential ledger is created, you cannot change the ledger type (private or public).
- Azure confidential ledger deletion leads to a "hard delete", so your data will not be recoverable after deletion.
- Azure confidential ledger names must be globally unique. Ledgers with the same name, irrespective of their type, are not allowed.
The Functional APIs allow direct interaction with your instantiated confidential
| Term | Definition |
|--|--|
| ACL | Azure confidential ledger |
-| Ledger | An immutable append record of transactions (also known as a Blockchain) |
-| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the ledger. |
-| Global commit | A confirmation that transaction was globally committed and is part of the ledger. |
+| Ledger | An immutable append-only record of transactions (also known as a Blockchain) |
+| Commit | A confirmation that a transaction has been appended to the ledger. |
| Receipt | Proof that the transaction was processed by the ledger. |
## Next steps
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
A container app has access to different types of storage. A single app can take
| Storage type | Description | Usage examples |
|--|--|--|
-| [Container file system](#container-file-system) | Temporary storage scoped to the environment | Writing a local app cache. |
+| [Container file system](#container-file-system) | Temporary storage scoped to the local container | Writing a local app cache. |
| [Temporary storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. |
| [Azure Files](#azure-files) | Permanent storage | Writing files to a file share to make data accessible by other systems. |
The following ARM template snippets demonstrate how to add an Azure Files share
See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example.
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
Last updated 05/07/2021
When you use an Azure container registry as part of a development workflow, the registry can quickly fill up with images or other artifacts that aren't needed after a short period. You might want to delete all tags that are older than a certain duration or match a specified name filter. To delete multiple artifacts quickly, this article introduces the `acr purge` command you can run as an on-demand or [scheduled](container-registry-tasks-scheduled.md) ACR Task.
-The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.4`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub. `acr purge` is currently in preview.
+The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.5`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub. `acr purge` is currently in preview.
You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the ACR task examples in this article. If you'd like to use it locally, version 2.0.76 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
The `acr purge` container command deletes images by tag in a repository that mat
At a minimum, specify the following when you run `acr purge`:
-* `--filter` - A repository and a *regular expression* to filter tags in the repository. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, and `--filter "hello-world:^1.*"` matches tags beginning with `1`. Pass multiple `--filter` parameters to purge multiple repositories.
+* `--filter` - A repository name *regular expression* and a tag name *regular expression* to filter images in the registry. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, `--filter "hello-world:^1.*"` matches tags beginning with `1` in the `hello-world` repository, and `--filter ".*/cache:.*"` matches all tags in the repositories ending in `/cache`. You can also pass multiple `--filter` parameters.
* `--ago` - A Go-style [duration string](https://go.dev/pkg/time/) to indicate a duration beyond which images are deleted. The duration consists of a sequence of one or more decimal numbers, each with a unit suffix. Valid time units include "d" for days, "h" for hours, and "m" for minutes. For example, `--ago 2d3h6m` selects all filtered images last modified more than 2 days, 3 hours, and 6 minutes ago, and `--ago 1.5h` selects images last modified more than 1.5 hours ago.

`acr purge` supports several optional parameters. The following two are used in examples in this article:
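For instance, an on-demand purge that combines the two required parameters can be submitted as a quick ACR task run. This is a sketch; the registry name `myregistry` and the `hello-world` filter are placeholders:

```azurecli
az acr run \
  --registry myregistry \
  --cmd "acr purge --filter 'hello-world:.*' --ago 1d" \
  /dev/null
```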
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md
Each of the following aliases points to a stable image in Microsoft Container Re
| Alias | Image |
| -- | -- |
-| `acr` | `mcr.microsoft.com/acr/acr-cli:0.4` |
-| `az` | `mcr.microsoft.com/acr/azure-cli:f75cfff` |
-| `bash` | `mcr.microsoft.com/acr/bash:f75cfff` |
-| `curl` | `mcr.microsoft.com/acr/curl:f75cfff` |
+| `acr` | `mcr.microsoft.com/acr/acr-cli:0.5` |
+| `az` | `mcr.microsoft.com/acr/azure-cli:7ee1d7f` |
+| `bash` | `mcr.microsoft.com/acr/bash:7ee1d7f` |
+| `curl` | `mcr.microsoft.com/acr/curl:7ee1d7f` |
The following example task uses several aliases to [purge](container-registry-auto-purge.md) image tags older than 7 days in the repo `samples/hello-world` in the run registry:
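A comparable scheduled purge can also be created directly with the Azure CLI. In the following sketch, the task name `purgeTask`, the daily midnight cron schedule, and the registry name are assumptions:

```azurecli
az acr task create \
  --name purgeTask \
  --registry myregistry \
  --cmd "acr purge --filter 'samples/hello-world:.*' --ago 7d" \
  --schedule "0 0 * * *" \
  --context /dev/null
```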
container-registry Tutorial Deploy Connected Registry Nested Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md
Overall, the lower layer deployment file is similar to the top layer deployment
"modules": { "connected-registry": { "settings": {
- "image": "$upstream:8000/acr/connected-registry:0.5.0",
+ "image": "$upstream:8000/acr/connected-registry:0.7.0",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/azureuser/connected-registry:/var/acr/data\"]}}" }, "type": "docker",
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
Burst capacity applies only to Azure Cosmos DB accounts using provisioned throug
## How burst capacity works
> [!NOTE]
-> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity.
+> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity. Before enabling burst capacity, it is also recommended to evaluate if your partition layout can be [merged](merge.md) to permanently give more RU/s per physical partition without relying on burst capacity.
Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests that are consumed beyond the provisioned 100 RU/s would have been rate limited (429).
After the 10 seconds is over, the burst capacity has been used up. If the worklo
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+
+Before submitting your request:
+- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.
## Limitations
To get started using burst capacity, enroll in the preview by submitting a reque
To enroll in the preview, your Cosmos account must meet all the following criteria:
- Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
- If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+ - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, or API for MongoDB.
- Your Cosmos account isn't using any unsupported connectors
  - Azure Data Factory
  - Azure Stream Analytics
  - Logic Apps
  - Azure Functions
  - Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
### SDK requirements (SQL and Table API only)
#### SQL API
For Table API accounts, burst capacity is supported only when using the latest v
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
The following links show how to update containers analytical TTL by using PowerS
* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection)
* [Azure Cosmos DB SQL API](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer)
-## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a container
+## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a SQL API container
-Analytical store can be disabled in SQL API containers using `Update-AzCosmosDBSqlContainer` PowerShell command, by updating `-AnalyticalStorageTtl` (analytical Time-To-Live) to `0`. Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell.
+
+> [!NOTE]
+> Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+
+> [!NOTE]
+> Please note that disabling analytical store is not available for MongoDB API collections.
+### Azure CLI
+
+Set the `--analytical-storage-ttl` parameter to 0 using the `az cosmosdb sql container update` Azure CLI command.
+
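A minimal sketch of that command; the resource group, account, database, and container names are placeholders:

```azurecli-interactive
az cosmosdb sql container update \
    --resource-group my-resource-group \
    --account-name my-cosmos-account \
    --database-name my-database \
    --name my-container \
    --analytical-storage-ttl 0
```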
+### PowerShell
+
+Set the `-AnalyticalStorageTtl` parameter to 0 using the `Update-AzCosmosDBSqlContainer` PowerShell command.
-Currently you can't be disabled in MongoDB API collections.
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Last updated 05/09/2022
# Merge partitions in Azure Cosmos DB (preview)
[!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
+Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container in place. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
## Getting started
-To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using partition merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+
+Before submitting your request:
+- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Partition Merge**. Run the **Check eligibility for partition merge preview** diagnostic.
### Merging physical partitions
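As an illustration, invoking a merge on a container might look like the following sketch. This assumes the preview `az cosmosdb sql container merge` command from the `cosmosdb-preview` CLI extension; the command, the extension, and all resource names here are assumptions to verify against the current preview documentation:

```azurecli-interactive
az cosmosdb sql container merge \
    --resource-group my-resource-group \
    --account-name my-cosmos-account \
    --database-name my-database \
    --name my-container
```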
To enroll in the preview, your Cosmos account must meet all the following criter
* Logic Apps
* Azure Functions
* Azure Search
+ * Azure Cosmos DB Spark connector
+ * Azure Cosmos DB data migration tool
+ * Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
### Account resources and configuration
* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
Support for other SDKs is planned for the future.
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Use the following steps to migrate your account from periodic backup to continuo
Connect-AzAccount
```
- 1. Migrate your account from periodic to continuous backup mode with ``continuous30days`` tier or ``continuous7days`` days. If a tier value isn't provided, it's assumed to be ``continous30days``:
+ 1. Migrate your account from periodic to continuous backup mode with the ``continuous30days`` or ``continuous7days`` tier. If a tier value isn't provided, it's assumed to be ``continuous30days``:
```azurepowershell-interactive
Update-AzCosmosDBAccount `
Use the following steps to migrate your account from periodic backup to continuo
az login
```
-1. Migrate the account to ``continuous30days`` or ``continuous7days`` tier. If tier value isn't provided, it's assumed to be ``continous30days``:
+1. Migrate the account to ``continuous30days`` or ``continuous7days`` tier. If tier value isn't provided, it's assumed to be ``continuous30days``:
```azurecli-interactive
az cosmosdb update -n <myaccount> -g <myresourcegroup> --backup-policy-type continuous
az deployment group create -g <ResourceGroup> --template-file <ProvisionTemplate
## Change Continuous Mode tiers
-You can switch between ``Continous30Days`` and ``Continous7Days`` in Azure PowerShell, Azure CLI or the Azure portal.
+You can switch between ``Continuous30Days`` and ``Continuous7Days`` in Azure PowerShell, Azure CLI or the Azure portal.
The following Azure CLI command illustrates switching an existing account to ``Continuous7Days``:
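A sketch of that command, assuming the ``--continuous-tier`` parameter of `az cosmosdb update`:

```azurecli-interactive
az cosmosdb update -n <myaccount> -g <myresourcegroup> \
    --backup-policy-type continuous \
    --continuous-tier "Continuous7Days"
```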
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
New-AzCosmosDBAccount `
### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
-The following cmdlet is an example of continuous backup account configured with the ``Continous30days`` tier:
+The following cmdlet is an example of continuous backup account configured with the ``Continuous30days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
New-AzCosmosDBAccount `
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of an account with continuous backup policy configured with the ``Continous30days`` tier:
+The following cmdlet is an example of an account with continuous backup policy configured with the ``Continuous30days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
if err != nil {
// Create database client
databaseClient, err := client.NewDatabase("<databaseName>")
if err != nil {
- log.fatal("Failed to create database client:", err)
+ log.Fatal("Failed to create database client:", err)
}

// Create container client
containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
if err != nil {
- log.fatal("Failed to create a container client:", err)
+ log.Fatal("Failed to create a container client:", err)
}
```
-**Create a Cosmos database**
+**Create a Cosmos DB database**
```go
-databaseProperties := azcosmos.DatabaseProperties{ID: "<databaseName>"}
-
-databaseResp, err := client.CreateDatabase(context.TODO(), databaseProperties, nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createDatabase(client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ // sets the name of the database
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // creating the database
+ ctx := context.TODO()
+ databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+ if err != nil {
+ log.Fatal(err)
+ }
+ // log the creation so databaseResp is used (Go rejects unused local variables)
+ log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+ return nil
}
```
**Create a container**
```go
-database, err := client.NewDatabase("<databaseName>") //returns struct that represents a database.
-if err != nil {
- log.Fatal(err)
-}
-
-properties := azcosmos.ContainerProperties{
- ID: "ToDoItems",
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{"/category"},
- },
-}
-
-resp, err := database.CreateContainer(context.TODO(), properties, nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName) // returns a struct that represents a database
+ if err != nil {
+ log.Fatal("Failed to create a database client:", err)
+ }
+
+ // Setting container properties
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // Setting container options
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+ if err != nil {
+ log.Fatal(err)
+
+ }
+ log.Printf("Container [%v] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+
+ return nil
}
```
**Create an item**
```go
-container, err := client.NewContainer("<databaseName>", "<containerName>")
-if err != nil {
- log.Fatal(err)
-}
-
-pk := azcosmos.NewPartitionKeyString("personal") //specifies the value of the partition key
-
-item := map[string]interface{}{
- "id": "1",
- "category": "personal",
- "name": "groceries",
- "description": "Pick up apples and strawberries",
- "isComplete": false,
-}
-
-marshalled, err := json.Marshal(item)
-if err != nil {
- log.Fatal(err)
-}
-
-itemResponse, err := container.CreateItem(context.TODO(), pk, marshalled, nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+/*
+ item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+*/
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+ // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting item options upon creation, e.g. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ if err != nil {
+ return err
+ }
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
}
```
**Read an item**
```go
-getResponse, err := container.ReadItem(context.TODO(), pk, "1", nil)
-if err != nil {
- log.Fatal(err)
-}
-
-var getResponseBody map[string]interface{}
-err = json.Unmarshal(getResponse.Value, &getResponseBody)
-if err != nil {
- log.Fatal(err)
-}
-
-fmt.Println("Read item with Id 1:")
-
-for key, value := range getResponseBody {
- fmt.Printf("%s: %v\n", key, value)
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("Failed to create a container client: %s", err)
+ }
+
+ // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
}
```
**Delete an item**
```go
-delResponse, err := container.DeleteItem(context.TODO(), pk, "1", nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "fmt"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("Failed to create a container client: %s", err)
+ }
+ // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
}
```
Get your Azure Cosmos account credentials by following these steps:
After you've copied the **URI** and **PRIMARY KEY** of your account, save them to a new environment variable on the local machine running the application.
-Use the values copied from the Azure port to set the following environment variables:
+Use the values copied from the Azure portal to set the following environment variables:
# [Bash](#tab/bash)

```bash
-export AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
-export AZURE_COSMOS_PRIMARY_KEY=<Your_COSMOS_PRIMARY_KEY>
+export AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+export AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
```

# [PowerShell](#tab/powershell)

```powershell
-$env:AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
-$env:AZURE_COSMOS_PRIMARY_KEY=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
```
Create a new Go module by running the following command:
```
go mod init azcosmos
```
-Create a new file named `main.go` and copy the desired code from the sample sections above.
+```go
+
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func main() {
+ endpoint := os.Getenv("AZURE_COSMOS_ENDPOINT")
+ if endpoint == "" {
+ log.Fatal("AZURE_COSMOS_ENDPOINT could not be found")
+ }
+
+ key := os.Getenv("AZURE_COSMOS_KEY")
+ if key == "" {
+ log.Fatal("AZURE_COSMOS_KEY could not be found")
+ }
+
+ var databaseName = "adventureworks"
+ var containerName = "customer"
+ var partitionKey = "/customerId"
+
+ item := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+
+ cred, err := azcosmos.NewKeyCredential(key)
+ if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+ }
+
+ // Create a CosmosDB client
+ client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+ if err != nil {
+ log.Fatal("Failed to create cosmos db client: ", err)
+ }
+
+ err = createDatabase(client, databaseName)
+ if err != nil {
+ log.Printf("createDatabase failed: %s\n", err)
+ }
+
+ err = createContainer(client, databaseName, containerName, partitionKey)
+ if err != nil {
+ log.Printf("createContainer failed: %s\n", err)
+ }
+
+ err = createItem(client, databaseName, containerName, item.CustomerId, item)
+ if err != nil {
+ log.Printf("createItem failed: %s\n", err)
+ }
+
+ err = readItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("readItem failed: %s\n", err)
+ }
+
+ err = deleteItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("deleteItem failed: %s\n", err)
+ }
+}
+
+func createDatabase(client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // This is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+ ctx := context.TODO()
+ databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Database [%s] already exists\n", databaseName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+ }
+ return nil
+}
+
+func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = adventureworks
+// containerName = customer
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName)
+ if err != nil {
+ return err
+ }
+
+ // creating a container
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ // setting options upon container creation
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Container [%s] already exists\n", containerName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Container [%s] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+ }
+ return nil
+}
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+
+/* item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ CreationDate: "2014-02-25T00:00:00",
+ }
+*/
+ // create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+ // specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting the item options upon creation, e.g. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Item with partitionkey value %s already exists\n", pk)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+ }
+
+ return nil
+}
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+ // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
+}
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client:: %s", err)
+ }
+ // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
+}
+
+```
+Create a new file named `main.go` and copy the code from the sample section above.
Run the following command to execute the app:
```
go run main.go
```
## Clean up resources
[!INCLUDE [cosmosdb-delete-resource-group](../includes/cosmos-db-delete-resource-group.md)]
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/distribute-throughput-across-partitions.md
In general, usage of this feature is recommended for scenarios when both the fol
- You're consistently seeing greater than 1-5% overall rate of 429 responses
- You've a consistent, predictable hot partition
-If you aren't seeing 429 responses and your end to end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements.
+If you aren't seeing 429 responses and your end to end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements. If you have a small amount of RU/s per partition, you can also use the [partition merge (preview)](../merge.md) to reduce the number of partitions and ensure more RU/s per partition for the same total provisioned throughput.
## Getting started
-To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+Before submitting your request:
+- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic.
## Example scenario
To enroll in the preview, your Cosmos account must meet all the following criter
- Logic Apps
- Azure Functions
- Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
+
### SDK requirements (SQL API only)
Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
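For reference, a compliant SDK version can be added to a .NET project from the command line. This is a sketch; the version shown is the minimum the article names, and a newer 3.x release may exist:

```bash
dotnet add package Microsoft.Azure.Cosmos --version 3.27.0
```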
Support for other SDKs is planned for the future.
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 06/14/2022 Last updated : 06/29/2022
You can request billing ownership of products for the subscription types listed
- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>
- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)<sup>1</sup>
- [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
- - Transfers are only supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer.
- - Transfers aren't supported for indirect EA customers. An indirect EA is one where a customer signs an agreement with a Microsoft partner.
+ - Subscription and reservation transfers are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer.
+ - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner.
- [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)
- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup>
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
Previously updated : 10/13/2021 Last updated : 06/29/2022
This article provides suggestions to troubleshoot common problems with the Azure
| If the error message contains the string "SqlException", SQL Database throws an error indicating some specific operation failed. | If the SQL error is not clear, try to alter the database to the latest compatibility level '150'. It can throw the latest version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat). <br/> For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
| If the error message contains the string "PdwManagedToNativeInteropException", it's usually caused by a mismatch between the source and sink column sizes. | Check the size of both the source and sink columns. For further help, contact Azure SQL support. |
| If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md). |
+ | If the error message contains "Execution Timeout Expired", it's usually caused by query timeout. | Configure **Query timeout** in the source and **Write batch timeout** in the sink to increase timeout. |
## Error code: SqlUnauthorizedAccess
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 06/09/2022 Last updated : 06/29/2022
All the linked service types are supported for parameterization.
- FTP
- Generic HTTP
- Generic REST
+- Google AdWords
- MySQL
- OData
- Oracle
databox-online Azure Stack Edge Gpu Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-shares.md
Previously updated : 05/03/2022 Last updated : 06/29/2022
# Use Azure portal to manage shares on your Azure Stack Edge Pro
Do the following steps in the Azure portal to create a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
- ![Select add share](media/azure-stack-edge-gpu-manage-shares/add-share-1.png)
+ ![Screenshot of selecting the Add share option on the command bar.](media/azure-stack-edge-gpu-manage-shares/add-share-1.png)
2. In **Add Share**, specify the share settings. Provide a unique name for your share.
Do the following steps in the Azure portal to create a share.
6. This step depends on whether you're creating an SMB or an NFS share.
   - **If creating an SMB share** - In the **All privilege local user** field, choose from **Create new** or **Use existing**. If creating a new local user, provide the **username**, **password**, and then confirm password. This assigns the permissions to the local user. After you have assigned the permissions here, you can then use File Explorer to modify these permissions.
- ![Add SMB share](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
+ ![Screenshot of the Add SMB share page.](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
If you check allow only read operations for this share data, you can specify read-only users.
   - **If creating an NFS share** - You need to supply the **IP addresses of the allowed clients** that can access the share.
- ![Add NFS share](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
+ ![Screenshot of the Add NFS share page.](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
7. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the share is automatically mounted after it's created. When this option is selected, the Edge module can also use the compute with the local mount point.
Do the following steps in the Azure portal to create a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
- ![Select add share 2](media/azure-stack-edge-gpu-manage-shares/add-local-share-1.png)
+ ![Screenshot of the Select add share 2 option on the command bar.](media/azure-stack-edge-gpu-manage-shares/add-local-share-1.png)
2. In **Add Share**, specify the share settings. Provide a unique name for your share.
Do the following steps in the Azure portal to create a share.
7. Select **Create**.
- ![Create local share](media/azure-stack-edge-gpu-manage-shares/add-local-share-2.png)
+ ![Screenshot of the Create local share with the Configure as Edge local share option.](media/azure-stack-edge-gpu-manage-shares/add-local-share-2.png)
You see a notification that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
- ![View updates Shares blade](media/azure-stack-edge-gpu-manage-shares/add-local-share-3.png)
+ ![Screenshot of the View updates Shares blade.](media/azure-stack-edge-gpu-manage-shares/add-local-share-3.png)
Select the share to view the local mountpoint for the Edge compute modules for this share.
- ![View local share details](media/azure-stack-edge-gpu-manage-shares/add-local-share-4.png)
+ ![Screenshot of the View local share details.](media/azure-stack-edge-gpu-manage-shares/add-local-share-4.png)
## Mount a share
If you created a share before you configured compute on your Azure Stack Edge Pr
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
- ![Select share](media/azure-stack-edge-gpu-manage-shares/mount-share-1.png)
+ ![Screenshot of the Select share to mount.](media/azure-stack-edge-gpu-manage-shares/mount-share-1.png)
2. Select **Mount**.
- ![Select mount](media/azure-stack-edge-gpu-manage-shares/mount-share-2.png)
+ ![Screenshot of the Select mount option in the command bar.](media/azure-stack-edge-gpu-manage-shares/mount-share-2.png)
3. When prompted for confirmation, select **Yes**. This will mount the share.
- ![Confirm mount](media/azure-stack-edge-gpu-manage-shares/mount-share-3.png)
+ ![Screenshot of the Confirm mount dialog.](media/azure-stack-edge-gpu-manage-shares/mount-share-3.png)
4. After the share is mounted, go to the list of shares. You'll see that the **Used for compute** column shows the share status as **Enabled**.
- ![Share mounted](media/azure-stack-edge-gpu-manage-shares/mount-share-4.png)
+ ![Screenshot of the Share mounted confirmation.](media/azure-stack-edge-gpu-manage-shares/mount-share-4.png)
5. Select the share again to view the local mountpoint for the share. Edge compute module uses this local mountpoint for the share.
- ![Local mountpoint for the share](media/azure-stack-edge-gpu-manage-shares/mount-share-5.png)
+ ![Screenshot of the local mount point for the share.](media/azure-stack-edge-gpu-manage-shares/mount-share-5.png)
## Unmount a share
Do the following steps in the Azure portal to unmount a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share that you want to unmount. You want to make sure that the share you unmount isn't used by any modules. If the share is used by a module, then you'll see issues with the corresponding module.
- ![Select share 2](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
+ ![Screenshot of select share to unmount.](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
2. Select **Unmount**.
- ![Select unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-2.png)
+ ![Screenshot of selecting the unmount option from the command bar.](media/azure-stack-edge-gpu-manage-shares/unmount-share-2.png)
3. When prompted for confirmation, select **Yes**. This will unmount the share.
- ![Confirm unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-3.png)
+ ![Screenshot of confirming the unmount operation.](media/azure-stack-edge-gpu-manage-shares/unmount-share-3.png)
4. After the share is unmounted, go to the list of shares. You'll see that **Used for compute** column shows the share status as **Disabled**.
- ![Share unmounted](media/azure-stack-edge-gpu-manage-shares/unmount-share-4.png)
+ ![Screenshot of the share unmounted confirmation.](media/azure-stack-edge-gpu-manage-shares/unmount-share-4.png)
## Delete a share
Use the following steps in the Azure portal to delete a share.
1. From the list of shares, select the share that you want to delete.
- ![Screenshot of select share 3](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
+ ![Screenshot of select share to delete.](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
2. Select **Delete**.
- ![Screenshot of select delete](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
+ ![Screenshot of the delete option confirmation.](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
3. When prompted for confirmation, select **Yes**.
- ![Confirm delete](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
+ ![Screenshot of the deleted share confirmation.](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
The list of shares updates to reflect the deletion.
## Refresh a share
Do the following steps in the Azure portal to refresh a share.
1. In the Azure portal, go to **Shares**. Select the share that you want to refresh.
- ![Select share 4](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
+ ![Screenshot of the share to refresh.](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
2. Select **Refresh**.
- ![Screenshot of select refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
+ ![Screenshot of select refresh data.](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
3. When prompted for confirmation, select **Yes**. A job starts to refresh the contents of the on-premises share.
- ![Confirm refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
+ ![Screenshot of confirmation to refresh data for the share.](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
4. While the refresh is in progress, the refresh option is grayed out in the context menu. Select the job notification to view the refresh job status.
5. The time to refresh depends on the number of files in the Azure container and the files on the device. Once the refresh has successfully completed, the share timestamp is updated. Even if the refresh has partial failures, the operation is considered successful and the timestamp is updated. The refresh error logs are also updated.
-![Updated timestamp](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
+ ![Screenshot of the updated timestamp for the refresh operation.](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
-If there's a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
+ If there's a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
## Sync pinned files
To automatically sync up pinned files, do the following steps in the Azure portal.
2. Go to **Containers** and select **+ Container** to create a container. Name this container *newcontainer*. Set the **Public access level** to Container.
- ![Automated sync for pinned files 1](media/azure-stack-edge-gpu-manage-shares/image-1.png)
+ ![Screenshot of the automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-1.png)
3. Select the container name and set the following metadata (a CLI alternative for steps 2 and 3 is sketched after these steps):
   - Name = "Pinned"
   - Value = "True"
- ![Automated sync for pinned files 2](media/azure-stack-edge-gpu-manage-shares/image-2.png)
+ ![Screenshot of metadata options for automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-2.png)
4. Create a new share on your device. Map it to the pinned container by choosing the existing container option. Mark the share as read only. Create a new user and specify the user name and a corresponding password for this share.
- ![Automated sync for pinned files 3](media/azure-stack-edge-gpu-manage-shares/image-3.png)
+ ![Screenshot of new share mapping using an existing container for automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-3.png)
5. From the Azure portal, browse to the container that you created. Upload the file that you want to be pinned into the new container that has the metadata set to pinned.
6. Select **Refresh data** in the Azure portal for the device to download the pinning policy for that particular Azure Storage container.
- ![Automated sync for pinned files 4](media/azure-stack-edge-gpu-manage-shares/image-4.png)
+ ![Screenshot of the Refresh data option in automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-4.png)
7. Access the new share that was created on the device. The file that was uploaded to the storage account is now downloaded to the local share.
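If you prefer to script the container setup in steps 2 and 3, the same result can be achieved with the Azure CLI. This is a minimal sketch, assuming the `az storage` commands with Azure AD authentication (`--auth-mode login`); the storage account name is a placeholder:

```azurecli
# Step 2: create the container with container-level public access
az storage container create --name newcontainer --account-name <storage-account> \
    --public-access container --auth-mode login

# Step 3: set the "Pinned" metadata on the container
az storage container metadata update --name newcontainer --account-name <storage-account> \
    --metadata Pinned=True --auth-mode login
```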
## Sync storage keys
Do the following steps in the Azure portal to sync your storage access key.
1. Go to **Overview** in your resource. From the list of shares, select a share associated with the storage account that you need to sync.
- ![Select share with relevant storage account](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
+ ![Screenshot of selecting a share with relevant storage account.](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
2. Select **Sync storage key**. Select **Yes** when prompted for confirmation.
- ![Select Sync storage key](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
+ ![Screenshot of selecting a Sync storage key.](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
3. Exit the dialog once the sync is complete.
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
Previously updated : 03/04/2022 Last updated : 05/03/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
If you have an existing Azure Stack Edge resource to manage your physical device
### Create an order
-You can use the Azure Edge Hardware Center to explore and order a variety of hardware from the Azure hybrid portfolio including Azure Stack Edge Pro 2 devices.
+You can use the Azure Edge Hardware Center to explore and order various hardware from the Azure hybrid portfolio including Azure Stack Edge Pro 2 devices.
When you place an order through the Azure Edge Hardware Center, you can order multiple devices to be shipped to more than one address, and you can reuse ship-to addresses from other orders.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 06/22/2022 Last updated : 06/29/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Access from an unusual location**<br>(CosmosDB_GeoAnomaly) | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low |
| **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
| **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
+| **Suspicious extraction of Azure Cosmos DB account keys**<br>(AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this isn't a legitimate source, this may be a high-impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | High |
| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
| **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |

## <a name="alerts-azurenetlayer"></a>Alerts for Azure network layer

[Further details and notes](other-threat-protections.md#network-layer)
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
You can use this information to quickly remediate security issues and improve th
Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel, to any third-party SIEM, or to any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md).

> [!TIP]
-> For a comprehensive list of all Defender for Storage alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
+> For a comprehensive list of all Defender for Azure Cosmos DB alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
## Alert types
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 06/15/2022 Last updated : 06/28/2022 # Enable Microsoft Defender for Containers
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).
-You can learn more about from the product manager by watching [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md).
-
-You can also watch [Protect Containers in GCP with Defender for Containers](episode-ten.md) to learn how to protect your containers.
+You can learn more by watching these videos from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md)
+- [Protect Containers in GCP with Defender for Containers](episode-ten.md)
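If you enable the plan from a script rather than the portal, the `az security pricing` command can set the Containers plan tier at subscription scope. A minimal sketch (the subscription comes from your current CLI context):

```azurecli
# Enable the Microsoft Defender for Containers plan on the current subscription
az security pricing create --name Containers --tier standard

# Verify the plan's tier
az security pricing show --name Containers
```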
::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke"

> [!NOTE]
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 06/15/2022 Last updated : 06/28/2022 # Overview of Microsoft Defender for Containers
Microsoft Defender for Containers is the cloud-native solution for securing your
[How does Defender for Containers work in each Kubernetes platform?](defender-for-containers-architecture.md)
-You can learn more from the product manager about Microsoft Defender for Containers by watching [Microsoft Defender for Containers](episode-three.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Containers](episode-three.md)
## Microsoft Defender for Containers plan availability
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 06/26/2022 Last updated : 06/29/2022 # Overview of Microsoft Defender for Servers
To protect machines in hybrid and multicloud environments, Defender for Cloud us
> [!TIP]
> For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
-You can learn more from the product manager about Defender for Servers, by watching [Microsoft Defender for Servers](episode-five.md). You can also watch [Enhanced workload protection features in Defender for Servers](episode-twelve.md), or learn how to [deploy in Defender for Servers in AWS and GCP](episode-fourteen.md).
+You can learn more by watching these videos from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Servers](episode-five.md)
+- [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
+- [Deploy Defender for Servers in AWS and GCP](episode-fourteen.md)
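As with the other plans, Defender for Servers can also be toggled per subscription from the CLI. A minimal sketch using `az security pricing` (tier `standard` enables the plan; `free` disables it):

```azurecli
# Check the current Defender for Servers plan tier on the subscription
az security pricing show --name VirtualMachines

# Enable the plan
az security pricing create --name VirtualMachines --tier standard
```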
## What are the Microsoft Defender for server plans?
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/16/2022 Last updated : 06/29/2022 # Overview of Microsoft Defender for Storage
Analyzed telemetry of Azure Blob Storage includes operation types such as **Get
Defender for Storage doesn't access the Storage account data and has no impact on its performance.
-You can learn more about from the product manager by watching [Defender for Storage in the field](episode-thirteen.md)
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Defender for Storage in the field](episode-thirteen.md)
## Availability
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines Previously updated : 06/15/2022 Last updated : 06/29/2022 # Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
For a quick overview of threat and vulnerability management, watch this video:
> [!TIP]
> As well as alerting you to vulnerabilities, threat and vulnerability management provides additional functionality for Defender for Cloud's asset inventory tool. Learn more in [Software inventory](asset-inventory.md#access-a-software-inventory).
-You can also learn more from the product manager about security posture by watching [Microsoft Defender for Servers](episode-five.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Servers](episode-five.md)
## Availability
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Previously updated : 06/12/2022 Last updated : 06/29/2022
Defender for Cloud is offered in two modes:
- [If a Log Analytics agent reports to multiple workspaces, is the 500 MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
- [Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
- [What data types are included in the 500 MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+- [How can I monitor my daily usage](#how-can-i-monitor-my-daily-usage)
### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?

Azure subscriptions may have multiple administrators with permissions to change the pricing settings. To find out which user made a change, use the Azure Activity Log.
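The same lookup can be scripted with the Azure CLI. A minimal sketch; the 30-day window and the JMESPath projection are illustrative choices:

```azurecli
# List pricing-tier changes from the last 30 days, with who made them
az monitor activity-log list --offset 30d \
    --query "[?operationName.value=='Microsoft.Security/pricings/write'].{Caller:caller, Time:eventTimestamp, Status:status.value}" \
    --output table
```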
Defender for Cloud's billing is closely tied to the billing for Log Analytics. [
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-## How can I monitor my daily usage
+### How can I monitor my daily usage
You can view your data usage in two ways: from the Azure portal, or by running a script.
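As a sketch of the script option, the `Usage` table in a Log Analytics workspace records billable ingestion and can be queried from the CLI. This assumes the `az monitor log-analytics query` command; the workspace GUID is a placeholder:

```azurecli
# Summarize the last day's billable ingestion by data type
az monitor log-analytics query --workspace <workspace-guid> \
    --analytics-query "Usage | where TimeGenerated > ago(1d) | where IsBillable == true | summarize BillableGB = sum(Quantity) / 1000.0 by DataType | sort by BillableGB desc" \
    --output table
```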
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Title: Microsoft Defender for Servers
-description: Learn all about Microsoft Defender for Servers from the product manager.
+description: Learn all about Microsoft Defender for Servers.
Previously updated : 06/01/2022 Last updated : 06/28/2022 # Microsoft Defender for Servers
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud Previously updated : 06/15/2022 Last updated : 06/29/2022 # Prioritize security actions by data sensitivity
Microsoft Defender for Cloud customers using Microsoft Purview can benefit from
This page explains the integration of Microsoft Purview's data sensitivity classification labels within Defender for Cloud.
-You can learn more from the product manager about Microsoft Defender for Cloud's [integration with Azure Purview](episode-two.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Integration with Azure Purview](episode-two.md)
## Availability |Aspect|Details|
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 06/19/2022 Last updated : 06/29/2022 zone_pivot_groups: connect-aws-accounts
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
:::image type="content" source="./media/quickstart-onboard-aws/aws-account-in-overview.png" alt-text="Four AWS accounts listed on Defender for Cloud's overview dashboard" lightbox="./media/quickstart-onboard-aws/aws-account-in-overview.png":::
-You can learn more from the product manager about Microsoft Defender for Cloud's new AWS connector by watching [New AWS connector](episode-one.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [New AWS connector](episode-one.md)
::: zone pivot="env-settings"
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Improving your security posture with recommendations in Microsoft Defender for Cloud description: This document walks you through how to identify security recommendations that will help you improve your security posture. Previously updated : 06/15/2022 Last updated : 06/29/2022 # Find recommendations that can improve your security posture
To get to the list of recommendations:
You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
-You can learn more from the product manager about security posture by watching [Security posture management improvements](episode-four.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Security posture management improvements](episode-four.md)
## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a>
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 06/08/2022 Last updated : 06/29/2022
The **tabs** below show the features that are available, by environment, for Mic
### [**Azure (AKS)**](#tab/azure-aks)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
|--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Compliance | Docker CIS | VM, VMSS | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds |
| Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (workload) | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Runtime protection| Threat detection (workload) | AKS | Preview | - | Defender profile | Defender for Containers | Commercial clouds |
| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

### [**AWS (EKS)**](#tab/aws-eks)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | - | - | - | - | - |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | EKS | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
The **tabs** below show the features that are available, by environment, for Mic
### [**GCP (GKE)**](#tab/gcp-gke)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | GCP VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | - | - | - | - | - | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | GKE | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

### [**On-prem/IaaS (Arc)**](#tab/iaas-arc)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
-| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
-| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features that are available, by environment, for Mic
| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |

<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your clusters, you should onboard them to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-azure-digital-twins-explorer.md
# Mandatory fields. Title: Azure Digital Twins Explorer
+ Title: Azure Digital Twins Explorer (preview)
-description: Learn about the capabilities and purpose of Azure Digital Twins Explorer and when it can be a useful tool for visualizing digital models, twins, and graphs.
+description: Learn about the capabilities and purpose of Azure Digital Twins Explorer (preview) and when it can be a useful tool for visualizing digital models, twins, and graphs.
Last updated 02/28/2022
# Azure Digital Twins Explorer (preview)
-This article contains information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+This article contains information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer (preview)](how-to-use-azure-digital-twins-explorer.md).
*Azure Digital Twins Explorer* is a developer tool for visualizing and interacting with the data in your Azure Digital Twins instance, including your [models](concepts-models.md) and [twin graph](concepts-twins-graph.md).
->[!NOTE]
->This tool is currently in public preview.
- Here's a view of the explorer window, showing models and twins that have been populated for a sample graph: :::image type="content" source="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png" alt-text="Screenshot of Azure Digital Twins Explorer showing sample models and twins." lightbox="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png":::
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
This pattern reads from the room twin directly, rather than the IoT device, whic
>[!NOTE]
>There is currently a known issue in Cloud Shell affecting these command groups: `az dt route`, `az dt model`, `az dt twin`.
>
- >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Troubleshoot known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
+ >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Azure Digital Twins known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
```azurecli-interactive
az dt route create --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
# Mandatory fields. Title: Use Azure Digital Twins Explorer
+ Title: Use Azure Digital Twins Explorer (preview)
-description: Learn how to use all the features of Azure Digital Twins Explorer
+description: Learn how to use all the features of Azure Digital Twins Explorer (preview)
Last updated 02/24/2022
# Use Azure Digital Twins Explorer (preview)
-[Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) is a tool for visualizing and working with Azure Digital Twins. This article describes the features of Azure Digital Twins Explorer, and how to use them to manage the data in your Azure Digital Twins instance. You can interact with the Azure Digital Twins Explorer using clicks or [keyboard shortcuts](#accessibility-and-advanced-settings).
-
->[!NOTE]
->This tool is currently in public preview.
+[Azure Digital Twins Explorer (preview)](concepts-azure-digital-twins-explorer.md) is a tool for visualizing and working with Azure Digital Twins. This article describes the features of Azure Digital Twins Explorer, and how to use them to manage the data in your Azure Digital Twins instance. You can interact with the Azure Digital Twins Explorer using clicks or [keyboard shortcuts](#accessibility-and-advanced-settings).
## How to access
digital-twins Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-known-issues.md
Last updated 02/28/2022
-# Troubleshoot Azure Digital Twins known issues
+# Azure Digital Twins known issues
This article provides information about known issues associated with Azure Digital Twins.
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get right-sized Azure recommendation for your on-premises SQL Server database(s) description: Learn how to use the Azure SQL migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. --++
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Title: What is Azure Database Migration Service? description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms. --++
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
Title: Azure Database Migration Service tools matrix description: Learn about the services and tools available to migrate databases and to support various phases of the migration process. --++
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Title: Migrate SSIS packages to SQL Managed Instance
description: Learn how to migrate SQL Server Integration Services (SSIS) packages and projects to an Azure SQL Managed Instance using the Azure Database Migration Service or the Data Migration Assistant. --++
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
Title: Redeploy SSIS packages to SQL single database
description: Learn how to migrate or redeploy SQL Server Integration Services packages and projects to Azure SQL Database single database using the Azure Database Migration Service and Data Migration Assistant. --++
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Title: Monitor migration activity - Azure Database Migration Service description: Learn to use the Azure Database Migration Service to monitor migration activity. --++
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline"
description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. --++
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online"
description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. --++
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Title: "PowerShell: Migrate SQL Server to SQL Database"
description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service. --++
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Title: "Known issues: Online migrations from PostgreSQL to Azure Database for Po
description: Learn about known issues and migration limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL using the Azure Database Migration Service. --++
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Title: Known issues and limitations with online migrations to Azure SQL Managed Instance description: Learn about known issues/migration limitations associated with online migrations to Azure SQL Managed Instance. --++
dms Known Issues Dms Hybrid Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-dms-hybrid-mode.md
Title: Known issues/migration limitations with using Hybrid mode description: Learn about known issues/migration limitations with using Azure Database Migration Service in hybrid mode. --++
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
Title: "Known issues: Migrate from MongoDB to Azure Cosmos DB"
description: Learn about known issues and migration limitations with migrations from MongoDB to Azure Cosmos DB using the Azure Database Migration Service. --++
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Title: "Issues connecting source databases"
description: Learn about how to troubleshoot known issues/errors associated with connecting Azure Database Migration Service to source databases. --++
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Title: "Common issues - Azure Database Migration Service" description: Learn about how to troubleshoot common known issues/errors associated with using Azure Database Migration Service. --++
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Title: Migrate databases at scale using Azure PowerShell / CLI description: Learn how to use Azure PowerShell or CLI to migrate databases at scale using the capabilities of Azure SQL migration extension in Azure Data Studio with Azure Database Migration Service. --++
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate using Azure Data Studio description: Learn how to use the Azure SQL migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service. --++
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Title: Prerequisites for Azure Database Migration Service description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations. --++
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
Title: "Quickstart: Create a hybrid mode instance with Azure portal"
description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. --++
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
Title: "Quickstart: Create an instance using the Azure portal"
description: Use the Azure portal to create an instance of Azure Database Migration Service. --++
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations"
description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations. --++
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
Title: Network topologies for SQL Managed Instance migrations description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service.--++
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
Title: Database migration scenario status description: Learn about the status of the migration scenarios supported by Azure Database Migration Service.--++ Last updated 06/13/2022
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate Azure DB for PostgreSQL to Azure DB for PostgreSQL onl
description: Learn to perform an online migration from one Azure DB for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB API for MongoDB"
description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB online by using Azure Database Migration Service. --++
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB"
description: Migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB offline, by using Azure Database Migration Service. --++
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate PostgreSQL to Azure DB for PostgreSQL online via the A
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
Title: "Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online via
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the CLI. --++
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
Title: "Tutorial: Migrate RDS PostgreSQL online to Azure Database for PostgreSQL
description: Learn to perform an online migration from RDS PostgreSQL to Azure Database for PostgreSQL by using the Azure Database Migration Service. --++
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using
description: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with Azure Database Migration Service (Preview) --++
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online using
description: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with Azure Database Migration Service --++
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"
description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service. --++
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"
description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service. --++
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Title: "Tutorial: Migrate SQL Server to SQL Managed Instance"
description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service. --++
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 06/02/2022 Last updated : 06/29/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is a new service that enables you to query Azure DNS
Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
+You don't need to change any DNS client settings on your virtual machines (VMs) to use the Azure DNS Private Resolver.
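For readers who script their deployments, the resolver can also be created from the command line. This is a minimal sketch assuming the `dns-resolver` CLI extension; the names, region, and resource IDs are placeholders:

```azurecli
# The resolver commands ship as a CLI extension
az extension add --name dns-resolver

# Create a resolver inside an existing virtual network
az dns-resolver create --name mydnsresolver --resource-group myRG --location westcentralus \
    --id "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myvnet"
```

Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are then added with the `az dns-resolver inbound-endpoint`, `az dns-resolver outbound-endpoint`, and `az dns-resolver forwarding-ruleset` command groups.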
+ The DNS query process when using an Azure DNS Private Resolver is summarized below: 1. A client in a virtual network issues a DNS query.
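Note that each inbound or outbound endpoint requires a dedicated subnet that's delegated to the resolver service. Here's a minimal Azure CLI sketch of that delegation; the names (`myResourceGroup`, `myVNet`, `snet-dns-inbound`) are placeholders, not part of the original article:

```azurecli
# Create a dedicated subnet for a resolver endpoint and delegate it
# to the Azure DNS Private Resolver service (placeholder names).
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name snet-dns-inbound \
  --address-prefixes 10.0.1.0/28 \
  --delegations Microsoft.Network/dnsResolvers
```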
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Azure DNS Private Zones provide name resolution within a virtual network and bet
In this scenario, you have a virtual network in Azure that has many resources in it, including virtual machines. Your requirement is to resolve any resources in the virtual network using a specific domain name (DNS zone). You also need the naming resolution to be private and not accessible from the internet. Lastly, you need Azure to automatically register VMs into the DNS zone.
-This scenario is shown below. We have a virtual network named "A" containing two VMs (VNETA-VM1 and VNETA-VM2). Each VM has a private IP associated. Once you've create a private zone, for example `contoso.com` and link virtual network "A" as a registration virtual network. Azure DNS will automatically create two A records in the zone referencing the two VMs. DNS queries from VNETA-VM1 can now resolve `VNETA-VM2.contoso.com` and will receive a DNS response that contains the private IP address of VNETA-VM2.
+This scenario is shown below. We have a virtual network named "A" containing two VMs (VNETA-VM1 and VNETA-VM2). Each VM has a private IP address associated with it. Once you've created a private zone, for example, `contoso.com`, and linked virtual network "A" as a registration virtual network, Azure DNS will automatically create two A records in the zone referencing the two VMs. DNS queries from VNETA-VM1 can now resolve `VNETA-VM2.contoso.com` and will receive a DNS response that contains the private IP address of VNETA-VM2.
You can also do a reverse DNS query (PTR) for the private IP of VNETA-VM1 (10.0.0.1) from VNETA-VM2. The DNS response will contain the name VNETA-VM1, as expected. ![Single Virtual network resolution](./media/private-dns-scenarios/single-vnet-resolution.png)
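As a minimal Azure CLI sketch of this setup (the resource group and link names are placeholders), you can create the private zone and link virtual network "A" with auto-registration enabled:

```azurecli
# Create the private zone.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name contoso.com

# Link virtual network "A" as a registration virtual network so that
# A records for its VMs are created automatically.
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --name vneta-link \
  --virtual-network VNETA \
  --registration-enabled true
```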
event-grid Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security.md
By default, topic and domain are accessible from the internet as long as the req
For step-by-step instructions to configure IP firewall for topics and domains, see [Configure IP firewall](configure-firewall.md). ++ ## Private endpoints You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to your topics and domains securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. A private endpoint is a special network interface for an Azure service in your VNet. When you create a private endpoint for your topic or domain, it provides secure connectivity between clients on your VNet and your Event Grid resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Event Grid service uses a secure private link.
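As a minimal sketch, the following Azure CLI commands create a private endpoint for an existing topic; the resource names (`mytopic`, `myVNet`, `mySubnet`) are placeholders, and the `--group-id` for topics is `topic` (`domain` for domains):

```azurecli
# Look up the topic's resource ID.
topicid=$(az eventgrid topic show \
  --name mytopic \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Create a private endpoint for the topic in your VNet.
az network private-endpoint create \
  --name mytopic-pe \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id $topicid \
  --group-id topic \
  --connection-name mytopic-pe-connection
```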
The following table describes the various states of the private endpoint connect
For publishing to be successful, the private endpoint connection state should be **approved**. If a connection is rejected, it can't be approved using the Azure portal. The only possibility is to delete the connection and create a new one instead.
-## Pricing and quotas
-**Private endpoints** is available in both basic and premium tiers of Event Grid. Event Grid allows up to 64 private endpoint connections to be created per topic or domain.
-**IP Firewall** feature is available in both basic and premium tiers of Event Grid. We allow up to 16 IP Firewall rules to be created per topic or domain.
+## Quotas and limits
+There's a limit on the number of IP firewall rules and private endpoint connections per topic or domain. See [Event Grid quotas and limits](quotas-limits.md).
## Next steps You can configure IP firewall for your Event Grid resource to restrict access over the public internet from only a select set of IP Addresses or IP Address ranges. For step-by-step instructions, see [Configure IP firewall](configure-firewall.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications | | **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported| GlobalConnect, Megaport, Telenor, Telia Carrier | | **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo |
+| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |
| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Megaport, NextDC | | **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo | | **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Tata Communications |
The following table shows connectivity locations and the service providers for e
| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC | | **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | n/a | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> **We are currently unable to support new ExpressRoute circuits in Tokyo. Please create new circuits in Tokyo2 or Osaka.* |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications | | **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 | | **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin| | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
+
+ Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Bicep'
+description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Bicep.
+++ Last updated : 06/28/2022+++++
+# Quickstart: Secure your virtual hub using Azure Firewall Manager - Bicep
+
+In this quickstart, you use Bicep to secure your virtual hub using Azure Firewall Manager. The deployed firewall has an application rule that allows connections to `www.microsoft.com`. Two Windows Server 2019 virtual machines are deployed to test the firewall. One jump server is used to connect to the workload server. From the workload server, you can only connect to `www.microsoft.com`.
++
+For more information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a secured virtual hub using Azure Firewall Manager, along with the necessary resources to support the scenario.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fwm-docs-qs/).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/virtualWans**](/azure/templates/microsoft.network/virtualWans)
+- [**Microsoft.Network/virtualHubs**](/azure/templates/microsoft.network/virtualHubs)
+- [**Microsoft.Network/firewallPolicies**](/azure/templates/microsoft.network/firewallPolicies)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-user\>** with the administrator login username for the servers. You'll be prompted to enter **adminPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+Now, test the firewall rules to confirm that it works as expected.
+
+1. From the Azure portal, review the network settings for the **Workload-Srv** virtual machine and note the private IP address.
+2. Connect via remote desktop to the **Jump-Srv** virtual machine, and sign in. From there, open a remote desktop connection to the **Workload-Srv** private IP address.
+3. Open Internet Explorer and browse to `www.microsoft.com`.
+4. Select **OK** > **Close** on the Internet Explorer security alerts.
+
+ You should see the Microsoft home page.
+
+5. Browse to `www.google.com`.
+
+ You should be blocked by the firewall.
+
+Now that you've verified the firewall rules are working, you can browse to the one allowed FQDN, but not to any others.
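You can also inspect the deployed application rule from the command line. Here's a minimal sketch, assuming the firewall policy name from your deployment (a placeholder below) and that the firewall policy commands are available in your CLI version:

```azurecli
# List the rule collection groups on the deployed firewall policy to
# confirm the application rule that allows www.microsoft.com.
az network firewall policy rule-collection-group list \
  --policy-name <firewall-policy-name> \
  --resource-group exampleRG \
  --output json
```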
+
+## Clean up resources
+
+When you no longer need the resources that you created with the firewall, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. Deleting the resource group removes the firewall and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about security partner providers](trusted-security-partners.md)
frontdoor Rule Set Server Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rule-set-server-variables.md
When you use [Rule set actions](front-door-rules-engine-actions.md), you can use
| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`.<br/> To access this server variable in a match condition, use [Request URL](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-url).| | `ssl_protocol` | The protocol of an established TLS connection.<br/> To access this server variable in a match condition, use [SSL protocol](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#ssl-protocol).| | `server_port` | The port of the server that accepted a request.<br/> To access this server variable in a match condition, use [Server port](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#server-port).|
-| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `uri_path` value will be `/article.aspx`.<br/> To access this server variable in a match condition, use [Request path](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-path).|
+| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `url_path` value will be `/article.aspx`.<br/> To access this server variable in a match condition, use [Request path](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-path).|
## Server variable format
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md
Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI" description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook. Previously updated : 08/17/2021 Last updated : 06/29/2022 ++ # Tutorial: Route policy state change events to Event Grid with Azure CLI
uses the `Microsoft.PolicyInsights.PolicyStates` topic type for Azure Policy sta
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell
-az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<SubscriptionID>" --resource-group "<resource_group_name>"
+az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<subscriptionID>" --resource-group "<resource_group_name>"
+```
+
+If your Event Grid system topic is scoped to a management group, the Azure CLI `--source` parameter syntax is slightly different. Here's an example:
+
+```azurecli-interactive
+az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/tenants/<tenantID>/providers/Microsoft.Management/managementGroups/<management_group_name>" --resource-group "<resource_group_name>"
``` ## Create a message endpoint
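Once your message endpoint is deployed, you can subscribe the system topic to it. A minimal Azure CLI sketch, assuming a hypothetical webhook URL for your endpoint:

```azurecli
# Subscribe the PolicyStateChanges system topic to the webhook endpoint.
az eventgrid system-topic event-subscription create \
  --name PolicyStateChangesSub \
  --resource-group "<resource_group_name>" \
  --system-topic-name PolicyStateChanges \
  --endpoint https://<your-endpoint-site>/api/updates
```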
groups** definition. This policy definition identifies resource groups that are
configured during policy assignment. Run the following command to create a policy assignment scoped to the resource group you created to
-hold the event grid topic:
+hold the Event Grid topic:
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell
-az policy assignment create --name 'requiredtags-events' --display-name 'Require tag on RG' --scope '<ResourceGroupScope>' --policy '<policy definition ID>' --params '{ "tagName": { "value": "EventTest" } }'
+az policy assignment create --name 'requiredtags-events' --display-name 'Require tag on RG' --scope '<ResourceGroupScope>' --policy '<policy definition ID>' --params '{ \"tagName\": { \"value\": \"EventTest\" } }'
``` The preceding command uses the following information:
hdinsight Apache Hadoop Mahout Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md
description: Learn how to use the Apache Mahout machine learning library to gene
Previously updated : 05/14/2020 Last updated : 06/29/2022 # Generate recommendations using Apache Mahout in Azure HDInsight
hdinsight Apache Hadoop On Premises Migration Best Practices Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-infrastructure.md
description: Learn infrastructure best practices for migrating on-premises Hadoo
Previously updated : 12/06/2019 Last updated : 06/29/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - infrastructure best practices
For more information, see the article [Connect HDInsight to your on-premises net
## Next steps
-Read the next article in this series: [Storage best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-storage.md).
+Read the next article in this series: [Storage best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-storage.md).
hdinsight Troubleshoot Invalidnetworkconfigurationerrorcode Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
Title: InvalidNetworkConfigurationErrorCode error - Azure HDInsight
description: Various reasons for failed cluster creations with InvalidNetworkConfigurationErrorCode in Azure HDInsight Previously updated : 01/12/2021 Last updated : 06/29/2022 # Cluster creation fails with InvalidNetworkConfigurationErrorCode in Azure HDInsight
hdinsight Hdinsight Hadoop Stack Trace Error Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-stack-trace-error-messages.md
description: Index of Hadoop stack trace error messages in Azure HDInsight. Find
Previously updated : 01/03/2020 Last updated : 06/29/2022 # Index of Apache Hadoop in HDInsight troubleshooting articles
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations
description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations. Previously updated : 04/20/2020 Last updated : 06/29/2022 # Migrate to granular role-based access for cluster configurations
hdinsight Gateway Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/gateway-best-practices.md
Title: Gateway deep dive and best practices for Apache Hive in Azure HDInsight
description: Learn how to navigate the best practices for running Hive queries over the Azure HDInsight gateway Previously updated : 04/01/2020 Last updated : 06/29/2022 # Gateway deep dive and best practices for Apache Hive in Azure HDInsight
expect delays when retrieving the same results via external tools.
* [Apache Beeline on HDInsight](../hadoop/apache-hadoop-use-hive-beeline.md) * [HDInsight Gateway Timeout Troubleshooting Steps](./troubleshoot-gateway-timeout.md) * [Virtual Networks for HDInsight](../hdinsight-plan-virtual-network-deployment.md)
-* [HDInsight with Express Route](../connect-on-premises-network.md)
+* [HDInsight with Express Route](../connect-on-premises-network.md)
hdinsight Troubleshoot Gateway Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-gateway-timeout.md
Title: Exception when running queries from Apache Ambari Hive View in Azure HDIn
description: Troubleshooting steps when running Apache Hive queries through Apache Ambari Hive View in Azure HDInsight. Previously updated : 12/23/2019 Last updated : 06/29/2022 # Exception when running queries from Apache Ambari Hive View in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Apache Spark Troubleshoot Illegalargumentexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-illegalargumentexception.md
Title: IllegalArgumentException error for Apache Spark - Azure HDInsight
description: IllegalArgumentException for Apache Spark activity in Azure HDInsight for Azure Data Factory Previously updated : 07/29/2019 Last updated : 06/29/2022 # Scenario: IllegalArgumentException for Apache Spark activity in Azure HDInsight
Make sure the application jar is stored on the default/primary storage for the H
## Next steps
hdinsight Apache Storm Example Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-example-topology.md
description: A list of example Storm topologies created and tested with Apache S
Previously updated : 12/27/2019 Last updated : 06/29/2022 # Example Apache Storm topologies and components for Apache Storm on HDInsight
healthcare-apis Get Started With Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-started-with-health-data-services.md
After the Azure Health Data Services resource group is deployed, you can enter t
To be guided through these steps, see [Deploy Azure Health Data Services workspace using Azure portal](healthcare-apis-quickstart.md).
-> [!Note]
+> [!NOTE]
> You can provision multiple data services within a workspace, and by design, they work seamlessly with one another. With the workspace, you can organize all your Azure Health Data Services instances and manage certain configuration settings that are shared among all the underlying datasets and services where it's applicable. [![Screenshot of the Azure Health Data Services workspace.](media/health-data-services-workspace.png)](media/health-data-services-workspace.png#lightbox)
For more information, see [Get started with the DICOM service](./../healthcare-a
MedTech service transforms device data into FHIR-based observation resources and then persists the transformed messages into Azure Health Data Services FHIR service. This allows for a unified approach to health data access, standardization, and trend capture enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
-To ensure that your MedTech service works properly, it must have granted access permissions to the Azure Event Hub and FHIR service. The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this Event Hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](./../healthcare-apis/authentication-authorization.md)
+To ensure that your MedTech service works properly, it must be granted access permissions to the Azure Event Hubs and FHIR service. The Azure Event Hubs Data Receiver role allows the MedTech service that's assigned this role to receive data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](./../healthcare-apis/authentication-authorization.md).
You can also do the following: - Create a new FHIR service or use an existing one in the same or different workspace
-- Create a new Event Hub or use an existing one
-- Assign roles to allow the MedTech service to access [Event Hub](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access) and [FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service)
-- Send data to the Event Hub, which is associated with the MedTech service
+- Create a new event hub or use an existing one
+- Assign roles to allow the MedTech service to access [Event Hubs](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access) and [FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service)
+- Send data to the event hub, which is associated with the MedTech service
For more information, see [Get started with the MedTech service](./../healthcare-apis/iot/get-started-with-iot.md).
This article described the basic steps to get started using Azure Health Data Se
>[Frequently asked questions about Azure Health Data Services](healthcare-apis-faqs.md) FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.-
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: MedTech service in the Azure portal - Azure Health Data Services
-description: In this article, you'll learn how to deploy MedTech service in the Azure portal.
+ Title: Deploy the MedTech service in the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service in the Azure portal.
Previously updated : 04/07/2022 Last updated : 06/29/2022
-# Deploy MedTech service in the Azure portal
+# Deploy the MedTech service in the Azure portal
In this quickstart, you'll learn how to deploy the MedTech service in the Azure portal. The MedTech service will enable you to ingest data from Internet of Things (IoT) devices into your Fast Healthcare Interoperability Resources (FHIR&#174;) service.
It's important that you have the following prerequisites completed before you be
>* Two MedTech services accessing the same device message event hub. >* A MedTech service and a storage writer application accessing the same device message event hub.
-## Deploy MedTech service
+If you already have an active Azure account, you can use this [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json) button to deploy a MedTech service that will include the following resources and permissions:
+
+ * An Azure Event Hubs Namespace and device message event hub (the event hub is named: **devicedata**).
+ * An Azure event hub sender role (the sender role is named: **devicedatasender**).
+ * An Azure Health Data Services workspace.
+ * An Azure Health Data Services FHIR service.
+ * An Azure Health Data Services MedTech service including the necessary system managed identity permissions to the device message event hub and FHIR service.
+
+When the Azure portal launches, the following fields must be filled out:
+ * **Subscription** - Choose the Azure subscription you would like to use for the deployment.
+ * **Resource Group** - Choose an existing Resource Group or create a new Resource Group.
+ * **Region** - The Azure region of the Resource Group used for the deployment. This field will auto-fill based on the Resource Group region.
+ * **Basename** - Used as the base for the names of the Azure services to be deployed.
+ * **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (could be the same or different region than your Resource Group).
+
+Leave the **Device Mapping** and **Destination Mapping** fields with their default values.
+
+Select the **Review + create** button once the fields are filled out.
++
+After the validation has passed, select the **Create** button to begin the deployment.
++
+After a successful deployment, you'll need to complete a few remaining configurations for a fully functional MedTech service:
+ * Provide a working device mapping file. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
+ * Provide a working destination mapping file. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
+ * Use the Shared access policies (SAS) key (**devicedatasender**) for connecting your device or application to the MedTech service device message event hub (**devicedata**); a CLI sketch for retrieving this connection string follows this list. For more information, see [Connection string for a specific event hub in a namespace](../../event-hubs/event-hubs-get-connection-string.md#connection-string-for-a-specific-event-hub-in-a-namespace).
+
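Here's a minimal Azure CLI sketch for retrieving that connection string; the resource group and namespace names are placeholders from your deployment:

```azurecli
# Get the primary connection string for the devicedatasender SAS policy
# on the devicedata event hub.
az eventhubs eventhub authorization-rule keys list \
  --resource-group <resource-group> \
  --namespace-name <event-hubs-namespace> \
  --eventhub-name devicedata \
  --name devicedatasender \
  --query primaryConnectionString \
  --output tsv
```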
+## Deploy the MedTech service
1. Sign in the [Azure portal](https://portal.azure.com), and then enter your Health Data Services workspace resource name in the **Search** bar field.
It's important that you have the following prerequisites completed before you be
![Screenshot of add MedTech services.](media/add-iot-connector.png#lightbox)
-## Configure MedTech service to ingest data
+## Configure the MedTech service to ingest data
Under the **Basics** tab, complete the required fields under **Instance details**.
Under the **Basics** tab, complete the required fields under **Instance details*
5. Select **Next: Device mapping**.
-## Configure Device mapping properties
+## Configure the Device mapping properties
> [!TIP] > The IoMT Connector Data Mapper is an open source tool to visualize the mapping configuration for normalizing a device's input data, and then transform it to FHIR resources. Developers can use this tool to edit and test Devices and FHIR destination mappings, and to export the data to upload to an MedTech service in the Azure portal. This tool also helps developers understand their device's Device and FHIR destination mapping configurations.
Under the **Basics** tab, complete the required fields under **Instance details*
2. Select **Next: Destination >** to configure the destination properties associated with your MedTech service.
-## Configure FHIR destination mapping properties
+## Configure the FHIR destination mapping properties
Under the **Destination** tab, enter the destination properties associated with the MedTech service.
Under the **Tags** tab, enter the tag properties associated with the MedTech ser
Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the event hub and FHIR service.
-## Granting MedTech service access
+## Granting the MedTech service access
To ensure that your MedTech service works properly, it must be granted access permissions to the event hub and FHIR service.
For more information about authoring access to Event Hubs resources, see [Author
![Screenshot of FHIR service added role assignment message.](media/fhir-service-added-role-assignment.png#lightbox)
- For more information about assigning roles to the FHIR service, see [Configure Azure RBAC](.././configure-azure-rbac.md).
+ For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md).
## Next steps
-In this article, you've learned how to deploy a MedTech service in the Azure portal. For an overview of MedTech service, see
+In this article, you've learned how to deploy a MedTech service in the Azure portal. To learn more about the device and FHIR destination mapping files for the MedTech service, see
>[!div class="nextstepaction"]
->[MedTech service overview](iot-connector-overview.md)
+>[How to use Device mappings](how-to-use-device-mappings.md)
+>
+>[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
You can create a workspace from the [Azure portal](../healthcare-apis-quickstart
> [!NOTE] > There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription.
-## Create the FHIR service and an Event Hub
+## Create the FHIR service and an event hub
-The MedTech service works with the Azure Event Hub and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [Event Hub](../../event-hubs/event-hubs-create.md) or use an existing one.
+The MedTech service works with Azure Event Hubs and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [event hub](../../event-hubs/event-hubs-create.md) or use an existing one.
## Create a MedTech service in the workspace
You can create a MedTech service from the [Azure portal](deploy-iot-connector-in
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [DICOM service](../dicom/deploy-dicom-services-in-azure.md) in the workspace.
-## Assign roles to allow MedTech service to access Event Hub
+## Assign roles to allow MedTech service to access Event Hubs
-By design, the MedTech service retrieves data from the specified Event Hub using the system-managed identity. For more information on how to assign the role to the MedTech service from [Event Hub](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access).
+By design, the MedTech service retrieves data from the specified event hub using the system-managed identity. For more information on how to assign the role to the MedTech service, see [Granting the MedTech service access](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access).
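A minimal Azure CLI sketch of that role assignment, assuming placeholder names and that you've looked up the object ID of the MedTech service's managed identity:

```azurecli
# Look up the event hub's resource ID to use as the role assignment scope.
eventhubid=$(az eventhubs eventhub show \
  --resource-group <resource-group> \
  --namespace-name <event-hubs-namespace> \
  --name devicedata \
  --query id --output tsv)

# Grant the MedTech service's managed identity the receiver role.
az role assignment create \
  --assignee-object-id <medtech-managed-identity-object-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Receiver" \
  --scope $eventhubid
```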
## Assign roles to allow MedTech service to access FHIR service
The MedTech service persists the data to the FHIR store using the system-managed
## Sending data to the MedTech service
-You can send data to the Event Hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
+You can send data to the event hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
## MedTech service mappings, data flow, ML, Power BI, and Teams notifications
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
ssh raspberrypi3 -l root ```
- 1. Create or open the `du-config.jso` file for editing by using:
+ 1. Create or open the `du-config.json` file for editing by using:
```bash nano /adu/du-config.json
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/) - Review the [Key Vault security overview](../general/security-features.md)
-.md)
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.3904) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
+ * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
There are multiple ways to create a workspace:
## <a name="sub-resources"></a> Sub resources
-These sub resources are the main resources that are made in the AML workspace.
+These sub resources are the main resources that are made in the AzureML workspace.
-* VMs: provide computing power for your AML workspace and are an integral part in deploying and training models.
+* VMs: provide computing power for your AzureML workspace and are an integral part in deploying and training models.
* Load Balancer: a network load balancer is created for each compute instance and compute cluster to manage traffic even while the compute instance/cluster is stopped. * Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks. * Bandwidth: encapsulates all outbound data transfers across regions.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
If you're an owner of a workspace, you can add and remove roles for the workspac
- [REST API](../role-based-access-control/role-assignments-rest.md) - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
+## Use Azure AD security groups to manage workspace access
+
+You can use Azure AD security groups to manage user access to a workspace. This approach has the following benefits:
+ * Team or project leaders can manage user access to the workspace as security group owners, without needing the Owner role on the workspace resource directly.
+ * You can organize, manage, and revoke users' permissions on the workspace and other resources as a group, without having to manage permissions on a user-by-user basis.
+ * Using Azure AD groups helps you to avoid reaching the [subscription limit](https://docs.microsoft.com/azure/role-based-access-control/troubleshooting#azure-role-assignments-limit) on role assignments.
+
+To use Azure AD security groups (a CLI sketch follows these steps):
+ 1. [Create a security group](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal).
+ 2. [Add a group owner](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners). This user has permissions to add or remove group members. The group owner isn't required to be a group member, or to have a direct RBAC role on the workspace.
+ 3. Assign the group an RBAC role on the workspace, such as AzureML Data Scientist, Reader, or Contributor.
+ 4. [Add group members](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-members-azure-portal). The members consequently gain access to the workspace.
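Here's a minimal Azure CLI sketch of steps 1 and 3, assuming hypothetical names (`ml-team`, `myworkspace`, `myResourceGroup`) and the CLI v2 `ml` extension; current CLI versions return the group's object ID in the `id` property:

```azurecli
# Create the security group and capture its object ID.
groupid=$(az ad group create \
  --display-name ml-team \
  --mail-nickname ml-team \
  --query id --output tsv)

# Look up the workspace resource ID to use as the role assignment scope.
workspaceid=$(az ml workspace show \
  --name myworkspace \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Assign the group a built-in role on the workspace.
az role assignment create \
  --assignee-object-id $groupid \
  --assignee-principal-type Group \
  --role "AzureML Data Scientist" \
  --scope $workspaceid
```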
++ ## Create custom role If the built-in roles are insufficient, you can create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
try:
ml_client = MLClient.from_config(credential) except Exception as ex: print(ex)
- # Enter details of your AML workspace
+ # Enter details of your AzureML workspace
subscription_id = "<SUBSCRIPTION_ID>" resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
ml_client = MLClient(credential, subscription_id, resource_group, workspace) ```
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Check the Azure CLI extensions you've installed:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_list":::
-Ensure no conflicting extension using the `ml` namespace is installed, including the `azure-cli-ml` extension:
+Remove any existing installation of the `ml` extension and also the CLI v1 `azure-cli-ml` extension:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_remove":::
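The referenced snippet amounts to the following commands (a minimal sketch; `az extension remove` reports an error you can ignore if the extension isn't installed):

```azurecli
az extension remove --name azure-cli-ml
az extension remove --name ml
az extension add --name ml
```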
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
To use automated ML, skip to [Add the Azure ML SDK with AutoML](#add-the-azure-m
![Azure Machine Learning SDK for Databricks](./media/how-to-configure-environment/amlsdk-withoutautoml.jpg) ## Add the Azure ML SDK with AutoML to Databricks
-If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AML SDK.
+If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AzureML SDK.
``` %pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
## Limitations -- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AML workspace.
+- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AzureML workspace.
- If you have an Azure Policy that restricts the creation of Public IP addresses, then AKS cluster creation will fail. AKS requires a Public IP for [egress traffic](../aks/limit-egress-traffic.md). The egress traffic article also provides guidance to lock down egress traffic from the cluster through the Public IP, except for a few fully qualified domain names. There are 2 ways to enable a Public IP: - The cluster can use the Public IP created by default with the BLB or SLB, Or - The cluster can be created without a Public IP and then a Public IP is configured with a firewall with a user defined route. For more information, see [Customize cluster egress with a user-defined-route](../aks/egress-outboundtype.md).
- The AML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
+ The AzureML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
- To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster. -- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AML control plane IP ranges for the AKS cluster. The AML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
+- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
Authorized IP ranges only works with Standard Load Balancer.
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
The `train.py` file contains a normal python function, which performs the traini
#### Define component using python function
-After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AML pipelines.
+After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AzureML pipelines.
:::code language="python" source="~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train_component.py":::
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
Last updated 05/24/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](./v1/how-to-create-register-datasets.md)
-> * [v2 (current version)](how-to-create-register-datasets.md)
+> * [v2 (current version)](how-to-create-register-data-assets.md)
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [CLI v2](../../includes/machine-learning-CLI-v2.md)]
ml_client.data.create_or_update(my_data)
## Next steps -- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-ingest-adf.md
The following Python code demonstrates how to create a datastore that connects t
```python ws = Workspace.from_config()
-adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AML
+adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AzureML
subscription_id=os.getenv("ADL_SUBSCRIPTION", "<ADLS account subscription ID>") # subscription id of ADLS account resource_group=os.getenv("ADL_RESOURCE_GROUP", "<ADLS account resource group>") # resource group of ADLS account
from azureml.core import Workspace, Datastore, Dataset
from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig
-# retrieve data via AML datastore
+# retrieve data via AzureML datastore
datastore = Datastore.get(ws, adlsgen2_datastore) datastore_path = [(datastore, '/data/prepared-data.csv')]
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
To enable debugging, make the following changes to the Python script(s) used by
parser.add_argument('--remote_debug', action='store_true') parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AML compute target '
+ help=f'Defines how much time the AzureML compute target '
f'will await a connection from a debugger client (VSCODE).') parser.add_argument('--remote_debug_client_ip', type=str, help=f'Defines IP Address of VS Code client')
parser.add_argument("--output_train", type=str, help="output_train directory")
parser.add_argument('--remote_debug', action='store_true') parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AML compute target '
+ help=f'Defines how much time the AzureML compute target '
f'will await a connection from a debugger client (VSCODE).') parser.add_argument('--remote_debug_client_ip', type=str, help=f'Defines IP Address of VS Code client')
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential). ```python
- # enter details of your AML workspace
+ # enter details of your AzureML workspace
subscription_id = "<SUBSCRIPTION_ID>" resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
``` ```python
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
from azureml.core.webservice import AksWebservice, Webservice
# If deploying to a cluster configured for dev/test, ensure that it was created with enough # cores and memory to handle this deployment configuration. Note that memory is also used by
-# things such as dependencies and AML components.
+# things such as dependencies and AzureML components.
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True, autoscale_min_replicas=1,
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
run = experiment.submit(config=src)
Once you have a trained model, you can save/serialize it to a `.pkl` file with `pickle.dump()` and `pickle.load()`. You can also use `joblib.dump()` and `joblib.load()`.
-The following example is how you download and load a model in-memory that was trained in AML compute with `ScriptRunConfig`. This code can run in the same notebook you used the Azure ML SDK `ScriptRunConfig`.
+The following example shows how to download and load a model in-memory that was trained in AzureML compute with `ScriptRunConfig`. This code can run in the same notebook where you used the Azure ML SDK `ScriptRunConfig`.
```python import joblib
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
-    # enter details of your AML workspace
+    # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
-    workspace = "<AML_WORKSPACE_NAME>"
+    workspace = "<AZUREML_WORKSPACE_NAME>"
```
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-web-service.md
When you request a certificate, you must provide the FQDN of the address that yo
## <a id="enable"></a> Enable TLS and deploy
-**For AKS deployment**, you can enable TLS termination when you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in AML workspace. At AKS model deployment time, you can disable TLS termination with deployment configuration object, otherwise all AKS model deployment by default will have TLS termination enabled at AKS cluster create or attach time.
+**For AKS deployment**, you can enable TLS termination when you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in an AzureML workspace. At AKS model deployment time, you can disable TLS termination with the deployment configuration object; otherwise, all AKS model deployments will have TLS termination enabled by default at AKS cluster create or attach time.
For ACI deployment, you can enable TLS termination at model deployment time with deployment configuration object.
For ACI deployment, you can enable TLS termination at model deployment time with
> [!NOTE]
> The information in this section also applies when you deploy a secure web service for the designer. If you aren't familiar with using the Python SDK, see [What is the Azure Machine Learning SDK for Python?](/python/api/overview/azure/ml/intro).
-When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in AML workspace, you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, and you can use **enable_ssl** method to enable TLS.
+When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in an AzureML workspace, you can enable TLS termination with the **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, which you can use to enable TLS.
You can enable TLS either with a Microsoft certificate or a custom certificate purchased from a CA.
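As a minimal sketch (the domain label, resource group, cluster name, certificate file paths, and CNAME below are placeholders, not values from this article), enabling TLS on either configuration object might look like this:

```python
from azureml.core.compute import AksCompute

# Microsoft certificate: supply a leaf domain label and the certificate is generated for you
prov_config = AksCompute.provisioning_configuration()
prov_config.enable_ssl(leaf_domain_label="myservice")

# Custom certificate purchased from a CA: supply the PEM files and the CNAME on the certificate
attach_config = AksCompute.attach_configuration(resource_group="<RESOURCE_GROUP>",
                                                cluster_name="<CLUSTER_NAME>")
attach_config.enable_ssl(ssl_cert_pem_file="cert.pem",
                         ssl_key_pem_file="key.pem",
                         ssl_cname="www.contoso.com")
```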
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Title: Track, monitor, and analyze runs in studio
+ Title: Track, monitor, and analyze jobs in studio
-description: Learn how to start, monitor, and track your machine learning experiment runs with the Azure Machine Learning studio.
+description: Learn how to start, monitor, and track your machine learning experiment jobs with the Azure Machine Learning studio.
Previously updated : 04/28/2022 Last updated : 06/24/2022
-# Start, monitor, and track run history in studio
+# Start, monitor, and track job history in studio
-You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your runs for training and experimentation. Your ML run history is an important part of an explainable and repeatable ML development process.
+You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your jobs for training and experimentation. Your ML job history is an important part of an explainable and repeatable ML development process.
This article shows how to do the following tasks:
-* Add run display name.
+* Add job display name.
* Create a custom view.
-* Add a run description.
-* Tag and find runs.
-* Run search over your run history.
-* Cancel or fail runs.
-* Monitor the run status by email notification.
+* Add a job description.
+* Tag and find jobs.
+* Run search over your job history.
+* Cancel or fail jobs.
+* Monitor the job status by email notification.
> [!TIP]
-> * If you're looking for information on using the Azure Machine Learning SDK v1 or CLI v1, see [How to track, monitor, and analyze runs (v1)](./v1/how-to-track-monitor-analyze-runs.md).
-> * If you're looking for information on monitoring training runs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
+> * If you're looking for information on using the Azure Machine Learning SDK v1 or CLI v1, see [How to track, monitor, and analyze jobs (v1)](./v1/how-to-track-monitor-analyze-runs.md).
+> * If you're looking for information on monitoring training jobs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
> * If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
>
> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
You'll need the following items:
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-## Run Display Name
+## Job display name
-The run display name is an optional and customizable name that you can provide for your run. To edit the run display name:
+The job display name is an optional and customizable name that you can provide for your job. To edit the job display name:
-1. Navigate to the runs list.
+1. Navigate to the **Jobs** list.
-2. Select the run to edit the display name in the run details page.
+1. Select the job to edit.
-3. Select the **Edit** button to edit the run display name.
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/select-job.png" alt-text="Screenshot of Jobs list.":::
+1. Select the **Edit** button to edit the job display name.
+
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/display-name.gif" alt-text="Screenshot of how to edit the display name.":::
## Custom View
-To view your runs in the studio:
+To view your jobs in the studio:
-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
-1. Select either **All experiments** to view all the runs in an experiment or select **All runs** to view all the runs submitted in the Workspace.
+1. Select either **All experiments** to view all the jobs in an experiment or select **All jobs** to view all the jobs submitted in the Workspace.
-In the **All runs'** page, you can filter the runs list by tags, experiments, compute target and more to better organize and scope your work.
+On the **All jobs** page, you can filter the jobs list by tags, experiments, compute target, and more to better organize and scope your work.
-1. Make customizations to the page by selecting runs to compare, adding charts or applying filters. These changes can be saved as a **Custom View** so you can easily return to your work. Users with workspace permissions can edit, or view the custom view. Also, share the custom view with team members for enhanced collaboration by selecting **Share view**.
+1. Make customizations to the page by selecting jobs to compare, adding charts, or applying filters. These changes can be saved as a **Custom View** so you can easily return to your work. Users with workspace permissions can edit or view the custom view. Also, share the custom view with team members for enhanced collaboration by selecting **Share view**.
-1. To view the run logs, select a specific run and in the **Outputs + logs** tab, you can find diagnostic and error logs for your run.
+1. To view the job logs, select a specific job. In the **Outputs + logs** tab, you can find diagnostic and error logs for your job.
-
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/custom-views-2.gif" alt-text="Screenshot of how to create a custom view.":::
-## Run description
+## Job description
-A run description can be added to a run to provide more context and information to the run. You can also search on these descriptions from the runs list and add the run description as a column in the runs list.
+A job description can be added to a job to provide more context and information about the job. You can also search on these descriptions from the jobs list and add the job description as a column in the jobs list.
-Navigate to the **Run Details** page for your run and select the edit or pencil icon to add, edit, or delete descriptions for your run. To persist the changes to the runs list, save the changes to your existing Custom View or a new Custom View. Markdown format is supported for run descriptions, which allows images to be embedded and deep linking as shown below.
+Navigate to the **Job Details** page for your job and select the edit or pencil icon to add, edit, or delete descriptions for your job. To persist the changes to the jobs list, save the changes to your existing Custom View or a new Custom View. Markdown format is supported for job descriptions, which allows embedded images and deep links, as shown below.
-## Tag and find runs
+## Tag and find jobs
-In Azure Machine Learning, you can use properties and tags to help organize and query your runs for important information.
+In Azure Machine Learning, you can use properties and tags to help organize and query your jobs for important information.
* Edit tags
- You can add, edit, or delete run tags from the studio. Navigate to the **Run Details** page for your run and select the edit, or pencil icon to add, edit, or delete tags for your runs. You can also search and filter on these tags from the runs list page.
+ You can add, edit, or delete job tags from the studio. Navigate to the **Job Details** page for your job and select the edit or pencil icon to add, edit, or delete tags for your jobs. You can also search and filter on these tags from the jobs list page.
- :::image type="content" source="media/how-to-track-monitor-analyze-runs/run-tags.gif" alt-text="Screenshot: Add, edit, or delete run tags":::
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/run-tags.gif" alt-text="Screenshot of how to add, edit, or delete job tags.":::
* Query properties and tags
- You can query runs within an experiment to return a list of runs that match specific properties and tags.
+ You can query jobs within an experiment to return a list of jobs that match specific properties and tags.
- To search for specific runs, navigate to the **All runs** list. From there you have two options:
+ To search for specific jobs, navigate to the **All jobs** list. From there you have two options:
- 1. Use the **Add filter** button and select filter on tags to filter your runs by tag that was assigned to the run(s). <br><br>
+ 1. Use the **Add filter** button and select filter on tags to filter your jobs by the tags that were assigned to them. <br><br>
OR
- 1. Use the search bar to quickly find runs by searching on the run metadata like the run status, descriptions, experiment names, and submitter name.
+ 1. Use the search bar to quickly find jobs by searching on the job metadata like the job status, descriptions, experiment names, and submitter name.
-## Cancel or fail runs
+## Cancel or fail jobs
-If you notice a mistake or if your run is taking too long to finish, you can cancel the run.
+If you notice a mistake or if your job is taking too long to finish, you can cancel the job.
-To cancel a run in the studio, using the following steps:
+To cancel a job in the studio, use the following steps:
-1. Go to the running pipeline in either the **Experiments** or **Pipelines** section.
+1. Go to the running pipeline in either the **Jobs** or **Pipelines** section.
-1. Select the pipeline run number you want to cancel.
+1. Select the pipeline job number you want to cancel.
1. In the toolbar, select **Cancel**.
-## Monitor the run status by email notification
+## Monitor the job status by email notification
1. In the [Azure portal](https://portal.azure.com/), in the left navigation bar, select the **Monitor** tab.
The following notebooks demonstrate the concepts in this article:
* To learn more about the logging APIs, see the [logging API notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb).
-* For more information about managing runs with the Azure Machine Learning SDK, see the [manage runs notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb).
+* For more information about managing jobs with the Azure Machine Learning SDK, see the [manage jobs notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb).
## Next steps
-* To learn how to log metrics for your experiments, see [Log metrics during training runs](how-to-log-view-metrics.md).
+* To learn how to log metrics for your experiments, see [Log metrics during training jobs](how-to-log-view-metrics.md).
* To learn how to monitor resources and logs from Azure Machine Learning, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment

This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the FileDataset for the input training data, creating the compute target, and defining the training environment.
dataset = dataset.register(workspace=ws,
Create a compute target for your training job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
cluster_name = "gpu-cluster"
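# A hedged sketch (vm_size and max_nodes are illustrative assumptions): reuse
# the cluster if it already exists, otherwise provision a GPU-enabled cluster.
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print("Found existing compute target.")
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size="Standard_NC6",
                                                           max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
    compute_target.wait_for_completion(show_output=True)
```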
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment

This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
shutil.copy('pytorch_train.py', project_folder)
Create a compute target for your PyTorch job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
# Choose a name for your CPU cluster
```
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
To connect to the workspace, you need identifier parameters - a subscription, re
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

-#Enter details of your AML workspace
+#Enter details of your AzureML workspace
subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
-workspace = '<AML_WORKSPACE_NAME>'
+workspace = '<AZUREML_WORKSPACE_NAME>'

#connect to the workspace
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment

This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
dataset.to_path()
Create a compute target for your TensorFlow job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
cluster_name = "gpu-cluster"
```
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Virtual machine priority| Low priority virtual machines are cheaper but don't guarantee the compute nodes.
Virtual machine type| Select CPU or GPU for virtual machine type.
Virtual machine size| Select the virtual machine size for your compute.
- Min / Max nodes| To profile data, you must specify 1 or more nodes. Enter the maximum number of nodes for your compute. The default is 6 nodes for an AML Compute.
+ Min / Max nodes| To profile data, you must specify 1 or more nodes. Enter the maximum number of nodes for your compute. The default is 6 nodes for an AzureML Compute.
Advanced settings | These settings allow you to configure a user account and existing virtual network for your experiment.

Select **Create**. Creation of a new compute can take a few minutes.
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
compute_target = ws.compute_targets[compute_name]
The intermediate data between the data preparation and the automated ML step can be stored in the workspace's default datastore, so we don't need to do more than call `get_default_datastore()` on the `Workspace` object.
-After that, the code checks if the AML compute target `'cpu-cluster'` already exists. If not, we specify that we want a small CPU-based compute target. If you plan to use automated ML's deep learning features (for instance, text featurization with DNN support) you should choose a compute with strong GPU support, as described in [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
+After that, the code checks if the AzureML compute target `'cpu-cluster'` already exists. If not, we specify that we want a small CPU-based compute target. If you plan to use automated ML's deep learning features (for instance, text featurization with DNN support), you should choose a compute with strong GPU support, as described in [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
The code blocks until the target is provisioned and then prints some details of the just-created compute target. Finally, the named compute target is retrieved from the workspace and assigned to `compute_target`.
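As an illustrative sketch of that flow (assuming the v1 `azureml-core` SDK, an existing `ws` workspace object, and a placeholder VM size):

```python
from azureml.core.compute import ComputeTarget, AmlCompute

compute_name = "cpu-cluster"
if compute_name not in ws.compute_targets:
    # Provision a small CPU-based compute target if it doesn't already exist
    config = AmlCompute.provisioning_configuration(vm_size="Standard_D2_v2", max_nodes=4)
    target = ComputeTarget.create(ws, compute_name, config)
    # Blocks until the target is provisioned, then prints its details
    target.wait_for_completion(show_output=True)

compute_target = ws.compute_targets[compute_name]
```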
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
-    # enter details of your AML workspace
+    # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
-    workspace = "<AML_WORKSPACE_NAME>"
+    workspace = "<AZUREML_WORKSPACE_NAME>"
```
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
OR
* Use a **datastore**:
- You can specify AML registered datastore or if your data is publicly available, specify the public path.
+ You can specify an AzureML registered datastore or, if your data is publicly available, specify the public path.
:::image type="content" source="media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option":::
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
To track a run that is not running on Azure Machine Learning compute (from now o
> [!NOTE]
> When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.
-# [Using the Azure ML SDK v2](#tab/amlsdk)
+# [Using the Azure ML SDK v2](#tab/azuremlsdk)
You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
import mlflow

-#Enter details of your AML workspace
+#Enter details of your AzureML workspace
subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
-workspace = '<AML_WORKSPACE_NAME>'
+workspace = '<AZUREML_WORKSPACE_NAME>'

ml_client = MLClient(credential=DefaultAzureCredential(), subscription_id=subscription_id,
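Once the client is created, a minimal sketch of pointing MLflow at the workspace (assuming the workspace object returned by `ml_client.workspaces.get()` exposes the `mlflow_tracking_uri` property):

```python
azureml_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_tracking_uri)
```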
machine-learning Monitor Resource Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-resource-reference.md
The following schemas are in use by Azure Machine Learning
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlProjectId | The unique identifier of the AML project. |
-| AmlProjectName | The name of the AML project. |
+| AmlProjectId | The unique identifier of the AzureML project. |
+| AmlProjectName | The name of the AzureML project. |
| AmlLabelNames | The label class names which are created for the project. |
| AmlDataStoreName | The name of the data store where the project's data is stored. |
The following schemas are in use by Azure Machine Learning
| TimeGenerated | Time (UTC) when the log entry was generated |
| Level | The severity level of the event. Must be one of Informational, Warning, Error, or Critical. |
| ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. |
-| AmlWorkspaceId | A GUID and unique ID of the AML workspace. |
+| AmlWorkspaceId | A GUID and unique ID of the AzureML workspace. |
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlDatasetId | The ID of the AML Data Set. |
-| AmlDatasetName | The name of the AML Data Set. |
+| AmlDatasetId | The ID of the AzureML Data Set. |
+| AmlDatasetName | The name of the AzureML Data Set. |
### AmlDataStoreEvent table
The following schemas are in use by Azure Machine Learning
| TimeGenerated | Time (UTC) when the log entry was generated |
| Level | The severity level of the event. Must be one of Informational, Warning, Error, or Critical. |
| ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. |
-| AmlWorkspaceId | A GUID and unique ID of the AML workspace. |
+| AmlWorkspaceId | A GUID and unique ID of the AzureML workspace. |
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlDatastoreName | The name of the AML Data Store. |
+| AmlDatastoreName | The name of the AzureML Data Store. |
### AmlDeploymentEvent table
The following schemas are in use by Azure Machine Learning
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlServiceName | The name of the AML Service. |
+| AmlServiceName | The name of the AzureML Service. |
### AmlInferencingEvent table
The following schemas are in use by Azure Machine Learning
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlServiceName | The name of the AML Service. |
+| AmlServiceName | The name of the AzureML Service. |
### AmlModelsEvent table
The following schemas are in use by Azure Machine Learning
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
| ResultSignature | The HTTP status code of the event. Typical values include 200, 201, 202 etc. |
-| AmlModelName | The name of the AML Model. |
+| AmlModelName | The name of the AzureML Model. |
### AmlPipelineEvent table
The following schemas are in use by Azure Machine Learning
| TimeGenerated | Time (UTC) when the log entry was generated |
| Level | The severity level of the event. Must be one of Informational, Warning, Error, or Critical. |
| ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. |
-| AmlWorkspaceId | A GUID and unique ID of the AML workspace. |
-| AmlWorkspaceId | The name of the AML workspace. |
+| AmlWorkspaceId | A GUID and unique ID of the AzureML workspace. |
+| AmlWorkspaceId | The name of the AzureML workspace. |
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
| AmlModuleId | A GUID and unique ID of the module.|
-| AmlModelName | The name of the AML Model. |
-| AmlPipelineId | The ID of the AML pipeline. |
-| AmlParentPipelineId | The ID of the parent AML pipeline (in the case of cloning). |
-| AmlPipelineDraftId | The ID of the AML pipeline draft. |
-| AmlPipelineDraftName | The name of the AML pipeline draft. |
-| AmlPipelineEndpointId | The ID of the AML pipeline endpoint. |
-| AmlPipelineEndpointName | The name of the AML pipeline endpoint. |
+| AmlModelName | The name of the AzureML Model. |
+| AmlPipelineId | The ID of the AzureML pipeline. |
+| AmlParentPipelineId | The ID of the parent AzureML pipeline (in the case of cloning). |
+| AmlPipelineDraftId | The ID of the AzureML pipeline draft. |
+| AmlPipelineDraftName | The name of the AzureML pipeline draft. |
+| AmlPipelineEndpointId | The ID of the AzureML pipeline endpoint. |
+| AmlPipelineEndpointName | The name of the AzureML pipeline endpoint. |
### AmlRunEvent table
The following schemas are in use by Azure Machine Learning
| Level | The severity level of the event. Must be one of Informational, Warning, Error, or Critical. |
| ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. |
| OperationName | The name of the operation associated with the log entry |
-| AmlWorkspaceId | A GUID and unique ID of the AML workspace. |
+| AmlWorkspaceId | A GUID and unique ID of the AzureML workspace. |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
| RunId | The unique ID of the run. |
The following schemas are in use by Azure Machine Learning
| OperationName | The name of the operation associated with the log entry |
| Identity | The identity of the user or application that performed the operation. |
| AadTenantId | The AAD tenant ID the operation was submitted for. |
-| AmlEnvironmentName | The name of the AML environment configuration. |
-| AmlEnvironmentVersion | The name of the AML environment configuration version. |
+| AmlEnvironmentName | The name of the AzureML environment configuration. |
+| AmlEnvironmentVersion | The name of the AzureML environment configuration version. |
## See also
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
Azure Machine Learning AutoML for Images requires input image data to be prepare
| Key | Description | Example |
| -- |-|--|
-| `image_url` | Image location in AML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
+| `image_url` | Image location in AzureML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
| `image_details` | Image details<br>`Optional, Dictionary` | `"image_details":{"format": "jpg", "width": "400px", "height": "258px"}` |
| `format` | Image type (all the available Image formats in [Pillow](https://pillow.readthedocs.io/en/stable/releasenotes/8.0.1.html) library are supported)<br>`Optional, String from {"jpg", "jpeg", "png", "jpe", "jfif","bmp", "tif", "tiff"}` | `"jpg" or "jpeg" or "png" or "jpe" or "jfif" or "bmp" or "tif" or "tiff"` |
| `width` | Width of the image<br>`Optional, String or Positive Integer` | `"400px" or 400`|
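For illustration, here's one JSON Line assembled from the fields above, built with Python (the `label` field is an assumption for the image classification case and isn't shown in this excerpt):

```python
import json

# Hypothetical single JSON Line for image classification
record = {
    "image_url": "AmlDatastore://data_directory/Image_01.jpg",
    "image_details": {"format": "jpg", "width": "400px", "height": "258px"},
    "label": "cat",  # assumed label field
}
print(json.dumps(record))
```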
The following is an example of input data format/schema in each JSON Line for im
| Key | Description | Example |
| -- |-|--|
-| `image_url` | Image location in AML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
+| `image_url` | Image location in AzureML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
| `image_details` | Image details<br>`Optional, Dictionary` | `"image_details":{"format": "jpg", "width": "400px", "height": "258px"}` |
| `format` | Image type (all the Image formats available in [Pillow](https://pillow.readthedocs.io/en/stable/releasenotes/8.0.1.html) library are supported)<br>`Optional, String from {"jpg", "jpeg", "png", "jpe", "jfif", "bmp", "tif", "tiff"}` | `"jpg" or "jpeg" or "png" or "jpe" or "jfif" or "bmp" or "tif" or "tiff"` |
| `width` | Width of the image<br>`Optional, String or Positive Integer` | `"400px" or 400`|
Here,
| Key | Description | Example |
| -- |-|--|
-| `image_url` | Image location in AML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
+| `image_url` | Image location in AzureML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
| `image_details` | Image details<br>`Optional, Dictionary` | `"image_details":{"format": "jpg", "width": "400px", "height": "258px"}` |
| `format` | Image type (all the Image formats available in [Pillow](https://pillow.readthedocs.io/en/stable/releasenotes/8.0.1.html) library are supported. But for YOLO only image formats allowed by [opencv](https://pypi.org/project/opencv-python/4.3.0.36/) are supported)<br>`Optional, String from {"jpg", "jpeg", "png", "jpe", "jfif", "bmp", "tif", "tiff"}` | `"jpg" or "jpeg" or "png" or "jpe" or "jfif" or "bmp" or "tif" or "tiff"` |
| `width` | Width of the image<br>`Optional, String or Positive Integer` | `"499px" or 499`|
The following is an example JSONL file for instance segmentation.
| Key | Description | Example |
| -- |-|--|
-| `image_url` | Image location in AML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
+| `image_url` | Image location in AzureML datastore<br>`Required, String` | `"AmlDatastore://data_directory/Image_01.jpg"` |
| `image_details` | Image details<br>`Optional, Dictionary` | `"image_details":{"format": "jpg", "width": "400px", "height": "258px"}` |
| `format` | Image type<br>`Optional, String from {"jpg", "jpeg", "png", "jpe", "jfif", "bmp", "tif", "tiff" }` | `"jpg" or "jpeg" or "png" or "jpe" or "jfif" or "bmp" or "tif" or "tiff"` |
| `width` | Width of the image<br>`Optional, String or Positive Integer` | `"499px" or 499`|
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
* A sample notebook may not work if it needs access to public data.
* IP address ranges: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure China](https://www.microsoft.com//download/details.aspx?id=57062) instead.
-* Azure Machine Learning compute instances preview is not supported in a workspace where Private Endpoint is enabled for now, but CI will be supported in the next deployment for the service expansion to all AML regions.
+* Azure Machine Learning compute instances preview isn't currently supported in a workspace where Private Endpoint is enabled, but compute instances will be supported in the next deployment for the service expansion to all AzureML regions.
* Searching for assets in the web UI with Chinese characters will not work correctly.

## Next steps
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
Previously updated : 05/30/2022 Last updated : 06/16/2022

# Troubleshoot issues for Azure Managed Grafana
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
```
> [!IMPORTANT]
- > If you are using hybrid cluster as a method of migrating historic data into the new Azure Managed Instance Cassandra data centers, ensure that you run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this only after all of the above steps have been taken. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required).
+ > If you are using a hybrid cluster as a method of migrating historic data into the new Azure Managed Instance Cassandra data centers, ensure that you disable automatic repairs:
+ > ```azurecli-interactive
+ > az managed-cassandra cluster update --resource-group $resourceGroupName --cluster-name $clusterName --repair-enabled false
+ > ```
+ > Then run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this only after all of the above steps have been taken. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required). After everything is done and the old data center is decommissioned, you can enable automatic repair again.
+
+ > [!NOTE]
+ > To speed up repairs, we advise increasing both stream throughput and compaction throughput (if system load permits), as in the example below:
+ >```azure-cli
+ > az managed-cassandra cluster invoke-command --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments "setstreamthroughput"="" "7000"=""
+ >
+ > az managed-cassandra cluster invoke-command --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments "setcompactionthroughput"="" "960"=""
+ >```
## Troubleshooting
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Previously updated : 03/29/2022 Last updated : 06/28/2022

# Configure a managed application plan
Indicate who should have management access to this managed application in each s
Complete the following steps for Global Azure and Azure Government Cloud, as applicable.

1. In the **Azure Active Directory Tenant ID** box, enter the Azure AD Tenant ID (also known as directory ID) containing the identities of the users, groups, or applications you want to grant permissions to.
-1. In the **Principal ID** box (also known as object id), provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) on the Azure portal.
+1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to grant permission to the managed resource group. Select a user from the list at the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) and copy the Object ID value of that user.
1. From the **Role definition** list, select an Azure AD built-in role. The role you select describes the permissions the principal will have on the resources in the customer subscription.
1. To add another authorization, select the **Add authorization (max 100)** link, and repeat steps 1 through 3.
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-setup.md
Previously updated : 03/28/2022 Last updated : 06/29/2022

# Create an Azure application offer
A test drive is a great way to showcase your offer to potential customers by giv
### Customer lead management
-Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest or deploys your product.
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
#### To configure the connection details in Partner Center
marketplace Azure Container Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-offer-setup.md
Previously updated : 03/28/2022 Last updated : 06/29/2022

# Create an Azure Container offer
Enter a descriptive name that we'll use to refer to this offer solely within Par
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure the lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Azure Vm Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-offer-setup.md
Previously updated : 03/28/2022 Last updated : 06/29/2022

# Create a virtual machine offer on Azure Marketplace
For VM offers, Azure Resource Manager (ARM) deployment is the only test drive op
To enable a test drive, select the **Enable a test drive** check box; this will enable a Test drive tab in the left-nav menu. You will configure and create the listing of your test drive using that tab later in [Configure a VM test drive](azure-vm-test-drive.md).
-With test drive, configuring a CRM for customer leads is required (see next section). To remove test drive from your offer, clear this check box.
+When a customer activates a test drive, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the offer to your customer relationship management (CRM) system to manage leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure the lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Create Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-offer.md
Previously updated : 03/28/2022 Last updated : 06/29/2022

# Create a consulting service offer
To publish a consulting service offer, you must meet certain eligibility require
## Configure lead management
-Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest in your consulting service. You can modify this connection at any time during or after you create the offer. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
To configure the lead management in Partner Center:
-1. In Partner Center, go to the **Offer setup** tab.
-2. Under **Customer leads**, select the **Connect** link.
-3. In the **Connection details** dialog box, select a lead destination from the list.
-4. Complete the fields that appear. For detailed steps, see the following articles:
+1. In Partner Center, go to the **Offer setup** tab.
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
+1. Complete the fields that appear. For detailed steps, see the following articles:
* [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
* [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
To configure the lead management in Partner Center:
* [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
* [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
-5. To validate the configuration you provided, select the **Validate link**.
-6. When you've configured the connection details, select **Connect**.
-7. Select **Save draft**.
+1. To validate the configuration you provided, select the **Validate** link.
+1. When you've configured the connection details, select **Connect**.
+1. Select **Save draft**.
After you submit your offer for publication in Partner Center, we'll validate the connection and send you a test lead. While you preview the offer before it goes live, test your lead connection by trying to purchase the offer yourself in the preview environment.
marketplace Create Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-managed-service-offer.md
This section does not apply for this offer type.
## Customer leads
-Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest in your consulting service. You can modify this connection at any time during or after you create the offer. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
To configure the lead management in Partner Center:

1. In Partner Center, go to the **Offer setup** tab.
-2. Under **Customer leads**, select the **Connect** link.
-3. In the **Connection details** dialog box, select a lead destination from the list.
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
4. Complete the fields that appear. For detailed steps, see the following articles:

   - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
To configure the lead management in Partner Center:
- [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
- [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
-5. To validate the configuration you provided, select the **Validate link**.
-6. When you've configured the connection details, select **Connect**.
-7. Select **Save draft**.
+1. To validate the configuration you provided, select the **Validate** link.
+1. When you've configured the connection details, select **Connect**.
+1. Select **Save draft**.
-After you submit your offer for publication in Partner Center, we'll validate the connection and send you a test lead. While you preview the offer before it goes live, test your lead connection by trying to purchase the offer yourself in the preview environment.
+ After you submit your offer for publication in Partner Center, we'll validate the connection and send you a test lead. While you preview the offer before it goes live, test your lead connection by trying to purchase the offer yourself in the preview environment.
-> [!TIP]
-> Make sure the connection to the lead destination stays updated so you don't lose any leads.
+ > [!TIP]
+ > Make sure the connection to the lead destination stays updated so you don't lose any leads.
-Select **Save draft** before continuing to the next tab, **Properties**.
+1. Select **Save draft** before continuing to the next tab, **Properties**.
## Next step
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer.md
Previously updated : 03/28/2022 Last updated : 06/29/2022

# Create a SaaS offer
A test drive is a great way to showcase your offer to potential customers by giv
Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest or deploys your product. You can modify this connection at any time during or after you create the offer.
+### Configure the connection details in Partner Center
+
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+ > [!NOTE]
-> You must configure lead management if you're selling your offer through Microsoft or you selected the **Contact Me** listing option. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+> Connecting to a CRM system is optional.
-### Configure the connection details in Partner Center
+To configure the lead management in Partner Center:
+
+1. In Partner Center, go to the **Offer setup** tab.
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+   - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint) (see the minimal receiver sketch after these steps)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+1. To validate the configuration you provided, select the **Validate** link.
+1. When you've configured the connection details, select **Connect**.
+1. Select **Save draft** before continuing to the next tab, **Properties**.
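
If you choose the HTTPS endpoint destination, the commercial marketplace posts each lead to your service as a JSON document. The following is a minimal receiver sketch, assuming a Flask app and representative field names such as `UserDetails` and `OfferTitle`; the exact payload schema is defined in the HTTPS endpoint article linked above.

```python
# Minimal sketch of an HTTPS endpoint that accepts commercial marketplace
# lead notifications. Field names here are representative examples only;
# consult the lead-management instructions for the exact payload schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/marketplace/leads", methods=["POST"])
def receive_lead():
    lead = request.get_json(force=True, silent=True)
    if lead is None:
        return jsonify({"error": "expected a JSON body"}), 400

    # Hypothetical fields: record whatever identifies the customer and offer.
    user = lead.get("UserDetails", {})
    print(f"Lead for {lead.get('OfferTitle')} from {user.get('Email')}")

    # Persist the lead (database, CRM API, queue, ...) before returning 200,
    # so a transient failure on your side doesn't silently drop it.
    return "", 200

if __name__ == "__main__":
    app.run(port=8443)
```

Acknowledging quickly and persisting before you return keeps lead delivery reliable if your downstream systems are briefly unavailable.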
## Configure Microsoft 365 App integration
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
Previously updated : 04/18/2022 Last updated : 06/29/2022 # Create a Dynamics 365 Business Central offer
For **How do you want potential customers to interact with this listing offer?**
> [!NOTE]
> The tokens your application will receive through your trial link can only be used to obtain user information through Azure Active Directory (Azure AD) to automate account creation in your app. Microsoft accounts are not supported for authentication using this token.
-- **Contact me** – Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see [Customer leads](#customer-leads).
+- **Contact me** – Collect customer contact information in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. You can also connect your customer relationship management (CRM) system to manage leads there.
+ > [!NOTE]
+ > Connecting to a CRM system is optional. For more information about configuring your CRM, see [Customer leads](#customer-leads).
## Test drive
To enable a test drive for a fixed period of time, select the **Enable a test dr
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Previously updated : 05/25/2022 Last updated : 06/29/2022 # Create a Dynamics 365 apps on Dataverse and Power Apps offer
Enter a descriptive name that we'll use to refer to this offer solely within Par
> [!NOTE]
> The tokens your application will receive through your trial link can only be used to obtain user information through Azure Active Directory (Azure AD) to automate account creation in your app. Microsoft accounts are not supported for authentication using this token.
- - **Contact me** – Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see [Customer leads](#customer-leads).
+ - **Contact me** – Collect customer contact information in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. You can also connect your customer relationship management (CRM) system to manage leads there.
+ > [!NOTE]
+ > Connecting to a CRM system is optional. For more information about configuring your CRM, see [Customer leads](#customer-leads).
## Test drive
A test drive is a great way to showcase your offer to potential customers by giv
> [!TIP]
> A test drive is different from a free trial. You can offer either a test drive, a free trial, or both. Both provide customers with your solution for a fixed period of time. But a test drive also includes a hands-on, self-guided tour of your product's key features and benefits, demonstrated in a real-world implementation scenario.
-To enable a test drive, select the **Enable a test drive** check box and select the **Type of test drive**. You will configure the test drive later. With test drive, you must also configure your offer to a CRM system for customer leads (see next section). To remove test drive from your offer, clear this check box.
+To enable a test drive, select the **Enable a test drive** check box and select the **Type of test drive**. You will configure the test drive later. To remove test drive from your offer, clear this check box.
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
+
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
Previously updated : 04/18/2022 Last updated : 06/29/2022 # Create a Dynamics 365 Operations Apps offer
A test drive is a great way to showcase your offer to potential customers by giv
> [!TIP]
> A test drive is different from a free trial. You can offer either a test drive, a free trial, or both. Both provide customers with your solution for a fixed period of time. But a test drive also includes a hands-on, self-guided tour of your product's key features and benefits, demonstrated in a real-world implementation scenario.
-To enable a test drive, select the **Enable a test drive** check box and select the **Type of test drive**. You will configure the test drive later. With test drive, you must also configure your offer to a CRM system for customer leads (see next section). To remove test drive from your offer, clear this check box.
+To enable a test drive, select the **Enable a test drive** check box and select the **Type of test drive**. You will configure the test drive later. To remove test drive from your offer, clear this check box.
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu.
## Business Applications ISV Program
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 05/11/2022-- Last updated : 06/29/2022++ # Your commercial marketplace benefits
marketplace Iot Edge Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-offer-setup.md
Previously updated : 03/28/2022 Last updated : 06/29/2022 # Create an IoT Edge Module offer
Enter a descriptive name that we'll use to refer to this offer solely within Par
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. Select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
description: This article describes pricing, billing, invoicing, and payout cons
Previously updated : 10/26/2021 Last updated : 06/29/2022
The transact publishing option is currently supported for the following offer ty
| Offer type | Billing cadence | Metered billing | Pricing model |
| - | - | - | - |
| Azure Application <br>(Managed application) | Monthly | Yes | Usage-based |
-| Azure Virtual Machine | Monthly* | No | Usage-based, BYOL |
+| Azure Virtual Machine | Monthly<sup>1</sup> | No | Usage-based, BYOL |
| Software as a service (SaaS) | Monthly and annual | Yes | Flat rate, per user, usage-based. |
+| Dynamics 365 apps on Dataverse and Power Apps<sup>2</sup> | Monthly and annual | No | Per user |
-\* Azure Virtual Machine offers support usage-based billing plans. These plans are billed monthly for hourly use of the subscription based on per core, per core size, or per market and core size usage.
+<sup>1</sup> Azure Virtual Machine offers support usage-based billing plans. These plans are billed monthly for hourly use of the subscription based on per core, per core size, or per market and core size usage.
+
+<sup>2</sup> Dynamics 365 apps on Dataverse and Power Apps offers that you transact through Microsoft are automatically enabled for license management. See [ISV app license management](isv-app-license.md).
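
As a rough illustration of the usage-based model in footnote 1, the monthly software charge for a virtual machine plan is the hourly per-core price multiplied by the core count and the hours the VM ran. A minimal sketch with a hypothetical price (real prices are set per plan and can vary by core size and market):

```python
# Rough illustration of a usage-based VM software charge (see footnote 1).
# The hourly price below is hypothetical; actual prices are set per plan
# and may differ by core size or market.
HOURLY_PRICE_PER_CORE = 0.02  # USD per core per hour (hypothetical)

def monthly_software_charge(cores: int, hours_run: float) -> float:
    """Software charge for one VM over one billing month."""
    return cores * hours_run * HOURLY_PRICE_PER_CORE

# An 8-core VM that ran all month (~730 hours):
print(f"${monthly_software_charge(8, 730):.2f}")  # -> $116.80
```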
### Metered billing
The ability to transact through Microsoft is available for the following commerc
- **SaaS application**: Must be a multitenant solution, use [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for authentication, and integrate with the [SaaS Fulfillment APIs](partner-center-portal/pc-saas-fulfillment-apis.md). Azure infrastructure usage is managed and billed directly to you (the publisher), so you must account for Azure infrastructure usage fees and software licensing fees as a single cost item. For detailed guidance, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#plans).
+- **Dynamics 365 Dataverse apps and Power Apps**: Select "Per user" pricing to enable Dynamics 365 Dataverse apps and Power Apps to be sold in the AppSource marketplace. Customers can manage licenses of these offers in the Microsoft Admin Center.
+
## Private plans

You can create a private plan for an offer, complete with negotiated, deal-specific pricing, or custom configurations.
marketplace Marketplace Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-containers.md
Previously updated : 03/15/2022 Last updated : 06/29/2022 # Plan an Azure container offer
These are the available licensing options for Azure Container offers:
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive; otherwise, connecting to a CRM is optional. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
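
If you route leads to an Azure table, you can read them back with the Azure Tables client library. A minimal sketch, assuming a table named `MarketplaceLeads` and a connection string in an environment variable (both placeholders):

```python
# Minimal sketch: read marketplace leads routed to an Azure table.
# The table name and connection-string variable are placeholders.
import os
from azure.data.tables import TableClient

client = TableClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], table_name="MarketplaceLeads"
)
for entity in client.list_entities():
    # Column names depend on the lead schema; dump each entity for inspection.
    print(dict(entity))
```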
## Legal contracts
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
Previously updated : 06/06/2022 Last updated : 06/29/2022 # Plan a Microsoft Dynamics 365 offer
The following table describes the transaction process of each listing option.
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Legal
marketplace Marketplace Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-iot-edge.md
Previously updated : 03/16/2022 Last updated : 06/29/2022 # Plan an IoT Edge module offer
In all cases, IoT Edge modules should select the **Transact** publishing option.
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive; otherwise, connecting to a CRM is optional. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Legal contracts
marketplace Marketplace Power Bi Visual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-power-bi-visual.md
Previously updated : 09/21/2021 Last updated : 06/29/2022 # Plan a Power BI visual offer
The technical requirements to get a Power BI visual offer published are detailed
Before submitting a Power BI visual to AppSource, ensure you've read the Power BI visuals [guidelines](/power-bi/developer/visuals/guidelines-powerbi-visuals) and [tested](/power-bi/developer/visuals/submission-testing) your visual.
+## Customer leads
+
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
## Legal contracts

Provide an **End-User License Agreement (EULA)** file for your Power BI visual.
marketplace Marketplace Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-power-bi.md
Previously updated : 03/15/2022 Last updated : 06/29/2022 # Plan a Power BI App offer
This is the only licensing option available for Power BI app offers:
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive; otherwise, connecting to a CRM is optional. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Legal contracts
marketplace Marketplace Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-virtual-machines.md
Previously updated : 04/15/2022 Last updated : 06/29/2022 # Plan a virtual machine offer
You can enable a test drive that lets customers try your offer prior to purchase
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive (see the preceding section). Otherwise, connecting to a CRM is optional.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Legal contracts
marketplace Pc Saas Fulfillment Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-webhook.md
The publisher must implement a webhook in the SaaS service to keep the SaaS subs
```json
{
-"id": "<guid>",
-"activityId": "<guid>",
-"publisherId": "XXX",
-"offerId": "offerid",
-"planId": "planid",
-"quantity": 100,
-"subscriptionId": "<guid>",
-"timeStamp": "2022-02-14T20:26:05.1419317Z",
-"action": "ChangeQuantity",
-"status": "InProgress",
-"operationRequestSource": "Partner",
+ "id": "<guid>",
+ "activityId": "<guid>",
+ "publisherId": "XXX",
+ "offerId": "YYY",
+ "planId": "plan1",
+ "quantity": 100,
+ "subscriptionId": "<guid>",
+ "timeStamp": "2022-02-14T20:26:05.1419317Z",
+ "action": "ChangeQuantity",
+ "status": "InProgress",
+ "operationRequestSource": "Partner",
+ "subscription":
+ {
+ "id": "<guid>",
+ "name": "Test",
+ "publisherId": "XXX",
+ "offerId": "YYY",
+ "planId": "plan1",
+ "quantity": 10,
+ "beneficiary":
+ {
+ "emailId": "XX@gmail.com",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "1234567890",
+ },
+ "purchaser":
+ {
+ "emailId": "XX@gmail.com",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "1234567890",
+ },
+ "allowedCustomerOperations": ["Delete", "Update", "Read"],
+ "sessionMode": "None",
+ "isFreeTrial": false,
+ "isTest": false,
+ "sandboxType": "None",
+ "saasSubscriptionStatus": "Subscribed",
+ "term":
+ {
+ "startDate": "2022-02-10T00:00:00Z",
+ "endDate": "2022-03-12T00:00:00Z",
+ "termUnit": "P1M",
+ "chargeDuration": null,
+ },
+ "autoRenew": true,
+ "created": "2022-01-10T23:15:03.365988Z",
+ "lastModified": "2022-02-14T20:26:04.5632549Z",
+ },
+ "purchaseToken": null,
+}
```
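
Your SaaS service must acknowledge these calls quickly and act on the `action` and `status` fields. Below is a minimal, illustrative receiver for payloads shaped like the example above, using only the standard library; the endpoint path is an assumption, and a production implementation must also validate the bearer token Microsoft sends with each call before trusting the payload.

```python
# Minimal, illustrative SaaS fulfillment webhook receiver.
# A real implementation must validate the Authorization bearer token
# on every call before acting on the payload.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))

        action = event.get("action")          # e.g. "ChangeQuantity"
        sub = event.get("subscription", {})
        print(f"{action} for subscription {sub.get('id')} "
              f"(status={event.get('status')}, quantity={event.get('quantity')})")

        # Update local subscription state here, then acknowledge promptly.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

*Webhook payload example of a subscription reinstatement event:*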
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-application-offer.md
Previously updated : 03/16/2022 Last updated : 06/29/2022 # Tutorial: Plan an Azure Application offer
You can also read about [test drive best practices](https://github.com/Azure/Azu
## Customer leads
-You must connect your offer to your customer relationship management (CRM) system to collect customer information. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and online store where they found your offer, will be sent to the CRM system that you've configured. The commercial marketplace supports a variety of CRM systems, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
-You can add or modify a CRM connection at any time during or after offer creation. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Categories and subcategories
marketplace Plan Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-consulting-service-offer.md
Previously updated : 03/16/2022 Last updated : 06/29/2022 # Plan a consulting service offer
Your service should have a predetermined duration of up to 12 months. The servic
## Customer leads
-You must connect your offer to your customer relationship management (CRM) system to collect customer information. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and online store where they found your offer, will be sent to the CRM system that you've configured. The commercial marketplace supports different kinds of CRM systems, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
-You can add or modify a CRM connection at any time during or after offer creation. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Offer listing details
marketplace Plan Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-managed-service-offer.md
Previously updated : 02/02/2022 Last updated : 06/29/2022 # Plan a Managed Service offer
Offers must meet all applicable [commercial marketplace certification policies](
## Customer leads
-You must connect your offer to your customer relationship management (CRM) system to collect customer information. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and online store where they found your offer, will be sent to the CRM system that you've configured. The commercial marketplace supports different kinds of CRM systems, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
You can add or modify a CRM connection at any time during or after offer creation.
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
Previously updated : 05/26/2022 Last updated : 06/29/2022 # Plan a SaaS offer for the commercial marketplace
You can choose to enable a test drive for your SaaS app. Test drives give custom
## Customer leads
-You must connect your offer to your customer relationship management (CRM) system to collect customer information. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and online store where they found your offer, will be sent to the CRM system that you've configured. The commercial marketplace supports a variety of CRM systems, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate.
+The commercial marketplace will collect leads with customer information so you can access them in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center. Leads will include information such as customer details along with the offer name, ID, and online store where the customer found your offer.
+
+You can also choose to connect your CRM system to your offer. The commercial marketplace supports Dynamics 365, Marketo, and Salesforce, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
You can add or modify a CRM connection at any time during or after offer creation.
marketplace Power Bi App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-app-offer-setup.md
Previously updated : 03/28/2022 Last updated : 06/29/2022 # Create a Power BI app offer
This section is blank and not applicable to Power BI apps.
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. In Partner Center, go to the **Offer setup** tab.
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. When you've configured the connection details, select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
## Next steps
marketplace Power Bi Visual Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-setup.md
For **Power BI certification** (optional), read the description carefully and if
## Customer leads
+When a customer expresses interest or deploys your product, you'll receive a lead in the [Referrals workspace](https://partner.microsoft.com/dashboard/referrals/v2/leads) in Partner Center.
-Connecting to a CRM is optional. For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+You can also connect the product to your customer relationship management (CRM) system to handle leads there.
+
+> [!NOTE]
+> Connecting to a CRM system is optional.
+
+To configure lead management in Partner Center:
+
+1. In Partner Center, go to the **Offer setup** tab.
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link.
+1. When you've configured the connection details, select **Connect**.
+
+ For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
After extensions are allow-listed and loaded, these must be installed in your da
Azure Database for PostgreSQL supports a subset of key extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document are not supported on Azure Database for PostgreSQL - Flexible Server. You cannot create or load your own extension in Azure Database for PostgreSQL.
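
For example, after an extension such as `hstore` has been added to the server's `azure.extensions` allowlist, you can confirm it is available and install it into a database. A minimal sketch using `psycopg2`, with placeholder server name and credentials:

```python
# Minimal sketch: inspect the allowlist, then install an allow-listed extension.
# Server name, database, and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    dbname="postgres",
    user="myadmin",
    password="<password>",
    sslmode="require",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SHOW azure.extensions;")          # comma-separated allowlist
    print("Allow-listed:", cur.fetchone()[0])

    cur.execute("CREATE EXTENSION IF NOT EXISTS hstore;")
    cur.execute("SELECT extname, extversion FROM pg_extension;")
    for name, version in cur.fetchall():           # extensions now installed
        print(name, version)
conn.close()
```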
+## Postgres 14 extensions
+
+The following extensions are available in Azure Database for PostgreSQL - Flexible Server instances running PostgreSQL version 14.
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Extension version** | **Description** |
+> |---|---|---|
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.1.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.1.1 | Address Standardizer US dataset example|
+> |[amcheck](https://www.postgresql.org/docs/13/amcheck.html) | 1.2 | functions for verifying relation integrity|
+> |[bloom](https://www.postgresql.org/docs/13/bloom.html) | 1.0 | bloom access method - signature file based index|
+> |[btree_gin](https://www.postgresql.org/docs/13/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
+> |[btree_gist](https://www.postgresql.org/docs/13/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
+> |[citext](https://www.postgresql.org/docs/13/citext.html) | 1.6 | data type for case-insensitive character strings|
+> |[cube](https://www.postgresql.org/docs/13/cube.html) | 1.4 | data type for multidimensional cubes|
+> |[dblink](https://www.postgresql.org/docs/13/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
+> |[dict_int](https://www.postgresql.org/docs/13/dict-int.html) | 1.0 | text search dictionary template for integers|
+> |[dict_xsyn](https://www.postgresql.org/docs/13/dict-xsyn.html) | 1.0 | text search dictionary template for extended synonym processing|
+> |[earthdistance](https://www.postgresql.org/docs/13/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
+> |[fuzzystrmatch](https://www.postgresql.org/docs/13/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+> |[hstore](https://www.postgresql.org/docs/13/hstore.html) | 1.7 | data type for storing sets of (key, value) pairs|
+> |[intagg](https://www.postgresql.org/docs/13/intagg.html) | 1.1 | integer aggregator and enumerator. (Obsolete)|
+> |[intarray](https://www.postgresql.org/docs/13/intarray.html) | 1.3 | functions, operators, and index support for 1-D arrays of integers|
+> |[isn](https://www.postgresql.org/docs/13/isn.html) | 1.2 | data types for international product numbering standards|
+> |[lo](https://www.postgresql.org/docs/13/lo.html) | 1.1 | large object maintenance |
+> |[ltree](https://www.postgresql.org/docs/13/ltree.html) | 1.2 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.1.8 | implements some of the functions from the Oracle database that are missing in Postgres|
+> |[pageinspect](https://www.postgresql.org/docs/13/pageinspect.html) | 1.8 | inspect the contents of database pages at a low level|
+> |[pg_buffercache](https://www.postgresql.org/docs/13/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.4 | Job scheduler for PostgreSQL|
+> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)|
+> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.6.1 | Extension to manage partitioned tables by time or ID |
+> |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_repack](https://github.com/reorg/pg_repack) | 1.4.7 | reorganize tables in PostgreSQL databases with minimal locks|
+> |[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) | 1.8 | track execution statistics of all SQL statements executed|
+> |[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) | 1.5 | text similarity measurement and index searching based on trigrams|
+> |[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
+> |[pgaudit](https://www.pgaudit.org/) | 1.6.2 | provides auditing functionality|
+> |[pgcrypto](https://www.postgresql.org/docs/13/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pglogical](https://github.com/2ndQuadrant/pglogical) | 2.3.2 | Logical streaming replication |
+> |[pgrouting](https://pgrouting.org/) | 3.3.0 | geospatial database to provide geospatial routing|
+> |[pgrowlocks](https://www.postgresql.org/docs/13/pgrowlocks.html) | 1.2 | show row-level locking information|
+> |[pgstattuple](https://www.postgresql.org/docs/13/pgstattuple.html) | 1.5 | show tuple-level statistics|
+> |[plpgsql](https://www.postgresql.org/docs/13/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
+> |[plv8](https://plv8.github.io/) | 3.0.0 | Trusted Javascript language extension|
+> |[postgis](https://www.postgis.net/) | 3.2.0 | PostGIS geometry, geography |
+> |[postgis_raster](https://www.postgis.net/) | 3.2.0 | PostGIS raster types and functions|
+> |[postgis_sfcgal](https://www.postgis.net/) | 3.2.0 | PostGIS SFCGAL functions|
+> |[postgis_tiger_geocoder](https://www.postgis.net/) | 3.2.0 | PostGIS tiger geocoder and reverse geocoder|
+> |[postgis_topology](https://postgis.net/docs/Topology.html) | 3.2.0 | PostGIS topology spatial types and functions|
+> |[postgres_fdw](https://www.postgresql.org/docs/13/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
+> |[sslinfo](https://www.postgresql.org/docs/13/sslinfo.html) | 1.2 | information about SSL certificates|
+> |[timescaledb](https://github.com/timescale/timescaledb) | 2.5.1 | Open-source relational database for time-series and analytics|
+> |[tsm_system_rows](https://www.postgresql.org/docs/13/tsm-system-rows.html) | 1.0 | TABLESAMPLE method which accepts number of rows as a limit|
+> |[tsm_system_time](https://www.postgresql.org/docs/13/tsm-system-time.html) | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit|
+> |[unaccent](https://www.postgresql.org/docs/13/unaccent.html) | 1.1 | text search dictionary that removes accents|
+> |[uuid-ossp](https://www.postgresql.org/docs/13/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
## Postgres 13 extensions
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Previously updated : 04/14/2022 Last updated : 06/29/2022 # Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server Azure Database for PostgreSQL - Flexible Server currently supports the following major versions:
+## PostgreSQL version 14
+
+The current minor release is **14.3**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/14/static/release-14-3.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+ ## PostgreSQL version 13
-The current minor release is **13.6**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-6.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.7**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-7.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.10**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-10.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.11**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-11.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.15**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-15.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.16**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-16.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
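
To confirm which minor version a server is actually running (for example, after a scheduled maintenance-window upgrade), query it directly. A minimal sketch with placeholder connection details:

```python
# Minimal sketch: check the running PostgreSQL version on a server.
# Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    dbname="postgres",
    user="myadmin",
    password="<password>",
    sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("SHOW server_version;")
    print(cur.fetchone()[0])   # e.g. "13.7"
conn.close()
```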
## PostgreSQL version 10 and older
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 06/17/2022 Last updated : 06/29/2022 # Release notes - Azure Database for PostgreSQL - Flexible Server This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL. ## Release: June 2022
+* Support for [**PostgreSQL version 14**](./concepts-supported-versions.md).
+* Support for [minor versions](./concepts-supported-versions.md) 14.3, 13.7, 12.11, 11.16. <sup>$</sup>
* Support for [Same-zone high availability](concepts-high-availability.md) deployment option.
* Support for choosing [standby availability zone](./how-to-manage-high-availability-portal.md) when deploying zone-redundant high availability.
* Support for [extensions](concepts-extensions.md) PLV8, pgrouting with new servers<sup>$</sup>
This page provides latest news and updates regarding feature additions, engine v
## Release: November 2021
-* Azure Database for PostgreSQL is [**Generally Available**](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/azure-database-for-postgresql-flexible-server-is-now-ga/ba-p/2987030)!!
+* Azure Database for PostgreSQL is [**Generally Available**](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/azure-database-for-postgresql-flexible-server-is-now-ga/ba-p/2987030).
* Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.4, 12.8 and 11.13 with new server creates<sup>$</sup>.
* Support for [Geo-redundant backup and restore](concepts-backup-restore.md) feature in preview in selected paired regions - East US 2, Central US, North Europe, West Europe, Japan East, and Japan West.
* Support for [new regions](overview.md#azure-regions) North Central US, Sweden Central, and West US 3.
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
Previously updated : 06/24/2022+ Last updated : 06/29/2022+
This page describes the Azure Database for PostgreSQL versioning policy, and is
* Single Server
* Flexible Server
-* Hyperscale (Citus)
+* Hyperscale (Citus)
## Supported PostgreSQL versions
Azure Database for PostgreSQL supports the following database versions.
| Version | Single Server | Flexible Server | Hyperscale (Citus) |
| -- | :-: | :-: | :-: |
-| PostgreSQL 14 | | | X |
+| PostgreSQL 14 | | X | X |
| PostgreSQL 13 | | X | X |
| PostgreSQL 12 | | X | X |
| PostgreSQL 11 | X | X | X |
The table below provides the retirement details for PostgreSQL major versions. T
| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2023
| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 | November 12, 2026
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 (Hyperscale Citus) <br> June 29, 2022 (Flexible Server) | November 12, 2026
## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL
private-5g-core Statement Of Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/statement-of-compliance.md
The implementation of all of the 3GPP specifications given in [3GPP specificatio
- IETF RFC 2279: UTF-8, a transformation format of ISO 10646.
- IETF RFC 2460: Internet Protocol, Version 6 (IPv6) Specification.
- IETF RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
-- IETF RFC 3748: Extensible Authentication Protocol (EAP).
- IETF RFC 3986: Uniform Resource Identifier (URI): Generic Syntax.
-- IETF RFC 4187: Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA).
- IETF RFC 4291: IP Version 6 Addressing Architecture.
- IETF RFC 4960: Stream Control Transmission Protocol.
-- IETF RFC 5448: Improved Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA').
- IETF RFC 5789: PATCH Method for HTTP.
- IETF RFC 6458: Sockets API Extensions for the Stream Control Transmission Protocol (SCTP).
- IETF RFC 6733: Diameter Base Protocol.
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Previously updated : 04/06/2022 Last updated : 06/28/2022
#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
Get started with Azure Private Link by creating and using a private endpoint to
In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for various Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
-* An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+- An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
- For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
+ - The example web app in this article is named **mywebapp1979**. Replace the example with your web app name.
## Create a virtual network and bastion host
You use the bastion host to connect securely to the VM for testing the private e
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. At the upper left, select **Create a resource**.
+2. In the search box at the top of the portal, enter **Virtual network**. In the search results, select **Virtual networks**.
-1. On the left pane, select **Networking**, and then select **Virtual network**.
+3. Select **+ Create** in **Virtual networks**.
-1. On the **Create virtual network** pane, select the **Basics** tab, and then enter the following values:
+4. In the **Basics** tab of **Create virtual network**, enter or select the following information.
- | Setting | Value |
- |||
- | **Project&nbsp;details** | |
- | Subscription | Select your Azure subscription. |
- | Resource group | Select **Create New**. </br> Enter **CreatePrivateEndpointQS-rg**. </br> Select **OK**. |
- | **Instance&nbsp;details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **West Europe**. |
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **CreatePrivateEndpointQS-rg** in **Name** and select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet**. |
+ | Region | Select **West Europe**. |
-1. Select the **IP Addresses** tab.
-1. On the **IP Addresses** pane, enter this value:
+5. Select **Next: IP Addresses** or the **IP Addresses** tab.
- | Setting | Value |
- |--||
- | IPv4 address space | Enter **10.1.0.0/16**. |
+6. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
-1. Under **Subnet name**, select the **Add subnet** link.
+7. In the **IP Addresses** tab, enter the following information:
-1. On the **Edit subnet** right pane, enter these values:
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
- | Setting | Value |
- |-||
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
+8. Under **Subnet name**, select the word **default**. If a subnet isn't present, select **+ Add subnet**.
-1. Select **Add**.
+9. In **Edit subnet**, enter the following information:
-1. Select the **Security** tab.
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **mySubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
-1. For **BastionHost**, select **Enable**, and then enter these values:
+10. Select **Save** or **Add**.
- | Setting | Value |
- |-|-|
- | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
- | Public IP Address | Select **Create new** and then, for **Name**, enter **myBastionIP**, and then select **OK**. |
+11. Select **Next: Security**, or the **Security** tab.
-1. Select the **Review + create** tab.
+12. Under **BastionHost**, select **Enable**. Enter the following information:
-1. Select **Create**.
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+13. Select the **Review + create** tab or select the **Review + create** button.
+
+14. Select **Create**.
+
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
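If you prefer scripting over the portal steps above, a minimal Azure PowerShell sketch of the same network follows (Az module assumed; the Bastion host is omitted for brevity):

```powershell
# Create the resource group used throughout this quickstart.
New-AzResourceGroup -Name 'CreatePrivateEndpointQS-rg' -Location 'westeurope'

# Define mySubnet, then create myVNet containing it.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'mySubnet' -AddressPrefix '10.1.0.0/24'

New-AzVirtualNetwork `
    -Name 'myVNet' `
    -ResourceGroupName 'CreatePrivateEndpointQS-rg' `
    -Location 'westeurope' `
    -AddressPrefix '10.1.0.0/16' `
    -Subnet $subnet
```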
## Create a test virtual machine

Next, create a VM that you can use to test the private endpoint.
-1. In the Azure portal, select **Create a resource**.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-1. On the left pane, select **Compute**, and then select **Virtual machine**.
+2. Select **+ Create** then **Azure virtual machine** in **Virtual machines**.
-1. On the **Create a virtual machine** pane, select the **Basics** tab, and then enter the following values:
+3. In the **Basics** tab of **Create a virtual machine**, enter or select the following information.
| Setting | Value |
| --- | --- |
- | **Project&nbsp;details** | |
+ | **Project details** | |
| Subscription | Select your Azure subscription. |
| Resource group | Select **CreatePrivateEndpointQS-rg**. |
- | **Instance&nbsp;details** | |
+ | **Instance details** | |
| Virtual machine name | Enter **myVM**. |
| Region | Select **West Europe**. |
| Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
| Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Clear the checkbox. |
| Size | Select the VM size or use the default setting. |
- | **Administrator&nbsp;account** | |
- | Authentication type | Select **Password** |
+ | **Administrator account** | |
| Username | Enter a username. |
| Password | Enter a password. |
| Confirm password | Reenter the password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
-1. Select the **Networking** tab.
+
+4. Select the **Networking** tab.
-1. On the **Networking** pane, enter the following values:
+5. In the **Networking** tab, enter or select the following information.
| Setting | Value |
| --- | --- |
- | **Network&nbsp;interface** | |
- | Virtual network | Enter **myVNet**. |
- | Subnet | Enter **mySubnet**. |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet (10.1.0.0/24)**. |
| Public IP | Select **None**. |
| NIC network security group | Select **Basic**. |
| Public inbound ports | Select **None**. |
-1. Select **Review + create**.
+6. Select **Review + create**.
-1. Review the settings, and then select **Create**.
+7. Review the settings, and then select **Create**.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
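As a hedged scripting alternative to the portal steps above, the test VM could be sketched like this; note that the simplified `New-AzVM` parameter set may create a public IP address unless you build the NIC explicitly, whereas the portal steps select **None**:

```powershell
# Prompt for the local administrator credentials used in the table above.
$cred = Get-Credential

# Create the test VM in the existing virtual network and subnet.
New-AzVM `
    -ResourceGroupName 'CreatePrivateEndpointQS-rg' `
    -Name 'myVM' `
    -Location 'westeurope' `
    -VirtualNetworkName 'myVNet' `
    -SubnetName 'mySubnet' `
    -Image 'Win2019Datacenter' `
    -Credential $cred
```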
Next, create a VM that you can use to test the private endpoint.
Next, you create a private endpoint for the web app that you created in the "Prerequisites" section.
-1. In the Azure portal, select **Create a resource**.
+> [!IMPORTANT]
+> You must have a previously deployed Azure web app to proceed with the steps in this article. For more information, see [Prerequisites](#prerequisites).
-1. On the left pane, select **Networking**, and then select **Private Link**. You might have to search for **Private Link** and then select it in the search results.
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints**.
-1. On the **Private Link** page, select **Create**.
+2. Select **+ Create** in **Private endpoints**.
-1. In **Private Link Center**, on the left pane, select **Private endpoints**.
+3. In the **Basics** tab of **Create a private endpoint**, enter or select the following information.
-1. On the **Private endpoints** pane, select **Create**.
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePrivateEndpointQS-rg** |
+ | **Instance details** | |
+ | Name | Enter **myPrivateEndpoint**. |
+ | Network Interface Name | Leave the default of **myPrivateEndpoint-nic**. |
+ | Region | Select **West Europe**. |
-1. On the **Create a private endpoint** pane, select the **Basics** tab, and then enter the following values:
+4. Select **Next: Resource**.
+
+5. In the **Resource** pane, enter or select the following information.
- | Setting | Value |
- ||--|
- | **Project&nbsp;details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreatePrivateEndpointQS-rg**. You created this resource group in an earlier section. |
- | **Instance&nbsp;details** | |
- | Name | Enter **myPrivateEndpoint**. |
- | Region | Select **West Europe**. |
+ | Setting | Value |
+ | - | -- |
+ | Connection method | Leave the default of **Connect to an Azure resource in my directory.** |
+ | Subscription | Select your subscription. |
+ | Resource type | Select **Microsoft.Web/sites**. |
+ | Resource | Select **mywebapp1979**. |
+ | Target subresource | Select **sites**. |
-1. Select the **Resource** tab.
-
-1. On the **Resource** pane, enter the following values:
-
- | Setting | Value |
- |||
- | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription | Select your subscription. |
- | Resource type | Select **Microsoft.Web/sites**. |
- | Resource | Select **\<your-web-app-name>**. </br> Select the name of the web app that you created in the "Prerequisites" section. |
- | Target sub-resource | Select **sites**. |
-
-1. Click **Next** to the **Virtual Network** tab.
-
-1. On the **Virtual Network** pane, enter the following values:
-
- | Setting | Value |
- ||--|
- | **Networking** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
- | **Private&nbsp;DNS&nbsp;integration** | |
- | Integrate with private DNS zone | Keep the default of **Yes**. |
- | Subscription | Select your subscription. |
- | Resource Group | Select Resource Group **CreatePrivateEndpointQS-rg**. |
- | Private DNS zones | Keep the default of **(New) privatelink.azurewebsites.net**. |
-
+6. Select **Next: Virtual Network**.
+
+7. In **Virtual Network**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Networking** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **myVNet/mySubnet (10.1.0.0/24)**. |
+ | Enable network policies for all private endpoints in this subnet. | Select the checkbox if you plan to apply Application Security Groups or Network Security groups to the subnet that contains the private endpoint. </br> For more information, see [Manage network policies for private endpoints](disable-private-endpoint-network-policy.md) |
+
+# [**Dynamic IP**](#tab/dynamic-ip)
+
+| Setting | Value |
+| - | -- |
+| **Private IP configuration** | Select **Dynamically allocate IP address**. |
-1. Click **Next** to **Review + create**.
-1. Select **Create**.
+# [**Static IP**](#tab/static-ip)
+
+| Setting | Value |
+| - | -- |
+| **Private IP configuration** | Select **Statically allocate IP address**. |
+| Name | Enter **myIPconfig**. |
+| Private IP | Enter **10.1.0.10**. |
+---
+8. Select **Next: DNS**.
+
+9. Leave the defaults in **DNS**. Select **Next: Tags**, then **Next: Review + create**.
+
+10. Select **Create**.
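For readers who script their deployments, the same private endpoint can be sketched with Az PowerShell. The web app's resource group placeholder is an assumption you'll need to adjust, and the private DNS zone integration from the portal steps isn't included here:

```powershell
# Look up the subnet and the web app created in the prerequisites.
$vnet   = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePrivateEndpointQS-rg'
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq 'mySubnet' }
$webApp = Get-AzWebApp -Name 'mywebapp1979' -ResourceGroupName '<your-webapp-rg>'

# Define the connection to the web app's 'sites' subresource.
$connection = New-AzPrivateLinkServiceConnection `
    -Name 'myConnection' `
    -PrivateLinkServiceId $webApp.Id `
    -GroupId 'sites'

# Create the private endpoint in mySubnet.
New-AzPrivateEndpoint `
    -Name 'myPrivateEndpoint' `
    -ResourceGroupName 'CreatePrivateEndpointQS-rg' `
    -Location 'westeurope' `
    -Subnet $subnet `
    -PrivateLinkServiceConnection $connection
```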
## Test connectivity to the private endpoint

Use the VM that you created earlier to connect to the web app across the private endpoint.
-1. In the Azure portal, on the left pane, select **Resource groups**.
-
-1. Select **CreatePrivateEndpointQS-rg**.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-1. Select **myVM**.
+2. Select **myVM**.
-1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
+3. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-1. Select the blue **Use Bastion** button.
+4. Enter the username and password that you used when you created the VM.
-1. Enter the username and password that you used when you created the VM.
+5. Select **Connect**.
-1. After you've connected, open PowerShell on the server.
+6. After you've connected, open PowerShell on the server.
-1. Enter `nslookup <your-webapp-name>.azurewebsites.net`, replacing *\<your-webapp-name>* with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
+7. Enter `nslookup mywebapp1979.azurewebsites.net`. You'll receive a message that's similar to the following example:
    ```powershell
    Server:  UnKnown
    Address:  168.63.129.16

    Non-authoritative answer:
- Name: mywebapp8675.privatelink.azurewebsites.net
+ Name: mywebapp1979.privatelink.azurewebsites.net
Address: 10.1.0.5
- Aliases: mywebapp8675.azurewebsites.net
+ Aliases: mywebapp1979.azurewebsites.net
```
- A private IP address of **10.1.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created earlier.
+ A private IP address of **10.1.0.5** is returned for the web app name if you chose a dynamic IP address in the previous steps. This address is in the subnet of the virtual network you created earlier.
-1. In the bastion connection to **myVM**, open your web browser.
+8. In the bastion connection to **myVM**, open the web browser.
-1. Enter the URL of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
+9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
    If your web app hasn't been deployed, you'll get the following default web app page:

    :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
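    Alternatively, you can verify the private connection from the same PowerShell session on **myVM**; a minimal sketch using the example app name:

    ```powershell
    # A 200 status code confirms the app answers over the private endpoint.
    Invoke-WebRequest -Uri 'https://mywebapp1979.azurewebsites.net' -UseBasicParsing |
        Select-Object StatusCode
    ```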
-1. Close the connection to **myVM**.
+10. Close the connection to **myVM**.
## Clean up resources
-If you're not going to continue to use this web app, delete the virtual network, virtual machine, and web app by doing the following:
+If you're not going to continue to use this web app, delete the virtual network, virtual machine, and web app by following these steps:
1. On the left pane, select **Resource groups**.
-1. Select **CreatePrivateEndpointQS-rg**.
+2. Select **CreatePrivateEndpointQS-rg**.
-1. Select **Delete resource group**.
+3. Select **Delete resource group**.
-1. Under **Type the resource group name**, enter **CreatePrivateEndpointQS-rg**.
+4. Under **Type the resource group name**, enter **CreatePrivateEndpointQS-rg**.
-1. Select **Delete**.
+5. Select **Delete**.
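If you'd rather script the cleanup, one Az PowerShell line does the same, assuming nothing else lives in the resource group:

```powershell
# Deletes the resource group and everything in it; -Force skips the confirmation prompt.
Remove-AzResourceGroup -Name 'CreatePrivateEndpointQS-rg' -Force
```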
-## What you've learned
+## Next steps
In this quickstart, you created:
You used the VM to test connectivity to the web app across the private endpoint.
-## Next steps
- For more information about the services that support private endpoints, see: > [!div class="nextstepaction"] > [What is Azure Private Link?](private-link-overview.md#availability)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Key Vault (Microsoft.KeyVault/managedHSMs) / Managed HSMs | privatelink.managedhsm.azure.net | managedhsm.azure.net |
| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io </br> {subzone}.privatelink.{region}.azmk8s.io | {region}.azmk8s.io |
| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net |
-| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
+| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io </br> {region}.privatelink.azurecr.io | azurecr.io </br> {region}.azurecr.io |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io |
| Azure Backup (Microsoft.RecoveryServices/vaults) / AzureBackup | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com |
| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.siterecovery.windowsazure.com | {region}.hypervrecoverymanager.windowsazure.com |
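To sanity-check one of these mappings after a private endpoint and DNS zone are in place, you can resolve the public FQDN from inside the virtual network; `myregistry` below is a hypothetical placeholder:

```powershell
# Expect a CNAME through the privatelink zone and a private A record.
Resolve-DnsName -Name 'myregistry.azurecr.io' | Select-Object Name, Type, IPAddress
```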
purview Concept Guidelines Pricing Data Estate Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing-data-estate-insights.md
+
+ Title: Pricing guidelines for Data Estate Insights
+description: This article provides a guideline to understand and strategize pricing for the Data Estate Insights components of Microsoft Purview (formerly Azure Purview).
Last updated : 06/27/2022
+# Pricing for Data Estate Insights
+
+> [!IMPORTANT]
+> The option to disable the Data Estate Insights application will only be available after 9 AM PST on July 1st.
+
+This guide covers pricing guidelines for Data Estate Insights.
+
+For full pricing guideline details for Microsoft Purview (formerly Azure Purview), see the [pricing guideline overview.](concept-guidelines-pricing.md)
+
+For specific price details, see the [Microsoft Purview (formerly Azure Purview) pricing page](https://azure.microsoft.com/pricing/details/purview/). This article will guide you through the features and factors that will affect pricing for Data Estate Insights.
+
+## Guidelines
+
+Data Estate Insights is billed on two dimensions:
+
+- **Report generation** - This incorporates the jobs that aggregate the metrics about your Microsoft Purview account that appear in specific reports.
+ > [!NOTE]
+ > On the [pricing page](https://azure.microsoft.com/pricing/details/purview/), you can find details for report generation pricing under Data Map Enrichment.
+ > :::image type="content" source="media/concept-guidelines-pricing/data-map-enrichment.png" alt-text="Screenshot of the pricing page headers, showing Data Map Enrichment selected." :::
+- **Report consumption** - This incorporates access to the report features (currently served through the UX). On the [pricing page](https://azure.microsoft.com/pricing/details/purview/), you can find details for report consumption pricing under Data Estate Insights.
+ :::image type="content" source="media/concept-guidelines-pricing/data-estate-insights.png" alt-text="Screenshot of the pricing page headers, showing Data Estate Insights selected." :::
+
+> [!IMPORTANT]
+> The Data Estate Insights application is **on** by default when you create a Microsoft Purview account. This means "State" is on and "Refresh Frequency" is set to automatic*.
+>
+> \* At this time automatic refresh is weekly.
+
+If you don't plan on using Data Estate Insights for a while, a **[data curator](catalog-permissions.md#roles) on the [root collection](reference-azure-purview-glossary.md#root-collection)** can disable Data Estate Insights features in one of two ways:
+
+- [Disable the Data Estate Insights application](#disable-the-data-estate-insights-application) - this will stop billing from both report generation and report consumption.
+- [Disable report refresh](#disable-report-refresh) - [insights readers](catalog-permissions.md#roles) have access to current reports, but reports won't be refreshed. Billing will occur for report consumption but not report generation.
+
+> [!NOTE]
+> The application or report refresh can be enabled again later at any time.
+
+A **[data curator](catalog-permissions.md#roles) on your account's [root collection](reference-azure-purview-glossary.md#root-collection)** can make these changes in the Management section of the Microsoft Purview governance portal in **Overview**, under **Feature options**. For specific steps, see the [enable or disable Data Estate Insights article](enable-disable-data-estate-insights.md).
++
+### Disable the Data Estate Insights application
+
+Disabling Data Estate Insights will disable the entire application, including these reports:
+
+- Stewardship
+- Asset
+- Glossary
+- Classification
+- Labeling
+
+The application icon will still show in the menu, but insights readers won't have access to reports at all, and report generation jobs will be stopped. The Microsoft Purview account won't receive any bill for Data Estate Insights.
+
+For steps to disable the Data Estate Insights application, see the [disable article.](enable-disable-data-estate-insights.md#disable-the-data-estate-insights-application)
+
+### Disable report refresh
+
+You can choose to disable report refreshes instead of disabling the entire Data Estate Insights application.
+
+When you disable report refreshes, insights readers will be able to view reports, but they'll see a banner on top of each report warning that the report may not be current. It will also indicate the date the report was last generated.
+
+In this case, graphs showing data from the last 30 days will appear blank after 30 days. Graphs showing a snapshot of the data map will continue to show graphs and details. When an [insights reader](catalog-permissions.md#roles) accesses an insight report, the report consumption meter will be triggered, and the Microsoft Purview account will be billed.
+
+For steps to disable report refresh, see the [disable article.](enable-disable-data-estate-insights.md#disable-report-refresh)
+
+## Next steps
+
+- [Enable or disable Data Estate Insights](enable-disable-data-estate-insights.md)
+- [Microsoft Purview, formerly Azure Purview, pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Pricing guideline overview](concept-guidelines-pricing.md)
+- [Pricing guideline Data Map](concept-guidelines-pricing-data-map.md)
purview Concept Guidelines Pricing Data Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing-data-map.md
+
+ Title: Pricing guidelines for the Microsoft Purview elastic data map
+description: This article provides a guideline to understand and strategize pricing for the elastic data map in the Microsoft Purview governance portal.
Last updated : 06/27/2022
+# Pricing for the Microsoft Purview Data Map
+
+This guide covers pricing guidelines for the data map in the Microsoft Purview governance portal.
+
+For full pricing guideline details for Microsoft Purview (formerly Azure Purview), see the [pricing guideline overview.](concept-guidelines-pricing.md)
+
+For specific price details, see the [Microsoft Purview (formerly Azure Purview) pricing page](https://azure.microsoft.com/pricing/details/purview/). This article will guide you through the features and factors that will affect pricing for the Microsoft Purview Data Map.
+
+Direct costs impacting pricing for the Microsoft Purview Data Map are based on the following three dimensions:
+- [**Elastic data map**](#elastic-data-map)
+- [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion)
+- [**Advanced resource sets**](#advanced-resource-sets)
+
+## Elastic data map
+
+- The **Data map** is the foundation of the Microsoft Purview governance portal architecture and so needs to be up to date with asset information in the data estate at any given point
+
+- The data map is charged in terms of **Capacity Unit** (CU). The data map is provisioned at one CU if the catalog is storing up to 10 GB of metadata storage and serves up to 25 data map operations/sec
+
+- The data map is always provisioned at one CU when an account is first created
+
+- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**
+
+### Operation throughput
+
+- An event-driven factor based on the Create, Read, Update, and Delete operations performed on the data map
+- Some examples of the data map operations would be:
+ - Creating an asset in Data Map
+ - Adding a relationship to an asset such as owner, steward, parent, lineage
+ - Editing an asset to add business metadata such as description, glossary term
+ - Keyword-search returning results to search result page
+ - Importing or exporting information using API
+- If there are multiple queries executed on the Data Map, the number of I/O operations also increases resulting in the scaling up of the data map
+- The number of concurrent users also forms a factor governing the data map capacity unit
+- Other factors to consider are type of search query, API interaction, workflows, approvals, and so on
+- Data burst level
+ - When there's a need for more operations/second throughput, the Data map can autoscale within the elasticity window to cater to the changed load
+ - This constitutes the **burst characteristic** that needs to be estimated and planned for
+ - The burst characteristic comprises the **burst level** and the **burst duration** for which the burst exists
+ - The **burst level** is a multiplicative index of the expected consistent elasticity under steady state
+ - The **burst duration** is the percentage of the month that such bursts (in elasticity) are expected because of growing metadata or a higher number of operations on the data map; a rough estimation sketch follows this list
+
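To make the burst factors concrete, here's a purely illustrative sizing sketch; the linear model and every number in it are assumptions for planning only, not Microsoft's billing logic (check the pricing page for real CU-hour rates):

```powershell
# Illustrative estimate of monthly data map CU-hours from steady state plus bursts.
$baseCU        = 1      # steady-state capacity units
$burstLevel    = 4      # multiplicative index during bursts (assumed)
$burstDuration = 0.10   # fraction of the month spent bursting (assumed)
$hoursPerMonth = 730

$cuHours = $baseCU * $hoursPerMonth * (1 - $burstDuration) +
           $baseCU * $burstLevel * $hoursPerMonth * $burstDuration

"Estimated data map consumption: $cuHours CU-hours/month"   # 657 + 292 = 949 here
```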
+### Metadata storage
+
+- If the number of assets in the data estate is reduced, and they're then removed from the data map through subsequent incremental scans, the storage component automatically reduces and the data map scales down
+
+## Automated scanning, classification, and ingestion
+
+There are two major automated processes that can trigger ingestion of metadata into the Microsoft Purview Data Map:
+- Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
+ - Metadata scan
+ - Automatic classification
+ - Ingestion of metadata into the Microsoft Purview Data Map
+
+- Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
+ - Ingestion of metadata and lineage into the Microsoft Purview Data Map if the account is connected to any Azure Data Factory or Azure Synapse pipelines.
++
+### Automatic scans using native connectors
+
+- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan
+
+- All scans (full or incremental) will pick up **updated, modified, or deleted** assets
+
+- It's important to consider and avoid scenarios where multiple people or groups belonging to different departments set up scans for the same data source, resulting in duplicate scanning charges
+
+- Schedule **frequent incremental scans** after the initial full scan, aligned with the changes in the data estate. This will ensure the data map is always kept up to date, and incremental scans consume fewer v-core hours than a full scan
+
+- The **"View Details"** link for a data source will enable users to run a full scan. However, consider running incremental scans after a full scan for optimized scanning, except when there's a change to the scan rule set (classifications/file types)
+
+- **Register the data source at a parent collection** and **scope scans at child collections** with different access controls to ensure there are no duplicate scanning costs
+
+- Limit the users who are allowed to register data sources for scanning through **fine-grained access control** and the **Data Source Administrator** role using [Collection authorization](./catalog-permissions.md). This will ensure only valid data sources are registered and scanning v-core hours are controlled, resulting in lower scanning costs
+
+- Consider that the **type of data source** and the **number of assets** being scanned affect the scan duration
+
+- **Create custom scan rule sets** to include only the subset of **file types** available in your data estate and **classifications** that are relevant to your business requirements to ensure optimal use of the scanners
+
+- While creating a new scan for a data source, follow the recommended **order of preparation** before actually running the scan. This includes gathering the requirements for **business-specific classifications** and **file types** (for storage accounts) so that appropriate scan rule sets can be defined, avoiding repeated scans and the unnecessary costs caused by missed requirements
+
+- Align your scan schedules with the size of your Self-Hosted Integration Runtime (SHIR) VMs (virtual machines) to avoid extra costs linked to virtual machines
+
+### Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
+
+- Metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
+
+## Advanced resource sets
+
+- The Microsoft Purview Data Map uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map
+
+- **Advanced Resource Set** is an optional feature, which allows for customers to get enriched resource set information computed such as Total Size, Partition Count, etc., and enables the customization of resource set grouping via pattern rules. If Advanced Resource Set feature isn't enabled, your data catalog will still contain resource set assets, but without the aggregated properties. There will be no "Resource Set" meter billed to the customer in this case.
+
+- Use the basic resource set feature before switching on Advanced Resource Sets in the Microsoft Purview Data Map, to verify whether your requirements are met
+
+- Consider turning on Advanced Resource Sets if:
+ - Your data lake's schema is constantly changing, and you're looking for more value beyond the basic Resource Set feature to enable the Microsoft Purview Data Map to compute parameters such as #partitions, size of the data estate, etc., as a service
+ - There's a need to customize how resource set assets get grouped.
+
+- It's important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
++
+## Next steps
+
+- [Microsoft Purview, formerly Azure Purview, pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Pricing guideline overview](concept-guidelines-pricing.md)
+- [Pricing guideline Data Estate Insights](concept-guidelines-pricing-data-estate-insights.md)
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Previously updated : 05/23/2022 Last updated : 06/27/2022
-# Pricing for Microsoft Purview (formerly Azure Purview)
+# Overview of pricing for Microsoft Purview (formerly Azure Purview)
Microsoft Purview, formerly known as Azure Purview, provides a single pane of glass for managing data governance by enabling automated scanning and classifying data at scale through the Microsoft Purview governance portal.
+For specific price details, see the [Microsoft Purview (formerly Azure Purview) pricing page](https://azure.microsoft.com/pricing/details/purview/). This article will guide you through the features and factors that will affect pricing.
+
## Why do you need to understand the components of pricing?

- While the pricing for Microsoft Purview (formerly Azure Purview) is on a subscription-based **Pay-As-You-Go** model, there are various dimensions that you can consider while budgeting
- This guideline is intended to help you plan the budgeting for Microsoft Purview in the governance portal by providing a view on the control factors that impact the budget

## Factors impacting Azure Pricing

There are **direct** and **indirect** costs that need to be considered while planning budgeting and cost management.
+Direct costs impacting Microsoft Purview pricing are based on these applications:
+- [The Microsoft Purview Data Map](concept-guidelines-pricing-data-map.md)
+- [Data Estate Insights](concept-guidelines-pricing-data-estate-insights.md)
-Direct costs impacting Microsoft Purview pricing are based on the following three dimensions:
-- [**Elastic data map**](#elastic-data-map)-- [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion)-- [**Advanced resource sets**](#advanced-resource-sets)--
-## Elastic data map
--- The **Data map** is the foundation of the Microsoft Purview governance portal architecture and so needs to be up to date with asset information in the data estate at any given point--- The data map is charged in terms of **Capacity Unit** (CU). The data map is provisioned at one CU if the catalog is storing up to 10 GB of metadata storage and serves up to 25 data map operations/sec--- The data map is always provisioned at one CU when an account is first created--- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**-
-### Operation throughput
--- An event driven factor based on the Create, Read, Update, Delete operations performed on the data map-- Some examples of the data map operations would be:
- - Creating an asset in Data Map
- - Adding a relationship to an asset such as owner, steward, parent, lineage
- - Editing an asset to add business metadata such as description, glossary term
- - Keyword-search returning results to search result page
- - Importing or exporting information using API
-- If there are multiple queries executed on the Data Map, the number of I/O operations also increases resulting in the scaling up of the data map-- The number of concurrent users also forms a factor governing the data map capacity unit-- Other factors to consider are type of search query, API interaction, workflows, approvals, and so on-- Data burst level
- - When there's a need for more operations/second throughput, the Data map can autoscale within the elasticity window to cater to the changed load
- - This constitutes the **burst characteristic** that needs to be estimated and planned for
- - The burst characteristic comprises the **burst level** and the **burst duration** for which the burst exists
- - The **burst level** is a multiplicative index of the expected consistent elasticity under steady state
- - The **burst duration** is the percentage of the month that such bursts (in elasticity) are expected because of growing metadata or higher number of operations on the data map
--
-### Metadata storage
--- If the number of assets reduces in the data estate, and are then removed in the data map through subsequent incremental scans, the storage component automatically reduces and so the data map scales down-
-## Automated scanning, classification, and ingestion
-
-There are two major automated processes that can trigger ingestion of metadata into the Microsoft Purview Data Map:
-1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
- - Metadata scan
- - Automatic classification
- - Ingestion of metadata into the Microsoft Purview Data Map
-
-2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
- - Ingestion of metadata and lineage into the Microsoft Purview Data Map if the account is connected to any Azure Data Factory or Azure Synapse pipelines.
--
-### 1. Automatic scans using native connectors
--- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan --- All scans (full or Incremental scans) will pick up **updated, modified, or deleted** assets
+For guidelines about pricing for these applications, select the links above.
-- It's important to consider and avoid the scenarios when multiple people or groups belonging to different departments set up scans for the same data source resulting in more pricing for duplicate scanning--- Schedule **frequent incremental scans** post the initial full scan aligned with the changes in the data estate. This will ensure the data map is kept up to date always and the incremental scans consume lesser v-core hours as compared to a full scan--- The **ΓÇ£View DetailsΓÇ¥** link for a data source will enable users to run a full scan. However, consider running incremental scans after a full scan for optimized scanning excepting when there's a change to the scan rule set (classifications/file types)--- **Register the data source at a parent collection** and **Scope scans at child collection** with different access controls to ensure there are no duplicate scanning costs being entailed--- Curtail the users who are allowed to register data sources for scanning through **fine grained access control** and **Data Source Administrator** role using [Collection authorization](./catalog-permissions.md). This will ensure only valid data sources are allowed to be registered and scanning v-core hours is controlled resulting in lower costs for scanning--- Consider that the **type of data source** and the **number of assets** being scanned affect the scan duration--- **Create custom scan rule sets** to include only the subset of **file types** available in your data estate and **classifications** that are relevant to your business requirements to ensure optimal use of the scanners--- While creating a new scan for a data source, follow the **order of preparation** recommended before actually running the scan. This includes gathering the requirements for **business specific classifications** and **file types** (for storage accounts) to enable appropriate scan rule sets to be defined to avoid multiple scans and control unnecessary costs for multiple scans through missed requirements--- Align your scan schedules with Self-Hosted Integration Runtime (SHIR) VMs (Virtual Machines) size to avoid extra costs linked to virtual machines-
-### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
--- metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.-
-## Advanced resource sets
--- The Microsoft Purview Data Map uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map--- **Advanced Resource Set** is an optional feature, which allows for customers to get enriched resource set information computed such as Total Size, Partition Count, etc., and enables the customization of resource set grouping via pattern rules. If Advanced Resource Set feature isn't enabled, your data catalog will still contain resource set assets, but without the aggregated properties. There will be no "Resource Set" meter billed to the customer in this case.--- Use the basic resource set feature, before switching on the Advanced Resource Sets in the Microsoft Purview Data Map to verify if requirements are met--- Consider turning on Advanced Resource Sets if:
- - Your data lakes schema is constantly changing, and you're looking for more value beyond the basic Resource Set feature to enable the Microsoft Purview Data Map to compute parameters such as #partitions, size of the data estate, etc., as a service
- - There's a need to customize how resource set assets get grouped
--- It's important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog--
-## Indirect costs
+## Indirect costs
Indirect costs impacting Microsoft Purview (formerly Azure Purview) pricing to be considered are:
Indirect costs impacting Microsoft Purview (formerly Azure Purview) pricing to b
- Multi-cloud egress charges - Consider the egress charges (minimal charges added as a part of the multi-cloud subscription) associated with scanning multi-cloud (for example AWS, Google) data sources running native services excepting the S3 and RDS sources - ## Next steps-- [Microsoft Purview, forerly Azure Purview, pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)+
+- [Microsoft Purview, formerly Azure Purview, pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Pricing guideline Data Estate Insights](concept-guidelines-pricing-data-estate-insights.md)
+- [Pricing guideline Data Map](concept-guidelines-pricing-data-map.md)
purview Enable Disable Data Estate Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/enable-disable-data-estate-insights.md
+
+ Title: Disable or enable Data Estate Insights
+description: This article provides the steps to disable or enable Data Estate Insights in the Microsoft Purview governance portal.
Last updated : 06/27/2022
+# Disable or enable Data Estate Insights
+
+> [!IMPORTANT]
+> The option to disable the Data Estate Insights application will only be available after 9 AM PST on July 1st.
+
+Microsoft Purview Data Estate Insights automatically aggregates metrics and creates reports about your Microsoft Purview account and your data estate. When you scan registered sources and populate your Microsoft Purview Data Map, the Data Estate Insights application automatically extracts valuable governance gaps and highlights them in its top metrics. It also provides a drill-down experience that enables all stakeholders, such as data owners and data stewards, to take appropriate action to close the gaps.
+
+These features are optional and can be enabled or disabled at any time. This article provides the specific steps required to enable or disable Microsoft Purview Data Estate Insights features.
+
+> [!IMPORTANT]
+> The Data Estate Insights application is **on** by default when you create a Microsoft Purview account. This means "State" is on and "Refresh Frequency" is set to automatic*. As the Data Map is populated and curated, the Insights App shows data in the reports. The reports are ready for consumption by anyone with the Insights Reader role.
+>
+> \* At this time automatic refresh is weekly.
+
+If you don't plan on using Data Estate Insights for a time, a **[data curator](catalog-permissions.md#roles) on the [root collection](reference-azure-purview-glossary.md#root-collection)** can disable Data Estate Insights in one of two ways:
+
+- [Disable the Data Estate Insights application](#disable-the-data-estate-insights-application) - this will stop billing from both report generation and report consumption.
+- [Disable report refresh](#disable-report-refresh) - Insights readers have access to current reports, but reports won't be refreshed. Billing will occur for report consumption but not report generation.
+
+Steps for both methods, and for re-enablement, are below.
+
+For more information about billing for Data Estate Insights, see our [pricing guidelines](concept-guidelines-pricing-data-estate-insights.md).
+
+## Disable the Data Estate Insights application
+
+> [!NOTE]
+> To be able to disable this application, you will need to have the [data curator role](catalog-permissions.md#roles) on your account's [root collection.](reference-azure-purview-glossary.md#root-collection)
+
+Disabling Data Estate Insights will disable the entire application, including these reports:
+- Stewardship
+- Asset
+- Glossary
+- Classification
+- Labeling
+
+The application icon will still show in the menu, but insights readers won't have access to reports at all, and report generation jobs will be stopped. The Microsoft Purview account won't receive any bill for Data Estate Insights.
+
+To disable the Data Estate Insights application, a user with the [data curator role](catalog-permissions.md#roles) at the [root collection](reference-azure-purview-glossary.md#root-collection) can follow these steps:
+
+1. In the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), go to the **Management** section.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/locate-management.png" alt-text="Screenshot of the Microsoft Purview governance portal left menu, with the Management section highlighted and overview shown selected in the next menu." :::
+
+1. Then select **Overview**.
+1. In the **Feature options** menu, locate Data Estate Insights, and select the **State** toggle to change it to **Off**.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/disable-option.png" alt-text="Screenshot of the Overview window in the Management section of the Microsoft Purview governance portal with the State toggle highlighted for Data Estate Insights feature options." :::
+
+Once you have disabled Data Estate Insights, the icon will still appear in the left-hand menu, but users attempting to access it will receive a warning stating that the application has been disabled.
++
+## Disable report refresh
+
+> [!NOTE]
+> To be able to disable or edit report refresh, you will need to have the [data curator role](catalog-permissions.md#roles) on your account's [root collection.](reference-azure-purview-glossary.md#root-collection)
+
+You can choose to disable report refreshes instead of disabling the entire Data Estate Insights application. When you disable report refreshes, users with the [insights reader role](catalog-permissions.md#roles) will still be able to view reports, but they'll see a warning at the top of each report indicating that the data may not be current, along with the date of the last refresh.
+
+Graphs that show data from the last 30 days will appear blank after 30 days, while graphs showing a snapshot of the data map will continue to show graphs and details.
++
+To disable the Data Estate Insights report refresh, a user with the [data curator role](catalog-permissions.md#roles) at the [root collection](reference-azure-purview-glossary.md#root-collection) can follow these steps:
+
+1. In the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), go to the **Management** section.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/locate-management.png" alt-text="Screenshot of the Microsoft Purview governance portal left menu, with the Management section highlighted." :::
+
+1. Then select **Overview**.
+1. In the **Feature options** menu, locate Data Estate Insights, select the **Refresh frequency** drop-down menu, and select **Off**.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/refresh-frequency.png" alt-text="Screenshot of the Overview window in the Management section of the Microsoft Purview governance portal with the refresh frequency dropdown highlighted for Data Estate Insights feature options." :::
+
+## Enable Data Estate Insights and report refresh
+
+> [!NOTE]
+> To be able to enable Data Estate Insights, enable report refresh, or edit report refresh, you will need to have the [data curator role](catalog-permissions.md#roles) on your account's [root collection.](reference-azure-purview-glossary.md#root-collection)
+
+If Data Estate Insights or report refresh has been disabled in your Microsoft Purview governance portal environment, a user with the [data curator role](catalog-permissions.md#roles) at the [root collection](reference-azure-purview-glossary.md#root-collection) can re-enable either at any time by following these steps:
+
+1. In the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), go to the **Management** section.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/locate-management.png" alt-text="Screenshot of the Microsoft Purview governance portal Management section highlighted.":::
+
+1. Then select **Overview**.
+1. In the **Feature options** menu, locate Data Estate Insights, select the **Refresh frequency** drop-down menu, and select a refresh rate to enable report refresh. Or, select the **State** toggle to change it to **On** to re-enable the entire application.
+
+ :::image type="content" source="media/enable-disable-data-estate-insights/disable-data-estate-insights.png" alt-text="Screenshot of the Overview window in the Management section of the Microsoft Purview governance portal with Data Estate Insights highlighted." :::
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 04/29/2022 Last updated : 06/28/2022
Use any of the following deployment checklists during the setup or for troublesh
   3. If the user was recently created, sign in with the user at least once to make sure the password was reset successfully and the user can successfully initiate a session.
   4. No MFA or Conditional Access policies are enforced on the user.
1. Validate App registration settings to make sure:
- 5. App registration exists in your Azure Active Directory tenant.
- 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
+ 1. App registration exists in your Azure Active Directory tenant.
+ 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** are set up with read for the following APIs:
      1. Power BI Service Tenant.Read.All
      2. Microsoft Graph openid
      3. Microsoft Graph User.Read
-1. Validate Self-hosted runtime settings:
+ 3. Under **Authentication**, **Allow public client flows** is enabled.
+2. Validate Self-hosted runtime settings:
   1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM.
   2. Network connectivity from Self-hosted runtime to Power BI tenant is enabled.
   3. Network connectivity from Self-hosted runtime to Microsoft services is enabled.
Use any of the following deployment checklists during the setup or for troublesh
      1. Power BI Service Tenant.Read.All
      2. Microsoft Graph openid
      3. Microsoft Graph User.Read
-1. Review network configuration and validate if:
+ 3. Under **Authentication**, **Allow public client flows** is enabled. (A verification sketch follows these checklists.)
+2. Review network configuration and validate if:
   1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed.
   2. All required [private endpoints for Microsoft Purview](/azure/purview/catalog-private-link-end-to-end) are deployed.
   3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled through private network.
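Before creating the scan, you can spot-check the app registration settings from the checklists above programmatically. The sketch below uses Microsoft Graph PowerShell; the module and the display name `purview-powerbi-scan` are assumptions, so adjust them to your environment:

```powershell
# Read-only look at the app registration (Microsoft.Graph module assumed).
Connect-MgGraph -Scopes 'Application.Read.All'

$app = Get-MgApplication -Filter "displayName eq 'purview-powerbi-scan'"

# 'Allow public client flows' surfaces as isFallbackPublicClient on the application.
$app.IsFallbackPublicClient

# Lists the configured API permissions as resource/permission IDs for review.
$app.RequiredResourceAccess | Format-List
```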
To create and run a new scan, do the following:
   - Microsoft Graph User.Read

   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+
+1. Under **Advanced settings**, enable **Allow Public client flows**.
-1. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+2. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
1. Navigate to **Sources**.
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
If you're using a [rowversion](/sql/t-sql/data-types/rowversion-transact-sql) da
* Uses the rowversion data type for the high water mark column in the indexer SQL query. Using the correct data type improves indexer query performance.
-* Subtracts one from the rowversion value before the indexer query runs. Views with one-to-many joins may have rows with duplicate rowversion values. Subtracting 1one ensures the indexer query doesn't miss these rows.
+* Subtracts one from the rowversion value before the indexer query runs. Views with one-to-many joins may have rows with duplicate rowversion values. Subtracting one ensures the indexer query doesn't miss these rows.
To enable this property, create or update the indexer with the following configuration:
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Captions and answers are extracted verbatim from text in the search document. Th
## Prerequisites
-+ A Cognitive Search service at a Standard tier (S1, S2, S3), located in one of these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. If you have an existing S1 or greater service in one of these regions, you can enable semantic search on your service without having to create a new one.
++ A Cognitive Search service at a Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), located in one of these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. If you have an existing S1 or greater service in one of these regions, you can enable semantic search on your service without having to create a new one (see the sketch after this list).

+ [Semantic search enabled on your search service](semantic-search-overview.md#enable-semantic-search).
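If you'd rather not use the portal, one hedged way to flip the setting is through the management REST API. The resource IDs below are placeholders, and the api-version shown is an assumption you should verify against current documentation:

```powershell
# PATCH the search service to turn on semantic search at the standard tier.
Invoke-AzRestMethod `
    -Method PATCH `
    -Path '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>?api-version=2021-04-01-preview' `
    -Payload '{ "properties": { "semanticSearch": "standard" } }'
```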
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Source** | **Built-in parsers** | **Workspace deployed parsers** | | | | |
-| **AppGate SDP** ip connection logs collected using Syslog |`_ASim_NetworkSession_AppGateSDP` (regular)<br> `_Im_NetworkSession_AppGateSDP` (filtering) | `ASimNetworkSessionAppGateSDP` (regular)<br> `vimNetworkSessionAppGateSDP` (filtering) |
+| **AppGate SDP** ip connection logs collected using Syslog |`_ASim_NetworkSession_AppGateSDP` (regular)<br> `_Im_NetworkSession_AppGateSDP` (filtering)<br> (Pending deployment) | `ASimNetworkSessionAppGateSDP` (regular)<br> `vimNetworkSessionAppGateSDP` (filtering) |
| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) | | **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) | | **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) | | **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
+| **Fortigate FortiOS** IP connection logs collected using Syslog |`_ASim_NetworkSession_FortinetFortiGate` (regular)<br> `_Im_NetworkSession_FortinetFortiGate` (filtering)<br> (Pending deployment) | `ASimNetworkSessionFortinetFortiGate` (regular)<br> `vimNetworkSessionFortinetFortiGate` (filtering) |
| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
| **Palo Alto PanOS traffic logs** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
Microsoft Sentinel provides the following out-of-the-box, product-specific Web S
| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
| --- | --- | --- |
|**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> |
-| **Vectra AI Streams** |`_ASim_WebSession_VectraAI` (regular)<br> `_Im_WebSession_VectraAI` (filtering) | `ASimWebSessionVectraAI` (regular)<br> `vimWebSessionVectraAI` (filtering) |
+| **Vectra AI Streams** |`_ASim_WebSession_VectraAI` (regular)<br> `_Im_WebSession_VectraAI` (filtering) <br> (Pending deployment) | `ASimWebSessionVectraAI` (regular)<br> `vimWebSessionVectraAI` (filtering) |
| **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `ASimWebSessionZscalerZIA` (regular)<br> `vimWebSessionZscalerZIA` (filtering) |
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
+
+ Title: Microsoft Sentinel skill-up training
+description: This article walks you through a Microsoft Sentinel level 400 training to help you skill up on Microsoft Sentinel. The training includes 21 modules that contain relevant product documentation, blog posts and other resources. Make sure to check the most recent links for the documentation.
+ Last updated : 06/29/2022
+# Microsoft Sentinel skill-up training
++
+This article walks you through a Microsoft Sentinel level 400 training to help you skill up on Microsoft Sentinel. The training includes 21 modules that contain relevant product documentation, blog posts and other resources. Make sure to check the most recent links for the documentation.
+
+The modules listed below are split into five parts, following the life cycle of a Security Operations Center (SOC):
+
+[Part 1: Overview](#part-1-overview)
+- [Module 0: Other learning and support options](#module-0-other-learning-and-support-options)
+- [Module 1: Get started with Microsoft Sentinel](#module-1-get-started-with-microsoft-sentinel)
+- [Module 2: How is Microsoft Sentinel used?](#module-2-how-is-microsoft-sentinel-used)
+
+[Part 2: Architecting & Deploying](#part-2-architecting--deploying)
+- [Module 3: Workspace and tenant architecture](#module-3-workspace-and-tenant-architecture)
+- [Module 4: Data collection](#module-4-data-collection)
+- [Module 5: Log Management](#module-5-log-management)
+- [Module 6: Enrichment: TI, Watchlists, and more](#module-6-enrichment-ti-watchlists-and-more)
+- [Module 7: Log transformation](#module-7-log-transformation)
+- [Module 8: Migration](#module-8-migration)
+- [Module 9: ASIM and Normalization](#module-9-advanced-siem-information-model-asim-and-normalization)
+
+[Part 3: Creating Content](#part-3-creating-content)
+- [Module 10: The Kusto Query Language (KQL)](#module-10-the-kusto-query-language-kql)
+- [Module 11: Analytics](#module-11-analytics)
+- [Module 12: Implementing SOAR](#module-12-implementing-soar)
+- [Module 13: Workbooks, reporting, and visualization](#module-13-workbooks-reporting-and-visualization)
+- [Module 14: Notebooks](#module-14-notebooks)
+- [Module 15: Use cases and solutions](#module-15-use-cases-and-solutions)
+
+[Part 4: Operating](#part-4-operating)
+- [Module 16: A day in a SOC analyst's life, incident management, and investigation](#module-16-handling-incidents)
+- [Module 17: Hunting](#module-17-hunting)
+- [Module 18: User and Entity Behavior Analytics (UEBA)](#module-18-user-and-entity-behavior-analytics-ueba)
+- [Module 19: Monitoring Microsoft Sentinel's health](#module-19-monitoring-microsoft-sentinels-health)
+
+[Part 5: Advanced](#part-5-advanced)
+- [Module 20: Extending and Integrating using Microsoft Sentinel APIs](#module-20-extending-and-integrating-using-microsoft-sentinel-apis)
+- [Module 21: Bring your own ML](#module-21-bring-your-own-ml)
+
+## Part 1: Overview
+
+### Module 0: Other learning and support options
+
+This skill-up training is a level 400 course based on the [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310). If you don't want to go as deep, or you have a specific issue to resolve, other resources might be more suitable:
+
+* While extensive, the skill-up training has to follow a script and can't expand on every topic. Read the referenced documentation for full details on each subject.
+* You can now get certified with the [SC-200: Microsoft Security Operations Analyst](/learn/certifications/exams/sc-200) certification, which covers Microsoft Sentinel. For a broader, higher-level view of the Microsoft Security suite, consider [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/learn/certifications/exams/sc-900) or [AZ-500: Microsoft Azure Security Technologies](/learn/certifications/exams/az-500).
+* Are you already skilled-up on Microsoft Sentinel? Just keep track of [what's new](whats-new.md) or join the [Private Preview](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier glimpse.
+* Do you have a feature idea you want to share with us? Let us know on the [Microsoft Sentinel user voice page](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8).
+* Premier customer? You might want the on-site (or remote) four-day _Microsoft Sentinel Fundamentals Workshop_. Contact your Customer Success Account Manager for more details.
+* Do you have a specific issue? Ask (or answer others) on the [Microsoft Sentinel Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel). As a last resort, send an e-mail to <MicrosoftSentinel@microsoft.com>.
++
+### Module 1: Get started with Microsoft Sentinel
+
+Microsoft Sentinel is a **scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution**. Microsoft Sentinel delivers security analytics and threat intelligence across the enterprise. It provides a single solution for alert detection, threat visibility, proactive hunting, and threat response. [Read more.](overview.md)
++
+If you want to get an initial overview of Microsoft Sentinel's technical capabilities, the [latest Ignite presentation](https://www.youtube.com/watch?v=kGctnb4ddAE) is a good starting point. You might also find the [Quick Start Guide to Microsoft Sentinel](https://azure.microsoft.com/resources/quick-start-guide-to-azure-sentinel/) useful (requires registration). A more detailed overview can be found in this webinar: [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmggMkcVweWOqoxuN9), [YouTube](https://youtu.be/7An7BB-CcQI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgjrN_zHpzbnfX_mX).
++
+Lastly, do you want to try it yourself? The Microsoft Sentinel All-In-One Accelerator ([blog](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-all-in-one-accelerator/ba-p/1807933), [YouTube](https://youtu.be/JB73TuX9DVs), [MP4](https://aka.ms/AzSentinel_04FEB2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhjw41XZvVSCSNIuX)) presents an easy way to get started. To deploy it yourself, review the [onboarding documentation](quickstart-onboard.md), or watch [Insight's Sentinel setup and configuration video](https://www.youtube.com/watch?v=Cyd16wVwxZc).
++
+#### Learn from users
+
+Thousands of organizations and service providers use Microsoft Sentinel. As usual with security products, most of them don't discuss it publicly. Still, some do:
+
+* You can find [public customer use cases here](https://customers.microsoft.com/en-us/home).
+* [Insight](https://www.insightcdct.com/) released a use case about [an NBA team adopting Sentinel](https://www.insightcdct.com/Resources/Case-Studies/Case-Studies/NBA-Team-Adopts-Azure-Sentinel-for-a-Modern-Securi).
+* Stuart Gregg, Security Operations Manager at ASOS, posted a much more detailed [blog post about ASOS's Microsoft Sentinel experience, focusing on hunting](https://medium.com/@stuart.gregg/proactive-phishing-with-azure-sentinel-part-1-b570fff3113).
+
+
+#### Learn from analysts
+* [Microsoft Sentinel achieved a Leader placement in the Forrester Wave, with top ranking in Strategy.](https://www.microsoft.com/security/blog/2020/12/01/azure-sentinel-achieves-a-leader-placement-in-forrester-wave-with-top-ranking-in-strategy/)
+* [Microsoft was named a Visionary in the 2021 Gartner Magic Quadrant for SIEM for Microsoft Sentinel.](https://www.microsoft.com/security/blog/2021/07/08/microsoft-named-a-visionary-in-the-2021-gartner-magic-quadrant-for-siem-for-azure-sentinel/)
++
+### Module 2: How is Microsoft Sentinel used?
+
+Many organizations use Microsoft Sentinel as their primary SIEM, and most of the modules in this course cover that use case. In this module, we present a few other ways to use Microsoft Sentinel.
+
+#### As part of the Microsoft Security stack
+
+Use Microsoft Sentinel, Microsoft Defender for Cloud, and Microsoft 365 Defender in tandem to protect your Microsoft workloads, including Windows, Azure, and Office:
+
+* Read more about [our comprehensive SIEM+XDR solution combining Microsoft Sentinel and Microsoft 365 Defender](https://techcommunity.microsoft.com/t5/azure-sentinel/whats-new-azure-sentinel-and-microsoft-365-defender-incident/ba-p/2191090).
+* Read [The Azure Security compass](https://aka.ms/azuresecuritycompass) to understand Microsoft's blueprint for your security operations.
+* Read and watch how such a setup helps detect and respond to a WebShell attack: [Blog](https://techcommunity.microsoft.com/t5/azure-sentinel/analysing-web-shell-attacks-with-azure-defender-data-in-azure/ba-p/1724130), [Video demo](https://techcommunity.microsoft.com/t5/video-hub/webshell-attack-deep-dive/m-p/1698964).
+* Watch the webinar: [Better Together | OT and IoT Attack Detection, Investigation and Response](https://youtu.be/S8DlZmzYO2s).
++
+#### To monitor your multi-cloud workloads
+
+The cloud is (still) new and often not monitored as extensively as on-premises workloads. Read this [presentation](https://techcommunity.microsoft.com/gxcuf89792/attachments/gxcuf89792/AzureSentinelBlog/243/1/L400-P2%20Use%20cases.pdf) to learn how Microsoft Sentinel can help you close the cloud monitoring gap across your clouds.
+
+#### Side by side with your existing SIEM
+
+Whether for a transition period or for the longer term, you may use Microsoft Sentinel alongside your existing SIEM, for example if you're using Microsoft Sentinel for your cloud workloads. You might also be using both with a ticketing system such as ServiceNow.
+
+For more information on migrating from another SIEM to Microsoft Sentinel, watch the migration webinar: [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), [YouTube](https://youtu.be/njXK1h9lfR4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5).
++
+There are three common scenarios for side by side deployment:
+
+* If you have a ticketing system in your SOC, a best practice is to send alerts or incidents from both SIEM systems to a ticketing system such as ServiceNow. An example is using [Microsoft Sentinel Incident Bi-directional sync with ServiceNow](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-incident-bi-directional-sync-with-servicenow/ba-p/1667771) or [sending alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
+* At least initially, many users send alerts from Microsoft Sentinel to their on-premises SIEM. Learn how in [Sending alerts enriched with supporting events from Microsoft Sentinel to third-party SIEMs](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-alerts-enriched-with-supporting-events-from-azure/ba-p/1456976).
+* Over time, as Microsoft Sentinel covers more workloads, it's typical to reverse direction and send alerts from your on-premises SIEM to Microsoft Sentinel. To do that:
+ * With Splunk, read [Send data and notable events from Splunk to Microsoft Sentinel using the Microsoft Sentinel Splunk ....](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237)
+ * With QRadar, read [Sending QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043).
+ * For ArcSight, use [CEF Forwarding](https://community.microfocus.com/t5/Logger-Forwarding-Connectors/ArcSight-Forwarding-Connector-Configuration-Guide/ta-p/1583918).
+
+You can also send the alerts from Microsoft Sentinel to your third-party SIEM or ticketing system using the [Graph Security API](/graph/security-integration). This approach is simpler, but doesn't support sending other data.
++
+#### For MSSPs
+Since it eliminates the setup cost and is location agnostic, Microsoft Sentinel is a popular choice for providing SIEM-as-a-service. You can find a [list of MISA (Microsoft Intelligent Security Association) member managed security service providers (MSSPs) using Microsoft Sentinel](https://www.microsoft.com/security/blog/2020/07/14/microsoft-intelligent-security-association-managed-security-service-providers/). Many other MSSPs, especially regional and smaller ones, use Microsoft Sentinel but aren't MISA members.
+
+To start your journey as an MSSP, you should read the [Microsoft Sentinel Technical Playbooks for MSSPs](https://aka.ms/azsentinelmssp). More information about MSSP support is included in the next module, which covers cloud architecture and multi-tenant support.
+
+## Part 2: Architecting & Deploying
+
+While the previous section offers options to start using Microsoft Sentinel in a matter of minutes, before you start a production deployment, you need to plan. This section walks you through the areas that you need to consider when architecting your solution, and provides guidelines on how to implement your design:
+
+* Workspace and tenant architecture
+* Data collection
+* Log management
+* Threat Intelligence acquisition
+
+### Module 3: Workspace and tenant architecture
+
+A Microsoft Sentinel instance is called a workspace. The workspace is the same as a Log Analytics workspace and supports any Log Analytics capability. You can think of Sentinel as a solution that adds SIEM features on top of a Log Analytics workspace.
+
+Multiple workspaces are often necessary and can act together as a single Microsoft Sentinel system. A special use case is providing service using Microsoft Sentinel, for example, by an **MSSP** (Managed Security Service Provider) or by a **Global SOC** in a large organization.
+
+To learn more about using multiple workspaces as one Microsoft Sentinel system, read [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md) or watch the Webinar: [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgkqH7MASAKIg8ql8), [YouTube](https://youtu.be/hwahlwgJPnE), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgkkYuxOITkGSI7x8).
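+
+Cross-workspace queries rely on the KQL `workspace()` expression; here's a minimal sketch of querying the same table across two workspaces (the workspace names are illustrative):
+
+```kusto
+// Combine the same table from two workspaces and report per-computer counts.
+// "soc-europe" and "soc-us" are illustrative workspace names.
+union workspace("soc-europe").SecurityEvent, workspace("soc-us").SecurityEvent
+| where TimeGenerated > ago(1h)
+| summarize EventCount = count() by Computer
+```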
+
+There are a few specific areas that require your consideration when using multiple workspaces:
+* An important driver for using multiple workspaces is **data residency**. Read more about [Microsoft Sentinel data residency](quickstart-onboard.md).
+* To deploy Microsoft Sentinel and manage content efficiently across multiple workspaces, manage Sentinel as code using **CI/CD technology**. A recommended best practice for Microsoft Sentinel is to enable continuous deployment:
+ * Read [Enable Continuous Deployment Natively with Microsoft Sentinel Repositories!](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enable-continuous-deployment-natively-with-microsoft-sentinel/ba-p/2929413)
+* When managing multiple workspaces as an MSSP, you may want to [protect the MSSP's intellectual property in Microsoft Sentinel](mssp-protect-intellectual-property.md).
+
+The [Microsoft Sentinel Technical Playbook for MSSPs](https://aka.ms/azsentinelmssp) provides detailed guidelines for many of those topics, and is also useful for large organizations, not just MSSPs.
+
+### Module 4: Data collection
+
+The foundation of a SIEM is collecting telemetry: events, alerts, and contextual enrichment information such as Threat Intelligence, vulnerability data, and asset information. You can find a list of sources you can connect here:
+* [Microsoft Sentinel data connectors](connect-data-sources.md)
+* [Find your Microsoft Sentinel data connector](data-connectors-reference.md) to see all the supported, out-of-the-box data connectors. You'll find links to generic deployment procedures, and extra steps required for specific connectors.
+* Data collection scenarios: Learn about collection methods such as [Logstash/CEF/WEF](connect-logstash.md). Other common scenarios are restricting permissions to tables, log filtering, collecting logs from AWS or GCP, ingesting raw Office 365 logs, and so on. All can be found in this webinar: [YouTube](https://www.youtube.com/watch?v=FStpHl0NRM8), [MP4](https://aka.ms/AS_LogCollectionScenarios_V3.0_18MAR2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhx-_hfIf0Ng3aM_G).
+
+The first piece of information you'll see for each connector is its **data ingestion method**. The method that appears there will be a link to one of the following generic deployment procedures, which contain most of the information you'll need to connect your data sources to Microsoft Sentinel:
+
+|Data ingestion method | Linked article with instructions |
+| -- | -- |
+| Azure service-to-service integration | [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md) |
+| Common Event Format (CEF) over Syslog | [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md) |
+| Microsoft Sentinel Data Collector API | [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md) |
+| Azure Functions and the REST API | [Use Azure Functions to connect Microsoft Sentinel to your data source](connect-azure-functions-template.md) |
+| Syslog | [Collect data from Linux-based sources using Syslog](connect-syslog.md) |
+| Custom logs | [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md) |
+
+If your source isn't available, you can [create a custom connector](create-custom-connector.md). Custom connectors use the ingestion API and therefore are similar to direct sources. Custom connectors are most often implemented using Logic Apps, offering a codeless option, or Azure Functions.
+
+### Module 5: Log management
+
+While 'how many workspaces and which ones to use' is the first architecture question to ask when configuring Sentinel, there are other log management architectural decisions to consider:
+* Where and how long to retain data
+* How to best manage access to data and secure it
+
+#### Ingest, Archive, Search, and Restore Data within Microsoft Sentinel
+
+Watch the _Manage Your Log Lifecycle with New Methods for Ingestion, Archival, Search, and Restoration_ webinar [here](https://www.youtube.com/watch?v=LgGpSJxUGoc&ab_channel=MicrosoftSecurityCommunity).
++
+This suite of features contains:
+
+* **Basic ingestion tier**: a new pricing tier for Azure Log Analytics that allows logs to be ingested at a lower cost. This data is retained in the workspace for only eight days total.
+* **Archive tier**: Azure Log Analytics has expanded its retention capability from two years to seven years. The new tier lets you retain data for up to seven years in a low-cost archived state.
+* **Search jobs**: search tasks that run limited KQL to find and return all logs relevant to the search. These jobs search data across the analytics tier, basic tier, and archived data.
+* **Data restoration**: a new feature that lets you pick a data table and a time range, and restore data to the workspace via a restore table.
+
+Learn more about these new features in [this article](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/ingest-archive-search-and-restore-data-in-microsoft-sentinel/ba-p/3195126).
+
+#### Alternative retention options outside of the Microsoft Sentinel platform
+
+If you want to retain data for _more than two years_, or _reduce the retention cost_, you can consider using Azure Data Explorer for long-term retention of Microsoft Sentinel logs: [Webinar Slides](https://onedrive.live.com/?authkey=%21AGe3Zue4W0xYo4s&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21963&parId=66C31D2DBF8E0F71%21954&o=OneUp), [Webinar Recording](https://www.youtube.com/watch?v=UO8zeTxgeVw&ab_channel=MicrosoftSecurityCommunity), [Blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-azure-data-explorer-for-long-term-retention-of-microsoft/ba-p/1883947).
+
+Need more depth? Watch the _Improving the Breadth and Coverage of Threat Hunting with ADX Support, More Entity Types, and Updated MITRE Integration_ webinar [here](https://www.youtube.com/watch?v=5coYjlw2Qqs&ab_channel=MicrosoftSecurityCommunity).
+
+If you prefer another long-term retention solution, [export from Microsoft Sentinel / Log Analytics to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export) or [move logs to long-term storage using Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The advantage of the latter is that it can export historical data.
+Lastly, you can set fine-grained retention periods using [table-level retention settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). More details [here](../azure-monitor/logs/data-retention-archive.md).
++
+#### Log Security
+
+* Use [resource RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/controlling-access-to-azure-sentinel-data-resource-rbac/ba-p/1301463) or [Table Level RBAC](../azure-monitor/logs/manage-access.md) to enable multiple teams to use a single workspace.
+* If needed, [delete customer content from your workspaces](../azure-monitor/logs/personal-data-mgmt.md).
+* Learn how to [audit workspace queries and Microsoft Sentinel use, using alerts workbooks and queries](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/auditing-microsoft-sentinel-activities/ba-p/1718328).
+* Use [private links](../azure-monitor/logs/private-link-security.md) to ensure logs never leave your private network.
++
+#### Dedicated cluster
+
+Use a [dedicated workspace cluster](../azure-monitor/logs/logs-dedicated-clusters.md) if your projected data ingestion is around or more than 500 GB per day. A dedicated cluster enables you to secure resources for your Microsoft Sentinel data, which enables better query performance for large data sets.
++
+### Module 6: Enrichment: TI, Watchlists, and more
+
+One of the important functions of a SIEM is to apply contextual information to the event stream, enabling detection, alert prioritization, and incident investigation. Contextual information includes, for example, threat intelligence, IP intelligence, host and user information, and watchlists.
+
+Microsoft Sentinel provides comprehensive tools to import, manage, and use threat intelligence. For other types of contextual information, Microsoft Sentinel provides watchlists and other alternative solutions.
+
+#### Threat Intelligence
+
+Threat Intelligence is an important building block of a SIEM. Watch the _Explore the Power of Threat Intelligence in Microsoft Sentinel_ webinar [here](https://www.youtube.com/watch?v=i29Uzg6cLKc&ab_channel=MicrosoftSecurityCommunity).
+
+In Microsoft Sentinel, you can integrate Threat Intelligence (TI) using the built-in connectors from TAXII servers or through the Microsoft Graph Security API. Read how in the [documentation](threat-intelligence-integration.md). For more information about importing Threat Intelligence, see the data collection modules.
+
+Once imported, [Threat Intelligence](understand-threat-intelligence.md) is used extensively throughout Microsoft Sentinel. The following features focus on using Threat Intelligence:
+
+* View and manage the imported threat intelligence in **Logs** in the new Threat Intelligence area of Microsoft Sentinel.
+* Use the [built-in TI Analytics rule templates](understand-threat-intelligence.md#detect-threats-with-threat-indicator-based-analytics) to generate security alerts and incidents using your imported threat intelligence (a minimal sketch of this kind of matching follows this list).
+* [Visualize key information about your threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators) in Microsoft Sentinel with the Threat Intelligence workbook.
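+
+As a minimal sketch of the indicator matching these features build on, the query below flags sign-ins from known-bad IP addresses. The `ThreatIntelligenceIndicator` field names follow the imported indicator table; the sign-in join target is an illustrative choice:
+
+```kusto
+// Build a list of active, unexpired IP indicators ...
+let TI_IPs = ThreatIntelligenceIndicator
+    | where Active == true and ExpirationDateTime > now()
+    | where isnotempty(NetworkIP)
+    | distinct NetworkIP;
+// ... and flag sign-ins coming from any of them.
+SigninLogs
+| where TimeGenerated > ago(1d)
+| where IPAddress in (TI_IPs)
+| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
+```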
+
+Watch the **Automate Your Microsoft Sentinel Triage Efforts with RiskIQ Threat
+Intelligence** webinar: [YouTube](https://youtu.be/8vTVKitim5c), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkngW7psV4janJrVE?e=UkmgWk).
+
+Short on time? Watch the [Ignite session](https://www.youtube.com/watch?v=RLt05JaOnHc) (28 minutes).
+
+Want to go in-depth? Watch the webinar: [YouTube](https://youtu.be/zfoVe4iarto), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmgi8zazMLahRyycPf), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgi0pABN930p56id_).
+
+#### Watchlists and other lookup mechanisms
+
+To import and manage any type of contextual information, Microsoft Sentinel provides watchlists. Watchlists enable you to upload data tables in CSV format and use them in your KQL queries. Read more about watchlists in the [documentation](watchlists.md), or watch the _Use Watchlists to Manage Alerts, Reduce Alert Fatigue and Improve SOC Efficiency_ webinar: [YouTube](https://youtu.be/148mr8anqtI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
+
+Use watchlists to help you with the following scenarios:
+
+* **Investigate threats and respond to incidents quickly** with the rapid import of IP addresses, file hashes, and other data from CSV files. After you import the data, use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
+
+* **Import business data as a watchlist**. For example, import user lists with privileged system access, or terminated employees. Then, use the watchlist to create allowlists and blocklists to detect or prevent those users from logging in to the network.
+
+* **Reduce alert fatigue**. Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
+
+* **Enrich event data**. Use watchlists to enrich your event data with name-value combinations derived from external data sources.
+
+In addition to watchlists, you can use the KQL externaldata operator, custom logs, and KQL functions to manage and query context information. Each of the four methods has its pros and cons; you can read more about the comparison in the blog post ["Implementing Lookups in Microsoft Sentinel"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/implementing-lookups-in-azure-sentinel/ba-p/1091306). While each method is different, using the resulting information in your queries is similar, enabling easy switching between them.
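+
+For example, here's a minimal sketch of the first two methods side by side; the watchlist alias, column names, and CSV URL are illustrative:
+
+```kusto
+// Option 1: a watchlist lookup, assuming a watchlist with the alias
+// "HighValueAssets" that includes a HostName column.
+let HighValueAssets = _GetWatchlist('HighValueAssets') | project HostName = tostring(HostName);
+SecurityEvent
+| where Computer in (HighValueAssets)
+// Option 2: the same lookup with the externaldata operator, reading the
+// list from a (hypothetical) CSV file instead of a watchlist:
+// let HighValueAssets = externaldata(HostName: string)
+//     [@"https://example.com/high-value-assets.csv"] with (format="csv");
+```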
+
+Read ["Utilize Watchlists to Drive Efficiency During Microsoft Sentinel Investigations"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/utilize-watchlists-to-drive-efficiency-during-microsoft-sentinel/ba-p/2090711) for ideas on using Watchlist outside of analytic rules.
+
+Watch the **Use Watchlists to Manage Alerts, Reduce Alert Fatigue and improve
+SOC efficiency** webinar. [YouTube](https://youtu.be/148mr8anqtI), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk1qPwVKXkyKwqsM5?e=jLlNmP).
++
+### Module 7: Log transformation
+
+Microsoft Sentinel supports two new features for data ingestion and transformation. These features, provided by Log Analytics, act on your data even before it's stored in your workspace.
+
+* The first of these features is the [**custom logs API.**](../azure-monitor/logs/custom-logs-overview.md) It allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. The actual ingestion of these logs can be done by direct API calls. You can use Log Analytics [data collection rules (DCRs)](../azure-monitor/essentials/data-collection-rule-overview.md) to define and configure these workflows.
+
+* The second feature is [**ingestion-time data transformation for standard logs**](../azure-monitor/logs/ingestion-time-transformations.md). It uses [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information (a minimal sketch follows this list). Data transformation can be configured at ingestion time for the following types of built-in data connectors:
+ * AMA-based data connectors (based on the new Azure Monitor Agent)
+ * MMA-based data connectors (based on the legacy Log Analytics Agent)
+ * Data connectors that use Diagnostic settings
+ * [Service-to-service data connectors](data-connectors-reference.md)
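+
+As a minimal sketch, an ingestion-time transformation is itself a short KQL query that runs over the incoming stream, referenced as `source`; the column names here are illustrative:
+
+```kusto
+// Runs at ingestion time, before the data is stored in the workspace.
+source
+// Filter out irrelevant data:
+| where SeverityLevel != "Verbose"
+// Mask a sensitive field, keeping only the first octet of the IP:
+| extend ClientIp = strcat(tostring(split(ClientIp, ".")[0]), ".x.x.x")
+```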
+
+For more information, see:
+* [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md)
+* [Custom data ingestion and transformation in Microsoft Sentinel](configure-data-transformation.md)
+* [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+
+### Module 8: Migration
+
+In many (if not most) cases, you already have a SIEM and need to migrate to Microsoft Sentinel. While it may be a good time to start over, and rethink your SIEM implementation, it makes sense to utilize some of the assets you already built in your current implementation. Watch the webinar describing best practices for converting detection rules from Splunk, QRadar, and ArcSight to Azure Sentinel Rules: [YouTube](https://youtu.be/njXK1h9lfR4), [MP4](https://aka.ms/AzSentinel_DetectionRules_19FEB21_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmhlsYDm99KLbNWlq5), [blog](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+You might also be interested in some of the following resources:
+
+* [Splunk SPL to KQL mappings](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md)
+* [ArcSight and QRadar rule mapping samples](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/Rule%20Logic%20Mappings.md)
+
+### Module 9: Advanced SIEM Information Model (ASIM) and Normalization
+
+Working with various data types and tables together presents a challenge. You must become familiar with different data types and schemas, and write and use a unique set of analytics rules, workbooks, and hunting queries for each. Correlating between the different data types necessary for investigation and hunting can also be tricky.
+
+The **Advanced SIEM Information Model (ASIM)** provides a seamless experience for handling various sources in uniform, normalized views. ASIM aligns with the Open-Source Security Events Metadata (OSSEM) common information model, promoting vendor-agnostic, industry-wide normalization. Watch the _Advanced SIEM Information Model (ASIM): Now built into Microsoft Sentinel_ webinar: YouTube, Deck.
+
+The current implementation is based on query time normalization using KQL functions:
+
+* **Normalized schemas** cover standard sets of predictable event types that are easy to work with and build unified capabilities. The schema defines which fields should represent an event, a normalized column naming convention, and a standard format for the field values.
+ * Watch the _Understanding Normalization in Microsoft Sentinel_ webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
+ * Watch the _Deep Dive into Microsoft Sentinel Normalizing Parsers and Normalized Content_ webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
+* **Parsers** map existing data to the normalized schemas. Parsers are implemented using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions); a minimal sketch follows this list. Watch the _Extend and Manage ASIM: Developing, Testing and Deploying Parsers_ webinar: [YouTube](https://youtu.be/NHLdcuJNqKw), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmk0_k0zs21rL7euHp?e=5XkTnW).
+* **Content** for each normalized schema includes analytics rules, workbooks, and hunting queries. This content works on any normalized data without the need to create source-specific content.
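+
+As a minimal sketch, a parser is essentially a saved KQL function that projects a source's columns onto the normalized names; the custom table and its columns below are hypothetical, while the output names follow the ASIM NetworkSession schema:
+
+```kusto
+// Illustrative parser body mapping a hypothetical custom table
+// (MyVendorFirewall_CL) to ASIM-style network-session column names.
+MyVendorFirewall_CL
+| project
+    TimeGenerated,
+    SrcIpAddr = tostring(src_ip_s),
+    DstIpAddr = tostring(dst_ip_s),
+    DstPortNumber = toint(dst_port_d),
+    EventResult = iff(action_s == "allow", "Success", "Failure")
+```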
+
+
+Using ASIM provides the following benefits:
+
+* **Cross-source detection**: Normalized analytic rules work across sources, on-premises and in the cloud, detecting attacks such as brute force or impossible travel across systems, including Okta, AWS, and Azure.
+* **Source-agnostic content**: The coverage of built-in and custom content using ASIM automatically expands to any source that supports ASIM, even if the source was added after the content was created. For example, process event analytics support any source that a customer may use to bring in the data, including Microsoft Defender for Endpoint, Windows Events, and Sysmon. We're ready to add [Sysmon for Linux](https://twitter.com/markrussinovich/status/1283039153920368651?lang=en) and WEF once released!
+* **Support for your custom sources in built-in analytics**
+* **Ease of use**: Once an analyst learns ASIM, writing queries is much simpler because the field names are always the same.
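+
+For example, a single normalized query can count denied sessions across every connected firewall, regardless of vendor. This minimal sketch assumes the built-in `_Im_NetworkSession` filtering parser; the time-range parameters and column names follow the ASIM NetworkSession schema:
+
+```kusto
+// One query over all sources that have an ASIM NetworkSession parser.
+_Im_NetworkSession(starttime=ago(1d), endtime=now())
+| where DvcAction == "Deny"
+| summarize DeniedSessions = count() by SrcIpAddr
+| top 10 by DeniedSessions
+```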
++
+#### To learn more about ASIM
+
+* Watch the overview webinar: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
+* Watch the _Deep Dive into Microsoft Sentinel Normalizing Parsers and Normalized Content_ webinar: [YouTube](https://www.youtube.com/watch?v=zaqblyjQW6k), [MP3](https://aka.ms/AS_Normalizing_Parsers_and_Normalized_Content_11AUG2021_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM).
+* Watch the _Turbocharging ASIM: Making Sure Normalization Helps Performance Rather Than Impacting It_ webinar: [YouTube](https://youtu.be/-dg_0NBIoak), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmjk5AfH32XSdoVzTJ?e=a6hCHb), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjnQITNn35QafW5V2?e=GnCDkA).
+* Read the [documentation](https://aka.ms/AzSentinelNormalization).
+
+#### To deploy ASIM
+
+* Deploy the parsers from the folders starting with "ASIM*" in the [parsers](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers) folder on GitHub.
+* Activate analytic rules that use ASIM. Search for "normal" in the template gallery to find some of them. To get the full list, use this [GitHub search](https://github.com/search?q=ASIM+repo%3AAzure%2FAzure-Sentinel+path%3A%2Fdetections&type=Code&ref=advsearch&l=&l=).
+
+#### To use ASIM
+
+* Use the [ASIM hunting queries from GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries).
+* Use ASIM queries when using KQL in the log screen.
+* Write your own analytic rules using ASIM or [convert existing ones](normalization.md).
+* Write [parsers](normalization.md#asim-components) for your custom sources to make them ASIM-compatible, and take part in built-in analytics.
+
+## Part 3: Creating Content
+
+What is Microsoft Sentinel's content?
+
+Microsoft Sentinel's security value is a combination of its built-in capabilities and your ability to create custom capabilities and customize the built-in ones. Built-in capabilities include UEBA, machine learning, and out-of-the-box analytics rules. Customized capabilities are often referred to as "content", and include analytic rules, hunting queries, workbooks, playbooks, and more.
+
+In this section, we grouped the modules that help you learn how to create such content, or modify built-in content to your needs. We start with KQL, the lingua franca of Microsoft Sentinel. The following modules each discuss one of the content building blocks, such as rules, playbooks, and workbooks. We wrap up by discussing use cases, which encompass elements of different types that address specific security goals, such as threat detection, hunting, or governance.
+
+### Module 10: The Kusto Query Language (KQL)
+
+Most Microsoft Sentinel capabilities use [KQL, the Kusto Query Language](/azure/data-explorer/kusto/query/). When you search in your logs, write rules, create hunting queries, or design workbooks, you use KQL.
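+
+If KQL is new to you, a query reads top to bottom as a pipeline of operators. As a minimal sketch (assuming Azure AD sign-in logs are connected), this query finds the accounts with the most failed sign-ins over the last day:
+
+```kusto
+SigninLogs
+| where TimeGenerated > ago(1d)
+| where ResultType != "0"            // non-zero result codes are failed sign-ins
+| summarize FailedCount = count() by UserPrincipalName
+| top 10 by FailedCount
+```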
+
+The next section on writing rules explains how to use KQL in the specific context of SIEM rules.
+
+#### The recommended journey for learning Sentinel KQL
+* [Pluralsight KQL course](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch) - the basics
+* The Microsoft Sentinel KQL Lab: an interactive lab teaching KQL, focusing on what you need for Microsoft Sentinel:
+ * [Learning module (SC-200 part 4)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
+ * [Presentation](https://onedrive.live.com/?authkey=%21AJRxX475AhXGQBE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21740&parId=66C31D2DBF8E0F71%21446&o=OneUp), [Lab URL](https://aka.ms/lademo)
+ * a [Jupyter Notebooks version](https://github.com/jjsantanna/azure_sentinel_learn_kql_lab/blob/master/azure_sentinel_learn_kql_lab.ipynb), which lets you test the queries within the notebook.
+ * Learning webinar: [YouTube](https://youtu.be/EDCBLULjtCM), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmglwAjUjmYy2Qn5J-);
+ * Reviewing lab solutions webinar: [YouTube](https://youtu.be/YKD_OFLMpf8), [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmg0EKIi5gwXyccB44?e=sF6UG5)
+* [Pluralsight Advanced KQL course](https://www.pluralsight.com/courses/microsoft-azure-data-explorer-advanced-query-capabilities)
+* _Optimizing Azure Sentinel KQL queries performance_: [YouTube](https://youtu.be/jN1Cz0JcLYU), [MP4](https://aka.ms/AzS_09SEP20_MP4), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmg2imjIS8NABc26b-?e=rXZrR5).
+* Using ASIM in your KQL queries: [YouTube](https://www.youtube.com/watch?v=WoGD-JeC7ng), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+* _KQL Framework for Microsoft Sentinel - Empowering You to Become KQL-Savvy:_ [YouTube](https://youtu.be/j7BQvJ-Qx_k), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmkgqKSV-m1QWgkzKT?e=QAilwu).
+
+You might also find the following references useful as you learn KQL:
+
+* [The KQL Cheat Sheet](https://www.mbsecure.nl/blog/2019/12/kql-cheat-sheet)
+* [Query optimization best practices](../azure-monitor/logs/query-optimization.md)
+
+### Module 11: Analytics
+
+#### Writing Scheduled Analytics Rules
+
+Microsoft Sentinel enables you to use [built-in rule templates](detect-threats-built-in.md), customize the templates for your environment, or create custom rules. The core of the rules is a KQL query; however, there's much more than that to configure in a rule.
+
+To learn the procedure for creating rules, read the [documentation](detect-threats-custom.md). To learn how to write rules, that is, what should go into a rule, focusing on KQL for rules, watch the webinar: [MP4](https://1drv.ms/v/s%21AnEPjr8tHcNmghlWrlBCPKwT5WTT), [YouTube](https://youtu.be/pJjljBT4ipQ), [Presentation](https://1drv.ms/b/s!AnEPjr8tHcNmgmffNHf0wqmNEqdx).
+
+SIEM analytics rules have specific patterns. Learn how to implement rules and write KQL for those patterns:
+* **Correlation rules**: [using lists and the "in" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-active-lists-out-make-list-in/ba-p/1029225) or using the ["join" operator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500); a minimal sketch of the list-based pattern follows this list
+* **Aggregation**: see using lists and the "in" operator above, or a more [advanced pattern handling sliding windows](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/handling-sliding-windows-in-azure-sentinel-rules/ba-p/1505394)
+* **Lookups**: regular lookups, or approximate, partial, and combined lookups
+* **Handling false positives**
+* **Delayed events**: a fact of life in any SIEM, and hard to tackle. Microsoft Sentinel can help you mitigate delays in your rules.
+* Using KQL functions as **building blocks**: for example, enriching Windows security events with a parameterized function
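+
+Here's a minimal sketch of the first pattern above, a list-based correlation using `let` and the `in` operator; the table, threshold, and time window are illustrative:
+
+```kusto
+// Accounts with many failed sign-ins in the last hour ...
+let SuspiciousAccounts = SigninLogs
+    | where TimeGenerated > ago(1h)
+    | where ResultType != "0"
+    | summarize Failures = count() by UserPrincipalName
+    | where Failures > 20
+    | project UserPrincipalName;
+// ... correlated with successful sign-ins by the same accounts in that window.
+SigninLogs
+| where TimeGenerated > ago(1h)
+| where ResultType == "0"
+| where UserPrincipalName in (SuspiciousAccounts)
+```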
+
+The blog post ["Blob and File Storage Investigations"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-ignite-2021-blob-and-file-storage-investigations/ba-p/2175138) provides a step-by-step example of writing a useful analytic rule.
+
+#### Using Built-in Analytics
+
+Before embarking on your own rule writing, you should take advantage of the built-in analytics capabilities. They don't require much from you, but it's worthwhile learning about them:
+
+* Use the [built-in scheduled rule templates](detect-threats-built-in.md). You can tune those templates by modifying them the same way you'd edit any scheduled rule. Make sure to deploy the templates for the data connectors you connect, as listed in each data connector's "next steps" tab.
+* Learn more about Microsoft Sentinel's [Machine learning capabilities](bring-your-own-ml.md): [MP4](https://onedrive.live.com/?authkey=%21ANHkqv1CC1rX0JE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21772&parId=66C31D2DBF8E0F71%21770&o=OneUp), [YouTube](https://www.youtube.com/watch?v=DxZXHvq1jOs&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21ACovlR%2DY24o1rzU&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21773&parId=66C31D2DBF8E0F71%21770&o=OneUp)
+* Find the list of Microsoft Sentinel's [Advanced multi-stage attack detections ("Fusion")](fusion.md) that are enabled by default.
+* Watch the Fusion ML Detections with Scheduled Analytics Rules webinar: [YouTube](https://www.youtube.com/watch?v=Ee7gBAQ2Dzc), [MP4](https://onedrive.live.com/?authkey=%21AJzpplg3agpLKdo&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211663&parId=66C31D2DBF8E0F71%211654&o=OneUp), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%211674&ithint=file%2Cpdf&authkey=%21AD%5F1AN14N3W592M).
+* Learn more about Azure Sentinel's built-in SOC-ML anomalies [here](soc-ml-anomalies.md).
+* Watch the customized SOC-ML anomalies and how to use them webinar here: [YouTube](https://www.youtube.com/watch?v=z-suDfFgSsk&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AJVEGsR4ym8hVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211742&parId=66C31D2DBF8E0F71%211720&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AFqylaqbAGZAIfA&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211729&parId=66C31D2DBF8E0F71%211720&o=OneUp).
+* Watch the Fusion ML Detections for Emerging Threats & Configuration UI webinar here: [YouTube](https://www.youtube.com/watch?v=bTDp41yMGdk), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212287&ithint=file%2Cpdf&authkey=%21AIJICOTqjY7bszE).
+
+### Module 12: Implementing SOAR
+
+In modern SIEMs such as Microsoft Sentinel, SOAR (Security Orchestration, Automation, and Response) comprises the entire process from the moment an incident is triggered until it's resolved. This process starts with an [incident investigation](investigate-cases.md) and continues with an [automated response](tutorial-respond-threats-playbook.md). The blog post ["How to use Microsoft Sentinel for Incident Response, Orchestration and Automation"](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397) provides an overview of common use cases for SOAR.
+
+[Automation rules](automate-incident-handling-with-automation-rules.md) are the starting point for Microsoft Sentinel automation. They provide a lightweight method for central automated handling of incidents, including suppression, [false-positive handling](false-positives.md), and automatic assignment.
+
+To provide robust, workflow-based automation capabilities, automation rules use [Logic App playbooks](automate-responses-with-playbooks.md):
+* Watch the _Unleash the Automation Jedi Tricks and Build Logic Apps Playbooks like a Boss_ webinar: [YouTube](https://www.youtube.com/watch?v=G6TIzJK8XBA&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AMHoD01Fnv0Nkeg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21513&parId=66C31D2DBF8E0F71%21511&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AJK2W6MaFrzSzpw&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21514&parId=66C31D2DBF8E0F71%21511&o=OneUp).
+* Read about [Logic Apps](../logic-apps/logic-apps-overview.md), which is the core technology driving Microsoft Sentinel playbooks.
+* [The Microsoft Sentinel Logic App connector](/connectors/azuresentinel/) is the link between Logic Apps and Microsoft Sentinel.
+
+You can find dozens of useful playbooks in the [Playbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) on the [Microsoft Sentinel GitHub](https://github.com/Azure/Azure-Sentinel), or read [_A playbook using a watchlist to inform a subscription owner about an alert_](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/playbooks-amp-watchlists-part-1-inform-the-subscription-owner/ba-p/1768917) for a playbook walkthrough.
+
+### Module 13: Workbooks, reporting, and visualization
+
+#### Workbooks
+
+As the nerve center of your SOC, you need Microsoft Sentinel to visualize the information it collects and produces. Use workbooks to visualize data in Microsoft Sentinel.
+
+* To learn how to create workbooks, read the [documentation](../azure-monitor/visualize/workbooks-overview.md) or watch Billy York's [Workbooks training](https://www.youtube.com/watch?v=iGiPpD_-10M&ab_channel=FestiveTechCalendar) (and [accompanying text](https://www.cloudsma.com/2019/12/azure-advent-calendar-azure-monitor-workbooks/)).
+* These resources aren't Microsoft Sentinel-specific and apply to Azure Monitor workbooks in general. To learn more about workbooks in Microsoft Sentinel, watch the webinar: [YouTube](https://www.youtube.com/watch?v=7eYNaYSsk1A&list=PLmAptfqzxVEUD7-w180kVApknWHJCXf0j&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALoa5KFEhBq2DyQ&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21373&parId=66C31D2DBF8E0F71%21372&o=OneUp), [Presentation](https://onedrive.live.com/view.aspx?resid=66C31D2DBF8E0F71!374&ithint=file%2cpptx&authkey=!AD5hvwtCTeHvQLQ), and read the [documentation](monitor-your-data.md).
+
+Workbooks can be interactive and enable much more than just charting. With workbooks, you can create apps or extension modules for Microsoft Sentinel to complement its built-in functionality. We also use workbooks to extend the features of Microsoft Sentinel. A few examples of such apps that you can both use and learn from:
+* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach for investigating incidents.
+* [Graph Visualization of External Teams Collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847) enables hunting for risky Teams use.
+* The [users' travel map workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-to-follow-a-users-travel-and-map-their/ba-p/981716) allows investigating geo-location alerts.
+
+* The insecure protocols workbook ([Implementation Guide](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564), [recent enhancements](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-insecure-protocols-workbook-reimagined/ba-p/1558375), and [overview video](https://www.youtube.com/watch?v=xzHDWbBX6h8&list=PLmAptfqzxVEWkrUwV-B1Ob3qW-QPW_Ydu&index=9&ab_channel=MicrosoftSecurityCommunity)) lets you identify the use of insecure protocols in your network.
+
+* Lastly, learn how to [integrate information from any source using API calls in a workbook](https://techcommunity.microsoft.com/t5/azure-sentinel/using-the-sentinel-api-to-view-data-in-a-workbook/ba-p/1386436).
+
+You can find dozens of workbooks in the [Workbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Workbooks) in the [Microsoft Sentinel GitHub](https://github.com/Azure/Azure-Sentinel). Some of them are available in the Microsoft Sentinel workbooks gallery as well.
+
+#### Reporting and other visualization options
+
+Workbooks can serve for reporting. For more advanced reporting capabilities, such as report scheduling and distribution or pivot tables, you might want to use:
+* Power BI, which natively [integrates with Log Analytics and Sentinel](../azure-monitor/logs/log-powerbi.md).
+* Excel, which can use [Log Analytics and Sentinel as the data source](../azure-monitor/logs/log-excel.md) (and see [video](https://www.youtube.com/watch?v=Rx7rJhjzTZA) on how).
+* Jupyter notebooks, covered later in the hunting module, are also a great visualization tool.
+
+### Module 14: Notebooks
+
+Jupyter notebooks are fully integrated with Microsoft Sentinel. While considered an important tool in the hunter's tool chest and discussed in the webinars in the hunting section below, their value is much broader. Notebooks can serve as an advanced visualization tool, as an investigation guide, and as a vehicle for sophisticated automation.
+
+To understand them better, watch the [Introduction to notebooks video](https://www.youtube.com/watch?v=TgRRJeoyAYw&ab_channel=MicrosoftSecurityCommunity). Get started using the Notebooks webinar ([YouTube](https://www.youtube.com/watch?v=rewdNeX6H94&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ALXve0rEAhZOuP4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21778&parId=66C31D2DBF8E0F71%21776&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AEQpzVDAwzzen30&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21779&parId=66C31D2DBF8E0F71%21776&o=OneUp)) or read the [documentation](notebooks.md). The [Microsoft Sentinel Notebooks Ninja series](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/becoming-a-microsoft-sentinel-notebooks-ninja-the-series/ba-p/2693491) is an ongoing training series to upskill you in Notebooks.
+
+An important part of the integration is implemented by [MSTICPY](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/msticpy-python-defender-tools/ba-p/648929), which is a Python library developed by our research team to be used with Jupyter notebooks. It adds Microsoft Sentinel interfaces and sophisticated security capabilities to your notebooks.
+* [MSTICPy Fundamentals to Build Your Own Notebooks](https://www.youtube.com/watch?v=S0knTOnA2Rk&ab_channel=MicrosoftSecurityCommunity)
+* [MSTICPy Intermediate to Build Your Own Notebooks](https://www.youtube.com/watch?v=Rpj-FS_0Wqg&ab_channel=MicrosoftSecurityCommunity)
+
+### Module 15: Use cases and solutions
+
+Connectors, rules, playbooks, and workbooks enable you to implement **use cases**: the SIEM term for a content pack intended to detect and respond to a threat. You can deploy Sentinel's built-in use cases by activating the suggested rules when connecting each connector. A **solution** is a **group of use cases** addressing a specific threat domain.
+
+The Webinar **"Tackling Identity"**([YouTube](https://www.youtube.com/watch?v=BcxiY32famg&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AFsVrhZwut8EnB4&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21284&parId=66C31D2DBF8E0F71%21282&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21ACSAvdeLB7JfAX8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21283&parId=66C31D2DBF8E0F71%21282&o=OneUp)) explains what a use case is, how to approach its design, and presents several use cases that collectively address identity threats.
+
+Another relevant solution area is **protecting remote work**. Watch our [Ignite session on protecting remote work](https://www.youtube.com/watch?v=09JfbjQdzpg&ab_channel=MicrosoftSecurity), and read more on the specific use cases:
+* [Microsoft Teams hunting use cases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/protecting-your-teams-with-azure-sentinel/ba-p/1265761) and [Graph Visualization of External Microsoft Teams Collaborations](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/graph-visualization-of-external-teams-collaborations-in-azure/ba-p/1356847)
+* [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/monitoring-zoom-with-azure-sentinel/ba-p/1341516): custom connectors, analytic rules, and hunting queries.
+* [Monitoring Azure Virtual Desktop with Microsoft Sentinel](../virtual-desktop/diagnostics-log-analytics.md): use Windows security events, Azure AD sign-in logs, Microsoft Defender for Endpoint, and AVD diagnostics logs to detect and hunt for AVD threats.
+* [Monitor Microsoft Endpoint Manager/Intune](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/secure-working-from-home-deep-insights-at-enrolled-mem-assets/ba-p/1424255), using queries and workbooks.
+
+And lastly, focusing on recent attacks, learn how to [monitor the software supply chain with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/monitoring-the-software-supply-chain-with-azure-sentinel/ba-p/2176463).
+
+**Microsoft Sentinel solutions** provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Microsoft Sentinel. Read more about them [here](sentinel-solutions.md), and watch the webinar about how to create your own [here](https://www.youtube.com/watch?v=oYTgaTh_NOU&ab_channel=MicrosoftSecurityCommunity). For more about Sentinel content management in general, watch the Microsoft Sentinel Content Management webinar: [YouTube](https://www.youtube.com/watch?v=oYTgaTh_NOU&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%212201&ithint=file%2Cpdf&authkey=%21AIdsDXF3iluXd94).
+
+## Part 4: Operating
+
+### Module 16: Handling incidents
+
+After building your SOC, you need to start using it. The "A day in a SOC analyst's life" webinar ([YouTube](https://www.youtube.com/watch?v=HloK6Ay4h1M&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21ACD%5F1nY2ND8MOmg&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21273&parId=66C31D2DBF8E0F71%21271&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AAvOR9OSD51OZ8c&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21272&parId=66C31D2DBF8E0F71%21271&o=OneUp)) walks you through using Microsoft Sentinel in the SOC to **triage**, **investigate**, and **respond** to incidents.
+
+[Integrating with Microsoft Teams directly from Microsoft Sentinel](collaborate-in-microsoft-teams.md) enables your teams to collaborate seamlessly across the organization, and with external stakeholders. Watch the _Decrease Your SOC's MTTR (Mean Time to Respond) by Integrating Microsoft Sentinel with Microsoft Teams_ webinar [here](https://www.youtube.com/watch?v=0REgc2jB560&ab_channel=MicrosoftSecurityCommunity).
+
+You might also want to read the [documentation article on incident investigation](investigate-cases.md). As part of the investigation, you'll also use the [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages) to get more information about entities related to your incident or identified as part of your investigation.
+
+**Incident investigation** in Microsoft Sentinel extends beyond the core incident investigation functionality. You can build **additional investigation tools** using Workbooks and Notebooks (the latter are discussed later, under _Hunting_), or modify existing ones to your specific needs. Examples include:
+* The [Investigation Insights Workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-investigation-insights-workbook/ba-p/1816903) provides an alternative approach for investigating incidents.
+* Notebooks enhance the investigation experience. Read [_Why Use Jupyter for Security Investigations?_](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/why-use-jupyter-for-security-investigations/ba-p/475729) and learn how to investigate with Microsoft Sentinel & Jupyter Notebooks: [part 1](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921), [part 2](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/483466), and [part 3](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/561413).
+
+### Module 17: Hunting
+
+While most of the discussion so far focused on detection and incident management, **hunting** is another important use case for Microsoft Sentinel. Hunting is a **proactive search for threats** rather than a reactive response to alerts.
+
+The hunting dashboard is constantly updated. It shows all the queries written by Microsoft's team of security analysts, along with any extra queries that you've created or modified. Each query provides a description of what it hunts for and what kind of data it runs on. The queries are grouped by tactic - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. Read more about it [here](hunting.md).
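To make this concrete, here's a minimal sketch of running one such hunting query from PowerShell with the Az.OperationalInsights module. The workspace GUID is a placeholder, and the query assumes Windows Security Events are being ingested into the `SecurityEvent` table; treat it as an illustration, not one of the built-in hunting queries.

```azurepowershell
# Hypothetical hunting query: find rare parent processes that spawn PowerShell.
$query = @"
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688 and NewProcessName endswith '\\powershell.exe'
| summarize Count = count() by ParentProcessName, Account, Computer
| order by Count asc
"@

# Replace <workspace-guid> with your Log Analytics workspace ID.
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
$results.Results | Format-Table
```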
+
+To understand more about what hunting is and how Microsoft Sentinel supports it, watch the **Hunting Intro Webinar** ([YouTube](https://www.youtube.com/watch?v=6ueR09PLoLU&t=1451s&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?authkey=%21AO3gGrb474Bjmls&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21468&parId=66C31D2DBF8E0F71%21466&o=OneUp), [Presentation](https://onedrive.live.com/?authkey=%21AJ09hohPMbtbVKk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21469&parId=66C31D2DBF8E0F71%21466&o=OneUp)). The webinar starts with an update on new features. To learn about hunting, start at slide 12. The YouTube link is already set to start there.
+
+While the intro webinar focuses on tools, hunting is all about security. Our **security research team webinar on hunting** ([MP4](https://onedrive.live.com/?authkey=%21ADC2GvI1Yjlh%2D6E&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21276&parId=66C31D2DBF8E0F71%21274&o=OneUp), [YouTube](https://www.youtube.com/watch?v=BTEV_b6-vtg&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21AF1uqmmrWbI3Mb8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21275&parId=66C31D2DBF8E0F71%21274&o=OneUp)) focuses on how to actually hunt. The follow-up **AWS Threat Hunting using Sentinel Webinar** ([MP4](https://onedrive.live.com/?authkey=%21ADu7r7XMTmKyiMk&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21336&parId=66C31D2DBF8E0F71%21333&o=OneUp), [YouTube](https://www.youtube.com/watch?v=bSH-JOKl2Kk&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21AA7UKQIj2wu1FiI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21334&parId=66C31D2DBF8E0F71%21333&o=OneUp)) drives the point home by showing an end-to-end hunting scenario on a high-value target environment. Lastly, you can learn how to do [SolarWinds Post-Compromise Hunting with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/solarwinds-post-compromise-hunting-with-azure-sentinel/ba-p/1995095) and [WebShell hunting](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/web-shell-threat-hunting-with-azure-sentinel/ba-p/2234968) motivated by the recent vulnerabilities in on-premises Microsoft Exchange servers.
+
+### Module 18: User and Entity Behavior Analytics (UEBA)
+
+Microsoft Sentinel's newly introduced [User and Entity Behavior Analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md) module enables you to identify and investigate threats inside your organization and their potential impact, whether they come from a compromised entity or a malicious insider.
+
+As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as **users**, **hosts**, **IP addresses**, and **applications**) across time and peer group horizon. With various techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling.
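As a hedged illustration (assuming UEBA is enabled so the `BehaviorAnalytics` table is populated, and using a placeholder workspace GUID), you can surface the highest-priority anomalies with a query along these lines:

```azurepowershell
# Hypothetical triage query over the UEBA BehaviorAnalytics table.
$query = @"
BehaviorAnalytics
| where TimeGenerated > ago(1d)
| where InvestigationPriority > 5
| project TimeGenerated, UserName, ActivityType, SourceIPAddress, InvestigationPriority
| order by InvestigationPriority desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```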
+
+Learn more about UEBA in the _UEBA Webinar_ ([YouTube](https://www.youtube.com/watch?v=ixBotw9Qidg&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21ADXz0j2AO7Kgfv8&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21515&parId=66C31D2DBF8E0F71%21508&o=OneUp), [MP4](https://onedrive.live.com/?authkey=%21AO0122hqWUkZTJI&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%211909&parId=66C31D2DBF8E0F71%21508&o=OneUp)) and read about using [UEBA for investigations in your SOC](https://techcommunity.microsoft.com/t5/azure-sentinel/guided-ueba-investigation-scenarios-to-empower-your-soc/ba-p/1857100).
+
+For the latest updates, watch the [Future of Users Entity Behavioral Analytics in Sentinel webinar](https://www.youtube.com/watch?v=dLVAkSLKLyQ&ab_channel=MicrosoftSecurityCommunity).
+
+### Module 19: Monitoring Microsoft Sentinel's health
+
+Part of operating a SIEM is making sure it works smoothly, which is an evolving area in Microsoft Sentinel. Use the following to monitor Microsoft Sentinel's health:
+
+* Measure the efficiency of your [Security operations](manage-soc-with-incident-metrics.md#security-operations-efficiency-workbook) ([video](https://www.youtube.com/watch?v=jRucUysVpxI&ab_channel=MicrosoftSecurityCommunity))
+* **SentinelHealth data table**. Provides insights on health drifts, such as the latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions (see the query sketch after this list). Find more information [here](monitor-data-connector-health.md).
+* Monitor [Data connectors health](monitor-data-connector-health.md) ([video](https://www.youtube.com/watch?v=T6Vyo7gZYds&ab_channel=MicrosoftSecurityCommunity)) and [get notifications on anomalies](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/data-connector-health-push-notification-alerts/ba-p/1996442).
+* Monitor agents using the [agents' health solution (Windows only)](../azure-monitor/insights/solution-agenthealth.md) and the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat) (Linux and Windows).
+* Monitor your Log Analytics workspace: [YouTube](https://www.youtube.com/watch?v=DmDU9QP_JlI&ab_channel=MicrosoftSecurityCommunity), [MP4](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21792&ithint=video%2Cmp4&authkey=%21ALgHojpWDidvFyo), [Presentation](https://onedrive.live.com/?cid=66c31d2dbf8e0f71&id=66C31D2DBF8E0F71%21794&ithint=file%2Cpdf&authkey=%21AAva%2Do6Ru1fjJ78), including query execution and ingest health.
+* Cost management is also an important operational procedure in the SOC. Use the [Ingestion Cost Alert Playbook](https://techcommunity.microsoft.com/t5/azure-sentinel/ingestion-cost-alert-playbook/ba-p/2006003) to make sure you're notified promptly of any cost increase.
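As a sketch of the **SentinelHealth** idea referenced above (assuming the health feature is enabled for your workspace; the column names follow the published `SentinelHealth` schema as we understand it, so verify them against your own table):

```azurepowershell
# Hypothetical check for Microsoft Sentinel resources reporting non-success health events.
$query = @"
SentinelHealth
| where TimeGenerated > ago(3d)
| where Status != 'Success'
| project TimeGenerated, SentinelResourceName, SentinelResourceType, Status, Description
| order by TimeGenerated desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```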
+
+## Part 5: Advanced
+
+### Module 20: Extending and Integrating using Microsoft Sentinel APIs
+
+As a cloud-native SIEM, Microsoft Sentinel is an API-first system. Every feature can be configured and used through an API, enabling easy integration with other systems and extending Sentinel with your own code. If APIs sound intimidating to you, don't worry: whatever is available using the API is [also available using PowerShell](https://techcommunity.microsoft.com/t5/azure-sentinel/new-year-new-official-azure-sentinel-powershell-module/ba-p/2025041).
+
+To learn more about Microsoft Sentinel APIs, watch the [short introductory video](https://www.youtube.com/watch?v=gQDBkc-K-Y4&ab_channel=MicrosoftSecurityCommunity) and read the [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-api-101/ba-p/1438928). To get the details, watch the deep-dive webinar ([MP4](https://onedrive.live.com/?authkey=%21ACZmq6oAe1yVDmY&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21307&parId=66C31D2DBF8E0F71%21305&o=OneUp), [YouTube](https://www.youtube.com/watch?v=Cu4dc88GH1k&ab_channel=MicrosoftSecurityCommunity), [Presentation](https://onedrive.live.com/?authkey=%21AF3TWPEJKZvJ23Q&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21308&parId=66C31D2DBF8E0F71%21305&o=OneUp)) and read the blog post [_Extending Microsoft Sentinel: APIs, Integration, and management automation_](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/extending-azure-sentinel-apis-integration-and-management/ba-p/1116885).
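For example, here's a minimal PowerShell sketch using the Az.SecurityInsights module; the resource group and workspace names are placeholders:

```azurepowershell
# Install the Microsoft Sentinel PowerShell module if you don't already have it.
Install-Module Az.SecurityInsights -Scope CurrentUser

# List incidents in a workspace; replace the placeholder names with your own.
Get-AzSentinelIncident -ResourceGroupName "<resource-group>" -WorkspaceName "<workspace-name>" |
    Select-Object Title, Severity, Status |
    Format-Table
```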
+
+### Module 21: Bring your own ML
+
+Microsoft Sentinel provides a great platform for implementing your own machine learning algorithms. We call this Bring-Your-Own-ML, or BYOML for short. BYOML is intended for advanced users. If you're looking for built-in behavioral analytics, use our ML analytics rules or the UEBA module, or write your own KQL-based behavioral analytics rules.
+
+To get started with bringing your own ML to Microsoft Sentinel, watch the [video](https://www.youtube.com/watch?v=QDIuvZbmUmc) and read the [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/build-your-own-machine-learning-detections-in-the-ai-immersed/ba-p/1750920). You might also want to refer to the [BYOML documentation](bring-your-own-ml.md).
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn about Accenture CTI integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-defense).
-### Anomali Limo
+### Anomali
+- [Learn how to import threat intelligence from Anomali ThreatStream into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-anomali-threatstream-feed-into-microsoft-sentinel/ba-p/3561742#M3787)
- [See what you need to connect to Anomali Limo feed](https://www.anomali.com/resources/limo).

### Cybersixgill Darkfeed
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
Different resources have their own criteria for when they report that they are d
![Status of *Degraded* for a virtual machine](./media/resource-health-overview/degraded.png)
+For virtual machine scale sets, see the [Resource health state is "Degraded" in Azure Virtual Machine Scale Set](https://docs.microsoft.com/troubleshoot/azure/virtual-machine-scale-sets/resource-health-degraded-state) page for more information.
+ ## History information

> [!NOTE]
You can also access Resource Health by selecting **All services** and typing **r
Check out these references to learn more about Resource Health:

- [Resource types and health checks in Azure Resource Health](resource-health-checks-resource-types.md)
-- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
+- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
site-recovery Quickstart Create Vault Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-bicep.md
+
+ Title: Quickstart to create an Azure Recovery Services vault using Bicep.
+description: In this quickstart, you learn how to create an Azure Recovery Services vault using Bicep.
++ Last updated : 06/27/2022++++
+# Quickstart: Create a Recovery Services vault using Bicep
+
+This quickstart describes how to set up a Recovery Services vault using Bicep. The [Azure Site Recovery](site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy so your business applications stay online during planned and unplanned outages. Site Recovery manages disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.
++
+## Prerequisites
+
+If you don't have an active Azure subscription, you can create a
+[free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/recovery-services-vault-create/).
++
+Two Azure resources are defined in the Bicep file:
+
+- [Microsoft.RecoveryServices vaults](/azure/templates/microsoft.recoveryservices/vaults): creates the vault.
+- [Microsoft.RecoveryServices/vaults/backupstorageconfig](/rest/api/backup/backup-resource-storage-configs): configures the vault's backup redundancy settings.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters vaultName=<vault-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -vaultName "<vault-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<vault-name\>** with the name of the vault.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use Azure CLI or Azure PowerShell to confirm that the vault was created.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az backup vault show --name <vault-name> --resource-group exampleRG
+az backup vault backup-properties show --name <vault-name> --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$vaultBackupConfig = Get-AzRecoveryServicesVault -Name "<vault-name>"
+
+Get-AzRecoveryServicesVault -Name "<vault-name>" -ResourceGroupName "exampleRG"
+Get-AzRecoveryServicesBackupProperty -Vault $vaultBackupConfig
+```
+++
+> [!NOTE]
+> Replace **\<vault-name\>** with the name of the vault you created.
+
+## Clean up resources
+
+If you plan to use the new resources, no action is needed. Otherwise, you can remove the resource group and vault that were created in this quickstart. To delete the resource group and its resources, use Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Recovery Services vault using Bicep. To learn more about disaster recovery, continue to the next quickstart article.
+
+> [!div class="nextstepaction"]
+> [Set up disaster recovery](azure-to-azure-quickstart.md)
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Physical servers with the HP CCISS storage controller | Not supported.
Device/Mount point naming convention | Device name or mount point name should be unique.<br/> Ensure that no two devices/mount points have case-sensitive names. For example, naming devices for the same VM as *device1* and *Device1* isn't supported.
Directories | If you're running a version of the Mobility service earlier than version 9.20 (released in [Update Rollup 31](https://support.microsoft.com/help/4478871/)), then these restrictions apply:<br/><br/> - These directories (if set up as separate partitions/file-systems) must be on the same OS disk on the source server: /(root), /boot, /usr, /usr/local, /var, /etc.</br> - The /boot directory should be on a disk partition and not be an LVM volume.<br/><br/> From version 9.20 onwards, these restrictions don't apply.
Boot directory | - Boot disks with GPT partition format are supported. GPT disks are also supported as data disks.<br/><br/> Multiple boot disks on a VM aren't supported.<br/><br/> - /boot on an LVM volume across more than one disk isn't supported.<br/> - A machine without a boot disk can't be replicated.
-Free space requirements| 2 GB on the /root partition <br/><br/> 250 MB on the installation folder
+Free space requirements| 2 GB on the /(root) partition <br/><br/> 250 MB on the installation folder
XFSv5 | XFSv5 features on XFS file systems, such as metadata checksum, are supported (Mobility service version 9.10 onwards).<br/> Use the xfs_info utility to check the XFS superblock for the partition. If `ftype` is set to 1, then XFSv5 features are in use.
BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com/help/4490016) (version 9.22 of the Mobility service) onwards. BTRFS isn't supported if:<br/><br/> - The BTRFS file system subvolume is changed after enabling protection.</br> - The BTRFS file system is spread over multiple disks.</br> - The BTRFS file system supports RAID.
spring-cloud How To Intellij Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-intellij-deploy-apps.md
Previously updated : 11/03/2021 Last updated : 06/24/2022
Before running this example, you can try the [basic quickstart](./quickstart.md)
You can add the Azure Toolkit for IntelliJ IDEA 3.51.0 from the IntelliJ **Plugins** UI.
-1. Start IntelliJ. If you have opened a project previously, close the project to show the welcome dialog. Select **Configure** from link lower right, and then select **Plugins** to open the plug-in configuration dialog, and select **Install Plugins from disk**.
+1. Start IntelliJ. If you have opened a project previously, close the project to show the welcome dialog. Select **Configure** from the link at the lower right, select **Plugins** to open the plug-in configuration dialog, and then select **Install Plugins from disk**.
- ![Select Configure](media/spring-cloud-intellij-howto/configure-plugin-1.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/configure-plugin.png" alt-text="Screenshot of IntelliJ IDEA Welcome dialog box with Configure element highlighted.":::
1. Search for Azure Toolkit for IntelliJ. Select **Install**.
- ![Install plugin](media/spring-cloud-intellij-howto/install-plugin.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/install-plugin.png" alt-text="Screenshot of IntelliJ IDEA Plugins dialog box with Install button highlighted.":::
1. Select **Restart IDE**.
The following procedures deploy a Hello World application using IntelliJ IDEA.
1. Open the IntelliJ **Welcome** dialog, then select **Import Project** to open the import wizard.
1. Select the *gs-spring-boot\complete* folder.
- ![Import Project](media/spring-cloud-intellij-howto/import-project-1.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/import-project.png" alt-text="Screenshot of IntelliJ IDEA Open File or Project dialog box with complete folder highlighted." lightbox="media/how-to-intellij-deploy-apps/import-project.png":::
## Deploy to Azure Spring Apps
-In order to deploy to Azure you must sign-in with your Azure account, and choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
+To deploy to Azure, you must sign in with your Azure account and choose your subscription. For sign-in details, see [Installation and sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in).
1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Apps**.
- ![Deploy to Azure 1](media/spring-cloud-intellij-howto/deploy-to-azure-1.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/deploy-to-azure-menu-option.png" alt-text="Screenshot of IntelliJ IDEA context menu with Deploy to Azure Spring Apps option highlighted." lightbox="media/how-to-intellij-deploy-apps/deploy-to-azure-menu-option.png":::
1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name; users don't usually need to change it.
1. Accept the identifier from the project for the **Artifact**.
-1. Select **App:** then select **Create app...**.
+1. Select **App:**, then select **+** to create an Azure Spring Apps instance.
- ![Deploy to Azure 2](media/spring-cloud-intellij-howto/deploy-to-azure-2.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/deploy-to-azure-dialog-box.png" alt-text="Screenshot of IntelliJ IDEA Deploy Azure Spring app dialog box with plus button highlighted." lightbox="media/how-to-intellij-deploy-apps/deploy-to-azure-dialog-box.png":::
1. Enter **App name**, then select **OK**.
- ![Deploy to Azure OK](media/spring-cloud-intellij-howto/deploy-to-azure-2a.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/create-azure-spring-app-dialog-box.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring App dialog box with App name field in focus.":::
1. Start the deployment by selecting the **Run** button.
- ![Deploy to Azure 3](media/spring-cloud-intellij-howto/deploy-to-azure-3.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/run-button.png" alt-text="Screenshot of IntelliJ IDEA showing Run button." lightbox="media/how-to-intellij-deploy-apps/run-button.png":::
1. The plug-in will run the command `mvn package` on the project and then create the new app and deploy the jar generated by the `package` command.
-1. If the app URL is not shown in the output window, get it from the Azure portal. Navigate from your resource group to the instance of Azure Spring Apps. Then select **Apps**. The running app will be listed. Select the app, then copy the **URL** or **Test Endpoint**.
+1. If the app URL is not shown in the output window, get it from the Azure portal. Navigate from your resource group to the instance of Azure Spring Apps. Then select **Apps**. The running app will be listed. Select the app, then copy the **URL** or **Test Endpoint**.
- ![Get test URL](media/spring-cloud-intellij-howto/get-test-url.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/get-test-url.png" alt-text="Screenshot of Azure portal showing the app overview page with the URL and Test Endpoint fields highlighted." lightbox="media/how-to-intellij-deploy-apps/get-test-url.png":::
1. Navigate to the URL or Test Endpoint in the browser.
- ![Navigate in Browser 2](media/spring-cloud-intellij-howto/navigate-in-browser-2.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/navigate-in-browser.png" alt-text="Screenshot of the app running in a browser displaying the message Greetings from Spring Boot.":::
## Show streaming logs

To get the logs:
-1. Select **Azure Explorer**, then **Spring Cloud**.
+1. Select **Azure Explorer**, then **Spring Apps**.
1. Right-click the running app.
-1. Select **Streaming Logs** from the drop-down list.
+1. Select **Streaming Log** from the drop-down list.
- ![Select streaming logs](media/spring-cloud-intellij-howto/streaming-logs.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/streaming-logs.png" alt-text="Screenshot of IntelliJ IDEA context menu with the Streaming Log option highlighted.":::
1. Select instance.
- ![Select instance](media/spring-cloud-intellij-howto/select-instance.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/select-instance.png" alt-text="Screenshot of the IntelliJ IDEA Select Instance dialog box.":::
1. The streaming log will be visible in the output window.
- ![Streaming log output](media/spring-cloud-intellij-howto/streaming-log-output.png)
+ :::image type="content" source="media/how-to-intellij-deploy-apps/streaming-log-output.png" alt-text="Screenshot of the IntelliJ IDEA showing the streaming log in the output window." lightbox="media/how-to-intellij-deploy-apps/streaming-log-output.png":::
## Next steps
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Changing the access tier for a blob when versioning is enabled, or if the blob h
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| [Standard general-purpose v2](../common/storage-account-overview.md#types-of-storage-accounts) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Premium block blobs](../common/storage-account-overview.md#types-of-storage-accounts) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-For information about feature support by region, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
## Next steps
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
A rehydration operation with [Set Blob Tier](/rest/api/storageservices/set-blob-
Copying an archived blob to an online tier with [Copy Blob](/rest/api/storageservices/copy-blob) is billed for data read transactions and data retrieval size. Creating the destination blob in an online tier is billed for data write transactions. Early deletion fees don't apply when you copy to an online blob because the source blob remains unmodified in the Archive tier. High-priority retrieval charges do apply if selected.
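A hedged PowerShell sketch of this copy-based rehydration follows; the account, container, and blob names are placeholders:

```azurepowershell
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# Copy an archived blob to a new Hot-tier blob; the archived source stays unmodified,
# so no early deletion fee applies to it.
Start-AzStorageBlobCopy -SrcContainer "<container>" -SrcBlob "archived-data.csv" `
    -DestContainer "<container>" -DestBlob "rehydrated-data.csv" `
    -StandardBlobTier Hot -RehydratePriority Standard -Context $ctx
```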
-Blobs in the Archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee.For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
+Blobs in the Archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
For more information about pricing for block blobs and data rehydration, see [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). For more information on outbound data transfer charges, see [Data Transfers Pricing Details](https://azure.microsoft.com/pricing/details/data-transfers/).
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Azure CLI and PowerShell support signing in with Azure AD credentials. After you
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
For more information about pricing for Azure Storage blob inventory, see [Azure
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
## Known issues
storage Encryption Customer Provided Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md
To rotate an encryption key that was used to encrypt a blob, download the blob a
## Feature support
-This table shows how this feature is supported in your account and the effect on that support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Keep in mind that customer-managed keys are protected by soft delete and purge p
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
If you fail to pay your bill and your account has an active time-based retention
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup>
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup>
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
## Next steps
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
For data that is modified and accessed regularly throughout its lifetime, you ca
## Feature support
-This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png)|![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and Secure File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Regional availability and pricing
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Use these queries to help you monitor your Azure Storage accounts:
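For instance, assuming resource logs are being sent to a Log Analytics workspace so that the `StorageBlobLogs` table is populated (the workspace GUID is a placeholder), a simple failure summary might look like this:

```azurepowershell
# Hypothetical summary of failed blob operations over the last day.
$query = @"
StorageBlobLogs
| where TimeGenerated > ago(1d)
| where StatusText !~ 'Success'
| summarize Failures = count() by OperationName, StatusText
| order by Failures desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```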
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-### Logs in Azure Monitor
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png)| ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-
-### Metrics in Azure Monitor
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
## FAQ
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
If the replication status for a blob in the source account indicates failure, th
## Feature support
-This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Billing
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Point-in-time restore for block blobs has the following limitations and known is
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![No](../media/icons/no-icon.png)|![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Pricing and billing
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
You can use many different SFTP clients to securely connect and then transfer fi
| rsa-sha2-512 <sup>1</sup> | ecdh-sha2-nistp256 | aes256-gcm@openssh.com | hmac-sha2-512 | ecdsa-sha2-nistp256 |
| ecdsa-sha2-nistp256 | diffie-hellman-group14-sha256 | aes128-cbc | hmac-sha2-256-etm@openssh.com | ecdsa-sha2-nistp384 |
| ecdsa-sha2-nistp384 | diffie-hellman-group16-sha512 | aes256-cbc | hmac-sha2-512-etm@openssh.com |
-||| aes192-cbc ||
+||diffie-hellman-group-exchange-sha256| aes192-cbc ||
<sup>1</sup> Requires minimum key length of 2048 bits.
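As a hedged example, connecting with the OpenSSH client typically uses a username of the form `<storage-account>.<local-user>`; the account and user below are hypothetical, so check the connection string shown in the portal for your own account:

```azurepowershell
# Connect with OpenSSH's sftp client; 'mystorageaccount' and 'myuser' are placeholders.
sftp mystorageaccount.myuser@mystorageaccount.blob.core.windows.net
```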
storage Snapshots Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-overview.md
The following table describes the billing behavior for a blob that is soft-delet
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and Secure File Transfer protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
## Next steps
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
The following table describes the expected behavior for delete and write operati
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>3</sup> For more information, see [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md). These issues apply to all accounts that have the hierarchical namespace feature enabled.
## Pricing and billing
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-overview.md
Version 2019-12-12 or higher of the Azure Storage REST API supports container so
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Pricing and billing
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
This section describes known issues and conditions in the current release of the
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled
## FAQ
storage Storage Blob Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md
Applications that handle Blob storage events should follow a few recommended pra
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Storage Blob Static Website Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-host.md
You've successfully completed the tutorial and deployed a static website to Azur
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup>
-|--|||--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a hierarchical namespace enabled.
## Next steps
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
To enable metrics on your static website pages, see [Enable metrics on static we
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png)|![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## FAQ
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
If you don't need users to access your blob or web content by using HTTPS, then
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
## Next steps
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
To learn how to persist the mount, see [Persisting](https://github.com/Azure/azu
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png)|![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Next steps
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-manage-find-blobs.md
You're charged for the monthly average number of index tags within a storage acc
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![No](../media/icons/no-icon.png)|![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## Conditions and known issues
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
When blob soft delete is enabled, all soft-deleted entities are billed at full c
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
-
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
-|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and Secure File Transfer protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## See also
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 05/05/2022 Last updated : 06/28/2022
You must use one of the following Azure key stores to store your customer-manage
- [Azure Key Vault](../../key-vault/general/overview.md)
- [Azure Key Vault Managed Hardware Security Module (HSM)](../../key-vault/managed-hsm/overview.md)
-You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
+You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same Azure Active Directory (Azure AD) tenant, but they can be in different regions and subscriptions.
> [!NOTE]
> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.
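As a minimal sketch (the vault, account, and key names are placeholders, and it assumes the storage account's identity already has access to the key vault), you could generate a key and point the storage account at it like this:

```azurepowershell
# Create a software-protected RSA key in the key vault.
$key = Add-AzKeyVaultKey -VaultName "<vault-name>" -Name "storage-cmk" -Destination Software

# Configure the storage account to use the key for encryption.
Set-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>" `
    -KeyvaultEncryption -KeyName $key.Name -KeyVersion $key.Version `
    -KeyVaultUri "https://<vault-name>.vault.azure.net"
```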
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Previously updated : 05/26/2022 Last updated : 06/28/2022
The following table lists the format for Azure DNS Zone endpoints for each of th
| Storage service | Endpoint |
|--|--|
-| Blob Storage | `https://<storage-account>.z[00-99].blob.core.windows.net` |
-| Static website (Blob Storage) | `https://<storage-account>.z[00-99].web.core.windows.net` |
-| Data Lake Storage Gen2 | `https://<storage-account>.z[00-99].dfs.core.windows.net` |
-| Azure Files | `https://<storage-account>.z[00-99].file.core.windows.net` |
-| Queue Storage | `https://<storage-account>.z[00-99].queue.core.windows.net` |
-| Table Storage | `https://<storage-account>.z[00-99].table.core.windows.net` |
+| Blob Storage | `https://<storage-account>.z[00-99].blob.storage.azure.net` |
+| Static website (Blob Storage) | `https://<storage-account>.z[00-99].web.storage.azure.net` |
+| Data Lake Storage Gen2 | `https://<storage-account>.z[00-99].dfs.storage.azure.net` |
+| Azure Files | `https://<storage-account>.z[00-99].file.storage.azure.net` |
+| Queue Storage | `https://<storage-account>.z[00-99].queue.storage.azure.net` |
+| Table Storage | `https://<storage-account>.z[00-99].table.storage.azure.net` |
> [!IMPORTANT] > You can create up to 5000 accounts with Azure DNS Zone endpoints per subscription. However, you may need to update your application code to query for the account endpoint at runtime. You can call the [Get Properties](/rest/api/storagerp/storage-accounts/get-properties) operation to query for the storage account endpoints.
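For example, a hedged PowerShell sketch of resolving the blob endpoint at runtime (the resource group and account names are placeholders):

```azurepowershell
$account = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>"

# PrimaryEndpoints reflects the endpoints assigned to the account,
# including Azure DNS Zone endpoints when applicable.
$account.PrimaryEndpoints.Blob
```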
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
description: Recommended ways for you to manage your Azure Virtual Desktop envir
Previously updated : 04/26/2022 Last updated : 06/29/2022
Microsoft Endpoint Configuration Manager versions 1906 and later can manage your
Microsoft Intune can manage your Azure AD-joined and Hybrid Azure AD-joined session hosts. To learn more about using Intune to manage Windows 11 and Windows 10 single session hosts, see [Using Azure Virtual Desktop with Intune](/mem/intune/fundamentals/windows-virtual-desktop).
-For Windows 11 and Windows 10 multi-session hosts, Intune currently supports device-based configurations. To learn more about using Intune to manage multi-session hosts, see [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
+For Windows 11 and Windows 10 multi-session hosts, Intune currently supports device-based configurations. User scope configurations are also currently in preview on Windows 11. To learn more about using Intune to manage multi-session hosts, see [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
> [!NOTE]
> Managing Azure Virtual Desktop session hosts using Intune is currently supported in the Azure Public and Azure Government clouds.
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Service Fabric | No | Yes | No |
| Azure Kubernetes Service (AKS) / AKE | No | Yes | No |
| UserData | Yes | Yes | UserData can be specified for individual VMs |
+| Option to delete or retain VM NIC and Disks | Yes | No (always delete) | Yes |
+| Ultra Disks | Yes | Yes | No |
<sup>1</sup> For Uniform scale sets, the `GET VMSS` response will have a reference to the *identity*, *clientID*, and *principalID*. For Flexible scale sets, the response will only get a reference to the *identity*. You can make a call to `Identity` to get the *clientID* and *PrincipalID*.
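A minimal sketch of that lookup, assuming hypothetical resource names; `az vmss show` returns the `identity` block referenced above:

```azurecli-interactive
# For a Flexible scale set this returns only the identity reference;
# resolve clientID and principalID with a follow-up call to Identity
az vmss show \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --query "identity"
```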
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
Everything after API version 2020-06-01 supports managed boot diagnostics. For m
- Managed storage accounts are supported in Resource Manager API version "2020-06-01" and later.
- Azure Serial Console is currently incompatible with a managed storage account for boot diagnostics. Learn more about [Azure Serial Console](/troubleshoot/azure/virtual-machines/serial-console-overview).
- Portal only supports the use of boot diagnostics with a managed storage account for single instance VMs.
-- Users canont configure a retention period for Managed Boot Diagnostics. The logs will be overwritten when the total size crosses 1 GB.
+- Users cannot configure a retention period for Managed Boot Diagnostics. The logs will be overwritten when the total size crosses 1 GB.
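As a hedged sketch (VM and group names are hypothetical), enabling boot diagnostics without passing a storage URI selects a managed storage account:

```azurecli-interactive
# With no --storage argument, boot diagnostics uses a managed storage
# account (Resource Manager API version 2020-06-01 and later)
az vm boot-diagnostics enable \
    --name myVM \
    --resource-group myResourceGroup
```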
## Next steps
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
Title: Learn about Azure Image Builder
-description: Learn more about Azure Image Builder for virtual machines in Azure.
+ Title: Azure VM Image Builder overview
+description: In this article, you learn about VM Image Builder for virtual machines in Azure.
Last updated 10/15/2021
-# Azure Image Builder overview
+# Azure VM Image Builder overview
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Standardized virtual machine (VM) images allow organizations to migrate to the cloud and ensure consistency in their deployments. Images typically include predefined security, configuration settings, and necessary software. Setting up your own imaging pipeline requires time, infrastructure, and setup. With Azure VM Image Builder (Image Builder), you just need to create a configuration describing your image and submit it to the service where the image is built and then distributed.
+By using standardized virtual machine (VM) images, your organization can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you need only create a configuration that describes your image and submit it to the service, where the image is built and then distributed.
-With Image Builder, you can migrate your existing image customization pipeline to Azure while continuing to use existing scripts, commands, and processes to customize images. Using Image Builder, you can integrate your core applications into a VM image so your VMs can take on workloads at once after creation. You can even add configurations to build images for Azure Virtual Desktop or as VHDs for use in Azure Stack or for ease of exporting.
+With VM Image Builder, you can migrate your existing image customization pipeline to Azure as you continue to use existing scripts, commands, and processes. You can integrate your core applications into a VM image, so that your VMs can take on workloads after the images are created. You can even add configurations to build images for Azure Virtual Desktop, as virtual hard disks (VHDs) for use in Azure Stack, or for ease of exporting.
-Image Builder lets you start with Windows or Linux images, from the Azure Marketplace or existing custom images, and add your own customizations. You can also specify where you would like your resulting images hosted in the [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery), as a managed image or as a VHD.
+VM Image Builder lets you start with Windows or Linux images either from Azure Marketplace or as existing custom images, and then add your own customizations. You can also specify where you want your resulting images to be hosted in [Azure Compute Gallery](shared-image-galleries.md) (formerly Shared Image Gallery), as managed images or as VHDs.
## Features
-While it is possible to create custom VM images by hand or by other tools, the process can be cumbersome and unreliable. Azure VM Image Builder, which is built on [HashiCorp Packer](https://www.packer.io/), provides you with benefits of a managed service.
+Although it's possible to create custom VM images by hand or by other tools, the process can be cumbersome and unreliable. VM Image Builder, which is built on [HashiCorp Packer](https://www.packer.io/), gives you the benefits of a managed service.
### Simplicity

-- Removes the need to use complex tooling, processes, and manual steps for creating a VM image. Image Builder abstracts out all these details and hides away Azure specific requirements like the need to generalize the image (sysprep) while also giving more advanced users the ability to override them.
-- Image Builder can integrate with existing image build pipelines for a click-and-go experience. You can just call Image Builder from your pipeline, or use the [Azure Image Builder Service DevOps Task (preview)](./linux/image-builder-devops-task.md).
-- Image Builder can fetch customization data from various sources removing the need to collect them all together in one place to build a VM image.
-- Integration of Image Builder with the Azure Compute Gallery gives you an image management system that allows you to distribute, replicate, version, and scale images globally. Additionally, you can distribute the same resulting image as a VHD, or as one or more managed images without rebuilding from scratch.
+To reduce the complexity of creating VM images, VM Image Builder:
-### Infrastructure As Code
+- Removes the need to use complex tooling, processes, and manual steps to create a VM image. VM Image Builder abstracts out all these details and hides Azure-specific requirements, such as the need to generalize the image (Sysprep). And it gives more advanced users the ability to override such requirements.
-- There is no need to manage long-term infrastructure (*like Storage Accounts to hold customization data*) or transient infrastructure (*like temporary Virtual Machine to build the image*).
-- Image Builder stores your VM image build artifacts as Azure resources which removes the need to maintain offline definitions and the risk of environment drifts caused by accidental deletions or updates.
+- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an [Azure VM Image Builder service DevOps task (preview)](./linux/image-builder-devops-task.md).
+
+- Can fetch customization data from various sources, which removes the need to collect them all from one place.
+
+- Can be integrated with Compute Gallery, which creates an image management system with which to distribute, replicate, version, and scale images globally. Additionally, you can distribute the same resulting image as a VHD or as one or more managed images, without having to rebuild them from scratch.
+
+### Infrastructure as code
+
+With VM Image Builder, there's no need to manage your long-term infrastructure (for example, storage accounts that hold customization data) or transient infrastructure (for example, temporary VMs for building images).
+
+VM Image Builder stores your VM image build artifacts as Azure resources. This feature removes both the need to maintain offline definitions and the risk of environment drifts that are caused by accidental deletions or updates.
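Because image templates are ordinary Azure resources, you can enumerate them like any other resource; a small sketch:

```azurecli-interactive
# Lists the VM Image Builder image template resources in the subscription
az resource list \
    --resource-type Microsoft.VirtualMachineImages/imageTemplates \
    --output table
```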
### Security

-- Image Builder enables creation of baseline images (*which can include your minimum security and corporate configurations*) and allows different departments to customize it further. These images can be kept secure and compliant by using Image Builder to quickly rebuild a golden image using the latest patched version of a source image. Image Builder also makes it easier for you to build images that meet the Azure Windows Baseline. For more information, see [Image Builder - Windows baseline template](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/imagebuilder-windowsbaseline).
-- You do not have to make your customization artifacts publicly accessible for Image Builder to be able to fetch them. Image Builder can use your [Azure Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) to fetch these resources and you can restrict the privileges of this identity as tightly as required using Azure-RBAC. This not only means you can keep your artifacts secret, but they also cannot be tampered with by unauthorized actors.
-- Copies of customization artifacts, transient compute & storage resources, and resulting images are all stored securely within your subscription with access controlled by Azure-RBAC. This includes the build VM used to create the customized image and ensuring your customization scripts and files are not being copied to an unknown VM in an unknown subscription. Furthermore, you can achieve a high degree of isolation from other customers' workloads using [Isolated VM offerings](./isolation.md) for the build VM.
-- You can connect Image Builder to your existing virtual networks so you can communicate with existing configuration servers (DSC, Chef, Puppet, etc.), file shares, or any other routable servers & services.
-- You can configure Image Builder to assign your User Assigned Identities to the Image Builder Build VM. The Image Builder Build VM is created by the Image Builder service in your subscription and is used to build and customize the image. You can then use these identities at customization time to access Azure resources, including secrets, in your subscription. There is no need to assign Image Builder direct access to those resources.
+To help keep your images secure, VM Image Builder:
+
+- Enables you to create baseline images (that is, your minimum security and corporate configurations) and allows other departments to customize them further. You can help keep these images secure and compliant by using VM Image Builder to quickly rebuild a golden image that uses the latest patched version of a source image. VM Image Builder also makes it easier for you to build images that meet the Azure Windows security baseline. For more information, see [VM Image Builder - Windows baseline template](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/imagebuilder-windowsbaseline).
+
+- Enables you to fetch your customization artifacts without having to make them publicly accessible. VM Image Builder can use your [Azure Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) to fetch these resources, and you can restrict the privileges of this identity as tightly as required by using Azure role-based access control (Azure RBAC). You can both keep your artifacts secret and prevent tampering by unauthorized actors.
+
+- Securely stores copies of customization artifacts, transient compute and storage resources, and their resulting images within your subscription, because access is controlled by Azure RBAC. This level of security, which also applies to the build VM that's used to create the customized image, helps prevent your customization scripts and files from being copied to an unknown VM in an unknown subscription. And you can achieve a high degree of separation from other customers' workloads by using [Isolated VM offerings](./isolation.md) for the build VM.
+
+- Enables you to connect VM Image Builder to your existing virtual networks, so that you can communicate with existing configuration servers, such as DSC (desired state configuration pull server), Chef, and Puppet, file shares, or any other routable servers and services.
+
+- Can be configured to assign your user-assigned identities to the VM Image Builder build VM (that is, the VM that the VM Image Builder service creates in your subscription and uses to build and customize the image). You can then use these identities at customization time to access Azure resources, including secrets, in your subscription. There's no need to assign VM Image Builder direct access to those resources.
+ ## Regions
-The Azure Image Builder Service is available in the following regions: regions.
+The VM Image Builder service is available in the following regions:
>[!NOTE]
-> Images can still be distributed outside of these regions.
+> You can still distribute images outside these regions.
>

- East US
- East US 2
The Azure Image Builder Service is available in the following regions: regions.
- East Asia
- Korea Central
- South Africa North
-- USGov Arizona (Public Preview)
-- USGov Virginia (Public Preview)
+- USGov Arizona (public preview)
+- USGov Virginia (public preview)
-> [!IMPORTANT]
-> Register the feature "Microsoft.VirtualMachineImages/FairfaxPublicPreview" to access the Azure Image Builder public preview in Fairfax regions (USGov Arizona and USGov Virginia).
+To access the Azure VM Image Builder public preview in the Fairfax regions (USGov Arizona and USGov Virginia), you must register the *Microsoft.VirtualMachineImages/FairfaxPublicPreview* feature. To do so, run the following command:
-Use the following command to register the feature for Azure Image Builder in Fairfax regions (USGov Arizona and USGov Virginia).
```azurecli-interactive
az feature register --namespace Microsoft.VirtualMachineImages --name FairfaxPublicPreview
```

## OS support
-Azure Image Builder will support Azure Marketplace base OS images:
+
+VM Image Builder supports the following Azure Marketplace base operating system images:
- Ubuntu 18.04
- Ubuntu 16.04
- RHEL 7.6, 7.7
Azure Image Builder will support Azure Marketplace base OS images:
- CBL-Mariner

>[!IMPORTANT]
-> Listed operating systems have been tested and now work with Azure Image Builder. However, Azure Image Builder should work with any Linux or Windows image in the marketplace.
+> These operating systems have been tested and now work with VM Image Builder. However, VM Image Builder should work with any Linux or Windows image in the marketplace.
## How it works
-The Azure VM Image Builder is a fully managed Azure service that is accessible by an Azure resource provider. Provide a configuration to the service that specifies the source image, customization to perform and where the new image is to be distributed to, the diagram below shows a high-level workflow:
+VM Image Builder is a fully managed Azure service that's accessible to Azure resource providers. Resource providers configure it by specifying a source image, a customization to perform, and where the new image is to be distributed. A high-level workflow is illustrated in the following diagram:
+
+![Diagram of the VM Image Builder process, showing the sources (Windows/Linux), customizations (Shell, PowerShell, Windows Update and Restart, adding files), and global distribution with Compute Gallery](./media/image-builder-overview/image-builder-flow.png)
-![Conceptual drawing of the Azure Image Builder process showing the sources (Windows/Linux), customizations (Shell, PowerShell, Windows Restart & Update, adding files) and global distribution with the Azure Compute Gallery](./media/image-builder-overview/image-builder-flow.png)
+You can pass template configurations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager templates, or by using a VM Image Builder DevOps task. When you submit the configuration to the service, Azure creates an *image template resource*. When the image template resource is created, a *staging resource group* is created in your subscription, in the following format: `IT_\<DestinationResourceGroup>_\<TemplateName>_\(GUID)`. The staging resource group contains files and scripts, which are referenced in the File, Shell, and PowerShell customization in the ScriptURI property.
-Template configurations can be passed using PowerShell, Azure CLI, Azure Resource Manager templates and using the Azure VM Image Builder DevOps task, when you submit it to the service we will create an Image Template Resource. When the Image Template Resource is created you will see a staging resource group created in your subscription, in the format: `IT_\<DestinationResourceGroup>_\<TemplateName>_\(GUID)`. The staging resource group contains files and scripts referenced in the File, Shell, PowerShell customization in the ScriptURI property.
+To run the build, you invoke `Run` on the VM Image Builder template resource. The service then deploys additional resources for the build, such as a VM, network, disk, and network adapter.
-To run the build you will invoke `Run` on the Image Template resource, the service will then deploy additional resources for the build, such as a VM, Network, Disk, Network Adapter etc. If you build an image without using an existing VNET Image Builder will also deploy a Public IP and NSG, the service connects to the build VM using SSH or WinRM. If you select an existing VNET, then the service will deploy using Azure Private Link, and a Public IP address is not required, for more details, see [Image Builder networking overview](./linux/image-builder-networking.md).
+If you build an image without using an existing virtual network, VM Image Builder also deploys a public IP and network security group, and it connects to the build VM by using Secure Shell (SSH) or Windows Remote Management (WinRM) protocol.
-When the build finishes all resources will be deleted, except for the staging resource group and the storage account, to remove these you will delete the Image Template resource, or you can leave them there to run the build again.
+If you select an existing virtual network, the service is deployed via Azure Private Link, and a public IP address isn't required. For more information, see [VM Image Builder networking overview](./linux/image-builder-networking.md).
-There are multiple examples and step-by-step guides in this documentation, which reference configuration templates and solutions in the [Azure Image Builder GitHub repository](https://github.com/azure/azvmimagebuilder).
+When the build finishes, all resources are deleted, except for the staging resource group and the storage account. You can remove them by deleting the image template resource, or you can leave them in place to run the build again.
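A minimal, hedged sketch of that lifecycle by using the `az image builder` commands; the resource names, identity, and script URL are hypothetical placeholders:

```azurecli-interactive
# Submit a template (creates the IT_* staging resource group), then run the build
az image builder create \
    --resource-group myImageRg \
    --name myUbuntuTemplate \
    --identity myAibIdentity \
    --image-source Canonical:UbuntuServer:18.04-LTS:latest \
    --scripts "https://example.com/customize.sh" \
    --managed-image-destinations myCustomImage=eastus

az image builder run \
    --resource-group myImageRg \
    --name myUbuntuTemplate
```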
-### Move Support
-The image template resource is immutable and contains links to resources and the staging resource group, therefore the resource type does not support being moved. If you wish to move the image template resource, ensure you have a copy of the configuration template (extract the existing configuration from the resource if you don't have it), create a new image template resource in the new resource group with a new name and delete the previous image template resource.
+For multiple examples, step-by-step guides, configuration templates, and solutions, go to the [VM Image Builder GitHub repository](https://github.com/azure/azvmimagebuilder).
+
+### Move support
+
+The image template resource is immutable, and it contains links to resources and the staging resource group. Therefore, this resource type doesn't support being moved.
+
+If you want to move the image template resource, either make sure that you have a copy of the configuration template or, if you don't have a copy, extract the existing configuration from the resource. Then, create a new image template resource in the new resource group with a new name, and delete the previous image template resource.
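A hedged sketch of that flow with hypothetical names; dump the existing configuration first so it can be recreated in the new resource group:

```azurecli-interactive
# Capture the current template configuration before deleting it
az image builder show \
    --resource-group myOldRg \
    --name myTemplate > myTemplate-config.json

# Delete the old image template once the replacement has been created
az image builder delete \
    --resource-group myOldRg \
    --name myTemplate
```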
## Permissions
-When you register for the (AIB), this grants the AIB Service permission to create, manage and delete a staging resource group `(IT_*)`, and have rights to add resources to it, that are required for the image build. This is done by an AIB Service Principal Name (SPN) being made available in your subscription during a successful registration.
+When you register for the VM Image Builder service, you're granting the service permission to create, manage, and delete a staging resource group, which is prefixed with `IT_*`. And you have rights to add to it any resources that are required for the image build. This happens because a VM Image Builder service principal name is made available in your subscription after you've registered successfully.
-To allow Azure VM Image Builder to distribute images to either the managed images or to an Azure Compute Gallery, you will need to create an Azure user-assigned identity that has permissions to read and write images. If you are accessing Azure storage, then this will need permissions to read private and public containers.
+To allow VM Image Builder to distribute images to either the managed images or Compute Gallery, you need to create an Azure user-assigned identity that has permissions to read and write images. If you're accessing Azure Storage, you'll need permissions to read private and public containers.
-In API version 2021-10-01 and beyond, Azure VM Image Builder supports adding Azure user-assigned identities to the build VM to enable scenarios where you will need to authenticate with services like Azure Key Vault in your subscription.
+In API version 2021-10-01 and later, VM Image Builder supports adding Azure user-assigned identities to the build VM to enable scenarios where you need to authenticate with services such as Azure Key Vault in your subscription.
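For illustration only (identity, group, and scope are hypothetical), creating the user-assigned identity and granting it rights might look like the following; in practice a narrower custom role is preferable to Contributor:

```azurecli-interactive
# Create the identity that VM Image Builder will use
az identity create \
    --resource-group myImageRg \
    --name myAibIdentity

# Grant the identity rights over the resource group that holds the images
az role assignment create \
    --assignee-object-id "$(az identity show --resource-group myImageRg --name myAibIdentity --query principalId --output tsv)" \
    --role Contributor \
    --scope "/subscriptions/<subscriptionID>/resourceGroups/myImageRg"
```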
-For more information on permissions, please see the following links: [PowerShell](./linux/image-builder-permissions-powershell.md), [AZ CLI](./linux/image-builder-permissions-cli.md) and [Image Builder template reference: Identity](./linux/image-builder-json.md#identity).
+For more information about permissions, see
+* [Configure VM Image Builder permissions by using PowerShell](./linux/image-builder-permissions-powershell.md)
+* [Configure VM Image Builder permissions by using the Azure CLI](./linux/image-builder-permissions-cli.md)
+* [Create a VM Image Builder template](./linux/image-builder-json.md#identity)
## Costs
-You will incur some compute, networking and storage costs when creating, building and storing images with Azure Image Builder. These costs are similar to the costs incurred in manually creating custom images. For the resources, you will be charged at your Azure rates.
+You'll incur some compute, networking, and storage costs when you create, build, and store images by using VM Image Builder. These costs are similar to those that you incur when you create custom images manually. Your resources are charged at your Azure rates.
-During the image creation process, files are downloaded and stored in the `IT_<DestinationResourceGroup>_<TemplateName>` resource group, which will incur a small storage costs. If you do not want to keep these, delete the **Image Template** after the image build.
+During the image-creation process, files are downloaded and stored in the `IT_<DestinationResourceGroup>_<TemplateName>` resource group, which incurs a small storage cost. If you don't want to keep these files, delete the image template after you've built the image.
-Image Builder creates a VM using the default D1v2 VM size for Gen1 images and D2ds V4 for Gen2 images, along with the storage, and networking needed for the VM. These resources last for the duration of the build process and are deleted once Image Builder has finished creating the image.
+VM Image Builder creates a VM by using the default Standard_D1_v2 VM size for Gen1 images and Standard_D2ds_v4 for Gen2 images, along with the storage and networking that's needed for the VM. These resources last for the duration of the build process and are deleted after VM Image Builder has finished creating the image.
-Azure Image Builder will distribute the image to your chosen regions, which might incur network egress charges.
+VM Image Builder distributes the image to your chosen regions, which might incur network egress charges.
## Hyper-V generation
-Image Builder currently supports creating Hyper-V Gen1 and Gen2 images in the Azure Compute Gallery and as managed images or VHD. Please keep in mind, the image distributed will always be the same generation as the image provided.
+VM Image Builder currently supports creating Hyper-V Gen1 and Gen2 images in a Compute Gallery and as managed images or VHDs. Keep in mind that the distributed image is always in the same generation as the provided image.
+
+For Gen2 images, ensure that you're using the correct SKU. For example, the SKU for an Ubuntu Server 18.04 Gen2 image would be 18_04-lts-gen2. The SKU for an Ubuntu Server 18.04 Gen1 image would be 18.04-lts.
-For Gen2 images, please ensure you are using the correct SKU. For example, the SKU for a Ubuntu Server 18.04 Gen2 image would be "18_04-lts-gen2". The SKU for a Ubuntu Server 18.04 Gen1 image would be "18.04-lts".
+Here's how to find SKUs that are based on the image publisher:
-How to find SKUs based on the image publisher:
```azurecli-interactive
# Find all Gen2 SKUs published by Microsoft Windows Desktop
az vm image list --publisher MicrosoftWindowsDesktop --sku g2 --output table --all
az vm image list --publisher MicrosoftWindowsDesktop --sku g2 --output table --a
az vm image list --publisher Canonical --sku gen2 --output table --all
```
-For more information on which Azure VM images support Gen2, please visit: [Generation 2 VM images in Azure Marketplace
-](./generation-2.md)
+For more information about Azure VM images that support Gen2, see [Gen2 VM images in Azure Marketplace](./generation-2.md).
## Next steps
-To try out the Azure Image Builder, see the articles for building [Linux](./linux/image-builder.md) or [Windows](./windows/image-builder.md) images.
+To try out VM Image Builder, see the articles about building [Linux](./linux/image-builder.md) or [Windows](./windows/image-builder.md) images.
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
description: Learn how to create a template to use with Azure Image Builder.
Previously updated : 01/10/2022 Last updated : 06/29/2022
-# Create an Azure Image Builder template
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+# Create an Azure Image Builder template
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Azure Image Builder uses a .json file to pass information into the Image Builder service. In this article we will go over the sections of the json file, so you can build your own. To see examples of full .json files, see the [Azure Image Builder GitHub](https://github.com/Azure/azvmimagebuilder/tree/main/quickquickstarts).

This is the basic template format:

```json
- {
- "type": "Microsoft.VirtualMachineImages/imageTemplates",
- "apiVersion": "2021-10-01",
- "location": "<region>",
- "tags": {
- "<name>": "<value>",
- "<name>": "<value>"
- },
- "identity": {},
- "properties": {
- "buildTimeoutInMinutes": <minutes>,
- "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>",
- "vmProfile": {
- "vmSize": "<vmSize>",
- "proxyVmSize": "<vmSize>",
- "osDiskSizeGB": <sizeInGB>,
- "vnetConfig": {
- "subnetId": "/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
- },
- "userAssignedIdentities": [
- "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName1>",
- "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName2>",
- "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName3>",
- ...
- ]
+{
+ "type": "Microsoft.VirtualMachineImages/imageTemplates",
+ "apiVersion": "2021-10-01",
+ "location": "<region>",
+ "tags": {
+ "<name>": "<value>",
+ "<name>": "<value>"
+ },
+ "identity": {},
+ "properties": {
+ "buildTimeoutInMinutes": <minutes>,
+ "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>",
+ "vmProfile": {
+ "vmSize": "<vmSize>",
+ "proxyVmSize": "<vmSize>",
+ "osDiskSizeGB": <sizeInGB>,
+ "vnetConfig": {
+ "subnetId": "/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
},
- "source": {},
- "customize": [],
- "validate": {},
- "distribute": []
- }
- }
+"userAssignedIdentities": [
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName1>",
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName2>",
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName3>",
+ ...
+ ]
+ },
+ "source": {},
+ "customize": [],
+ "validate": {},
+ "distribute": []
+ }
+}
```

## Type and API version
This is the basic template format:
The `type` is the resource type, which must be `"Microsoft.VirtualMachineImages/imageTemplates"`. The `apiVersion` will change over time as the API changes, but should be `"2021-10-01"` for now.

```json
- "type": "Microsoft.VirtualMachineImages/imageTemplates",
- "apiVersion": "2021-10-01",
+"type": "Microsoft.VirtualMachineImages/imageTemplates",
+"apiVersion": "2021-10-01",
```

## Location
The location is the region where the custom image will be created. The following
> Register the feature "Microsoft.VirtualMachineImages/FairfaxPublicPreview" to access the Azure Image Builder public preview in Fairfax regions (USGov Arizona and USGov Virginia).

Use the following command to register the feature for Azure Image Builder in Fairfax regions (USGov Arizona and USGov Virginia).
+
```azurecli-interactive
az feature register --namespace Microsoft.VirtualMachineImages --name FairfaxPublicPreview
```

```json
- "location": "<region>",
+"location": "<region>",
```

### Data Residency
+
The Azure VM Image Builder service doesn't store or process customer data outside regions that have strict single region data residency requirements when a customer requests a build in that region. In the event of a service outage for regions that have data residency requirements, you will need to create templates in a different region and geography.

### Zone Redundancy
+
Distribution supports zone redundancy, VHDs are distributed to a Zone Redundant Storage (ZRS) account by default and the Azure Compute Gallery (formerly known as Shared Image Gallery) version will support a [ZRS storage type](../disks-redundancy.md#zone-redundant-storage-for-managed-disks) if specified.
-
+
## vmProfile
+
## buildVM
+
Image Builder will use a default SKU size of "Standard_D1_v2" for Gen1 images and "Standard_D2ds_v4" for Gen2 images. The generation is defined by the image you specify in the `source`. You can override this and may wish to do this for these reasons:
+
1. Performing customizations that require increased memory, CPU and handling large files (GBs).
2. Running Windows builds, you should use "Standard_D2_v2" or equivalent VM size.
3. Require [VM isolation](../isolation.md).
-4. Customize an image that requires specific hardware. For example, for a GPU VM, you need a GPU VM size.
+4. Customize an image that requires specific hardware. For example, for a GPU VM, you need a GPU VM size.
5. Require end to end encryption at rest of the build VM, you need to specify the support build [VM size](../azure-vms-no-temp-disk.yml) that don't use local temporary disks.
-
+
This is optional.

## osDiskSizeGB
This is optional.
By default, Image Builder will not change the size of the image, it will use the size from the source image. You can **only** increase the size of the OS Disk (Win and Linux), this is optional, and a value of 0 means leave the same size as the source image. You cannot reduce the OS Disk size to smaller than the size from the source image.

```json
- {
- "osDiskSizeGB": 100
- },
+{
+ "osDiskSizeGB": 100
+},
```

## vnetConfig
+
If you don't specify any VNET properties, then Image Builder will create its own VNET, Public IP, and network security group (NSG). The Public IP is used for the service to communicate with the build VM, however if you don't want a Public IP or want Image Builder to have access to your existing VNET resources, such as configuration servers (DSC, Chef, Puppet, Ansible), file shares, then you can specify a VNET. For more information, review the [networking documentation](image-builder-networking.md), this is optional.

```json
- "vnetConfig": {
- "subnetId": "/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
- }
+"vnetConfig": {
+ "subnetId": "/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
}
+}
```
+
## Tags

These are key/value pairs you can specify for the image that's generated.
There are two ways to add user assigned identities explained below.
Required - For Image Builder to have permissions to read/write images, read in scripts from Azure Storage you must create an Azure User-Assigned Identity, that has permissions to the individual resources. For details on how Image Builder permissions work, and relevant steps, review the [documentation](image-builder-user-assigned-identity.md).
-
```json
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<imgBuilderId>": {}
- }
- },
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<imgBuilderId>": {}
+ }
+},
```
-
The Image Builder service User Assigned Identity:
-* Supports a single identity only
-* Doesn't support custom domain names
+
+- Supports a single identity only
+- Doesn't support custom domain names
To learn more, see [What is managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md). For more information on deploying this feature, see [Configure managed identities for Azure resources on an Azure VM using Azure CLI](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity).
Optional - The Image Builder Build VM, that is created by the Image Builder serv
> Be aware that multiple identities can be specified for the Image Builder Build VM, including the identity you created for the [image template resource](#user-assigned-identity-for-azure-image-builder-image-template-resource). By default, the identity you created for the image template resource will not automatically be added to the build VM.

```json
- "properties": {
- "vmProfile": {
- "userAssignedIdentities": [
- "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
- ]
- },
- },
+"properties": {
+ "vmProfile": {
+ "userAssignedIdentities": [
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
+ ]
+ },
+},
```

The Image Builder Build VM User Assigned Identity:
+
* Supports a list of one or more user assigned managed identities to be configured on the VM
* Supports cross subscription scenarios (identity created in one subscription while the image template is created in another subscription under the same tenant)
* Doesn't support cross tenant scenarios (identity created in one tenant while the image template is created in another tenant)
The Image Builder Build VM User Assigned Identity:
To learn more, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) and [How to use managed identities for Azure resources on an Azure VM for sign-in](../../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md).

## Properties: stagingResourceGroup
+
The `stagingResourceGroup` field contains information about the staging resource group that the Image Builder service will create for use during the image build process. The `stagingResourceGroup` is an optional field for anyone who wants more control over the resource group created by Image Builder during the image build process. You can create your own resource group and specify it in the `stagingResourceGroup` section or have Image Builder create one on your behalf.
-```json
- "properties": {
- "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>"
- }
+```json
+"properties": {
+ "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>"
+}
```

### Template Creation Scenarios

#### The stagingResourceGroup field is left empty
+
If the `stagingResourceGroup` field is not specified or specified with an empty string, the Image Builder service will create a staging resource group with the default name convention "IT_***". The staging resource group will have the default tags applied to it: `createdBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Also, the default RBAC will be applied to the identity assigned to the Azure Image Builder template resource, which is "Contributor".

#### The stagingResourceGroup field is specified with a resource group that exists
+
If the `stagingResourceGroup` field is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements are not met an error will be thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Preexisting tags are not deleted.

#### The stagingResourceGroup field is specified with a resource group that DOES NOT exist
+
If the `stagingResourceGroup` field is specified with a resource group that does not exist, then the Image Builder service will create a staging resource group with the name provided in the `stagingResourceGroup` field. There will be an error if the given name does not meet Azure naming requirements for resource groups. The staging resource group will have the default tags applied to it: `createdBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. By default the identity assigned to the Azure Image Builder image template resource will have the "Contributor" RBAC applied to it in the resource group.

### Template Deletion
-Any staging resource group created by the Image Builder service will be deleted after the image template is deleted. This includes staging resource groups that were specified in the `stagingResourceGroup` field, but did not exist prior to the image build.
-If Image Builder did not create the staging resource group, but it did create resources inside of it, those resources will be deleted after the image template is deleted as long as the Image Builder service has the appropriate permissions or role required to delete resources.
+Any staging resource group created by the Image Builder service will be deleted after the image template is deleted. This includes staging resource groups that were specified in the `stagingResourceGroup` field, but did not exist prior to the image build.
+If Image Builder did not create the staging resource group, but it did create resources inside of it, those resources will be deleted after the image template is deleted as long as the Image Builder service has the appropriate permissions or role required to delete resources.
## Properties: source

The `source` section contains information about the source image that will be used by Image Builder. Image Builder currently only natively supports creating Hyper-V generation 1 (Gen1) images to the Azure Compute Gallery (SIG) or managed image. If you want to create Gen2 images, then you need to use a source Gen2 image, and distribute to VHD. After, you will then need to create a managed image from the VHD, and inject it into the SIG as a Gen2 image.

The API requires a `SourceType` that defines the source for the image build, currently there are three types:
+
- PlatformImage - indicates the source image is a Marketplace image.
- ManagedImage - use this when starting from a regular managed image.
- SharedImageVersion - this is used when you're using an image version in an Azure Compute Gallery as the source.
The API requires a `SourceType` that defines the source for the image build, cur
> [!NOTE]
> When using existing Windows custom images, you can run the Sysprep command up to 3 times on a single Windows 7 or Windows Server 2008 R2 image, or 1001 times on a single Windows image for later versions; for more information, see the [sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation#limits-on-how-many-times-you-can-run-sysprep) documentation.
-### PlatformImage source
-Azure Image Builder supports Windows Server and client, and Linux Azure Marketplace images, see [Learn about Azure Image Builder](../image-builder-overview.md#os-support) for the full list.
+### PlatformImage source
+
+Azure Image Builder supports Windows Server and client, and Linux Azure Marketplace images, see [Learn about Azure Image Builder](../image-builder-overview.md#os-support) for the full list.
```json
- "source": {
- "type": "PlatformImage",
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "18.04-LTS",
- "version": "latest"
- },
+"source": {
+ "type": "PlatformImage",
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "18.04-LTS",
+ "version": "latest"
+},
```
+The properties here are the same as those used to create VMs. Using the Azure CLI, run the following to get the properties:
-The properties here are the same that are used to create VM's, using AZ CLI, run the below to get the properties:
-
```azurecli-interactive
-az vm image list -l westus -f UbuntuServer -p Canonical --output table --all
+az vm image list -l westus -f UbuntuServer -p Canonical --output table --all
```

You can use `latest` in the version, the version is evaluated when the image build takes place, not when the template is submitted. If you use this functionality with the Azure Compute Gallery destination, you can avoid resubmitting the template, and rerun the image build at intervals, so your images are recreated from the most recent images.

#### Support for Market Place Plan Information
+
You can also specify plan information, for example:
+
```json
- "source": {
- "type": "PlatformImage",
- "publisher": "RedHat",
- "offer": "rhel-byos",
- "sku": "rhel-lvm75",
- "version": "latest",
- "planInfo": {
- "planName": "rhel-lvm75",
- "planProduct": "rhel-byos",
- "planPublisher": "redhat"
- }
+"source": {
+ "type": "PlatformImage",
+ "publisher": "RedHat",
+ "offer": "rhel-byos",
+ "sku": "rhel-lvm75",
+ "version": "latest",
+ "planInfo": {
+ "planName": "rhel-lvm75",
+ "planProduct": "rhel-byos",
+ "planPublisher": "redhat"
+ }
```
+
### ManagedImage source

Sets the source image as an existing managed image of a generalized VHD or VM.
Sets the source image as an existing managed image of a generalized VHD or VM.
> The source managed image must be of a supported OS and the image must reside in the same subscription and region as your Azure Image Builder template.

```json
- "source": {
- "type": "ManagedImage",
- "imageId": "/subscriptions/<subscriptionId>/resourceGroups/{destinationResourceGroupName}/providers/Microsoft.Compute/images/<imageName>"
- }
+"source": {
+ "type": "ManagedImage",
+ "imageId": "/subscriptions/<subscriptionId>/resourceGroups/{destinationResourceGroupName}/providers/Microsoft.Compute/images/<imageName>"
+}
```

The `imageId` should be the ResourceId of the managed image. Use `az image list` to list available images.
-
### SharedImageVersion source
+
Sets the source image as an existing image version in an Azure Compute Gallery.

> [!NOTE]
> The source shared image version must be of a supported OS and the image version must reside in the same region as your Azure Image Builder template, if not, replicate the image version to the Image Builder Template region.
-
```json
- "source": {
- "type": "SharedImageVersion",
- "imageVersionID": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/p roviders/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageDefinitionName/versions/<imageVersion>"
- }
+"source": {
+ "type": "SharedImageVersion",
+ "imageVersionID": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/p roviders/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageDefinitionName/versions/<imageVersion>"
+}
```

The `imageVersionId` should be the ResourceId of the image version. Use [az sig image-version list](/cli/azure/sig/image-version#az-sig-image-version-list) to list image versions.
-
## Properties: buildTimeoutInMinutes

By default, the Image Builder will run for 240 minutes. After that, it will timeout and stop, whether or not the image build is complete. If the timeout is hit, you will see an error similar to this:
By default, the Image Builder will run for 240 minutes. After that, it will time
```
[ERROR] complete: 'context deadline exceeded'
```
-If you don't specify a buildTimeoutInMinutes value, or set it to 0, this will use the default value. You can increase or decrease the value, up to the maximum of 960mins (16hrs). For Windows, we don't recommend setting this below 60 minutes. If you find you're hitting the timeout, review the [logs](image-builder-troubleshoot.md#customization-log), to see if the customization step is waiting on something like user input.
+If you don't specify a buildTimeoutInMinutes value, or set it to 0, this will use the default value. You can increase or decrease the value, up to the maximum of 960mins (16hrs). For Windows, we don't recommend setting this below 60 minutes. If you find you're hitting the timeout, review the [logs](image-builder-troubleshoot.md#customization-log), to see if the customization step is waiting on something like user input.
-If you find you need more time for customizations to complete, set this to what you think you need, with a little overhead. But, don't set it too high because you might have to wait for it to timeout before seeing an error.
+If you find you need more time for customizations to complete, set this to what you think you need, with a little overhead. But, don't set it too high because you might have to wait for it to timeout before seeing an error.
> [!NOTE]
> If you don't set the value to 0, the minimum supported value is 6 minutes. Using values 1 through 5 will fail.

## Properties: customize
-Image Builder supports multiple `customizers`. Customizers are functions that are used to customize your image, such as running scripts, or rebooting servers.
+Image Builder supports multiple `customizers`. Customizers are functions that are used to customize your image, such as running scripts, or rebooting servers.
+
+When using `customize`:
-When using `customize`:
- You can use multiple customizers
- Customizers execute in the order specified in the template.
- If one customizer fails, then the whole customization component will fail and report back an error.
- It is advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
-- don't put sensitive data in the scripts.
+- don't put sensitive data in the scripts.
- The script locations need to be publicly accessible, unless you're using [MSI](./image-builder-user-assigned-identity.md).

```json
- "customize": [
- {
- "type": "Shell",
- "name": "<name>",
- "scriptUri": "<path to script>",
- "sha256Checksum": "<sha256 checksum>"
- },
- {
- "type": "Shell",
- "name": "<name>",
- "inline": [
- "<command to run inline>",
- ]
- }
-
- ],
-```
-
-
-The customize section is an array. Azure Image Builder will run through the customizers in sequential order. Any failure in any customizer will fail the build process.
+"customize": [
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+ "sha256Checksum": "<sha256 checksum>"
+ },
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "inline": [
+ "<command to run inline>",
+ ]
+ }
+],
+```
+
+The customize section is an array. Azure Image Builder will run through the customizers in sequential order. Any failure in any customizer will fail the build process.
> [!NOTE]
> Inline commands can be viewed in the image template definition. If you have sensitive information (including passwords, SAS token, authentication tokens, etc), it should be moved into scripts in Azure Storage, where access requires authentication.
-
+
### Shell customizer

The shell customizer supports running shell scripts. The shell scripts must be publicly accessible or you must have configured an [MSI](./image-builder-user-assigned-identity.md) for Image Builder to access them.

```json
- "customize": [
- {
- "type": "Shell",
- "name": "<name>",
- "scriptUri": "<link to script>",
- "sha256Checksum": "<sha256 checksum>"
- },
- ],
- "customize": [
- {
- "type": "Shell",
- "name": "<name>",
- "inline": "<commands to run>"
- },
- ],
+"customize": [
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "scriptUri": "<link to script>",
+ "sha256Checksum": "<sha256 checksum>"
+ },
+],
+"customize": [
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "inline": "<commands to run>"
+ },
+],
```
-OS Support: Linux
-
+OS Support: Linux
+
Customize properties:

-- **type** – Shell
-- **name** - name for tracking the customization
-- **scriptUri** - URI to the location of the file
+- **type** – Shell
+- **name** - name for tracking the customization
+- **scriptUri** - URI to the location of the file
- **inline** - array of shell commands, separated by commas.
- **sha256Checksum** - Value of sha256 checksum of the file, you generate this locally, and then Image Builder will checksum and validate.
- * To generate the sha256Checksum, using a terminal on Mac/Linux run: `sha256sum <fileName>`
+
+ To generate the sha256Checksum, using a terminal on Mac/Linux run: `sha256sum <fileName>`
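For example, assuming a local script named customize.sh (a hypothetical file name):

```bash
# Compute the checksum locally; paste the first field of the output
# into the template's sha256Checksum property
sha256sum ./customize.sh
```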
> [!NOTE]
> Inline commands are stored as part of the image template definition, you can see these when you dump out the image definition. If you have sensitive commands or values (including passwords, SAS token, authentication tokens etc), it is recommended these are moved into scripts, and use a user identity to authenticate to Azure Storage.

#### Super user privileges
+
For commands to run with super user privileges, they must be prefixed with `sudo`. You can add these into scripts or use them in inline commands, for example:
+
```json
- "type": "Shell",
- "name": "setupBuildPath",
- "inline": [
- "sudo mkdir /buildArtifacts",
- "sudo cp /tmp/https://docsupdatetracker.net/index.html /buildArtifacts/https://docsupdatetracker.net/index.html"
+"type": "Shell",
+"name": "setupBuildPath",
+"inline": [
+ "sudo mkdir /buildArtifacts",
+ "sudo cp /tmp/https://docsupdatetracker.net/index.html /buildArtifacts/https://docsupdatetracker.net/index.html"
+]
```
+
Example of a script using sudo that you can reference using scriptUri:
+
```bash
#!/bin/bash -e
echo "Telemetry: running sudo 'as-is' in a script"
sudo touch /myfiles/somethingElevated.txt
```
-### Windows restart customizer
-The Restart customizer allows you to restart a Windows VM and wait for it come back online, this allows you to install software that requires a reboot.
+### Windows restart customizer
-```json
- "customize": [
+The Restart customizer allows you to restart a Windows VM and wait for it to come back online, allowing you to install software that requires a reboot.
- {
- "type": "WindowsRestart",
- "restartCommand": "shutdown /r /f /t 0",
- "restartCheckCommand": "echo Azure-Image-Builder-Restarted-the-VM > c:\\buildArtifacts\\azureImageBuilderRestart.txt",
- "restartTimeout": "5m"
- }
-
- ],
+```json
+"customize": [
+ {
+ "type": "WindowsRestart",
+ "restartCommand": "shutdown /r /f /t 0",
+ "restartCheckCommand": "echo Azure-Image-Builder-Restarted-the-VM > c:\\buildArtifacts\\azureImageBuilderRestart.txt",
+ "restartTimeout": "5m"
+ }
+],
```

OS Support: Windows
-
+
Customize properties:
+
- **Type**: WindowsRestart
- **restartCommand** - Command to execute the restart (optional). The default is `'shutdown /r /f /t 0 /c \"packer restart\"'`.
-- **restartCheckCommand** – Command to check if restart succeeded (optional).
+- **restartCheckCommand** – Command to check if restart succeeded (optional).
- **restartTimeout** - Restart timeout specified as a string of magnitude and unit. For example, `5m` (5 minutes) or `2h` (2 hours). The default is: '5m'
-### Linux restart
+### Linux restart
+
There is no Linux restart customizer. If you're installing drivers, or components that require a restart, you can install them and invoke a restart using the Shell customizer. There is a 20min SSH timeout to the build VM.
-### PowerShell customizer
+### PowerShell customizer
+
The shell customizer supports running PowerShell scripts and inline commands, the scripts must be publicly accessible for the IB to access them.
-```json
- "customize": [
- {
- "type": "PowerShell",
- "name": "<name>",
- "scriptUri": "<path to script>",
- "runElevated": <true false>,
- "sha256Checksum": "<sha256 checksum>"
- },
- {
- "type": "PowerShell",
- "name": "<name>",
- "inline": "<PowerShell syntax to run>",
- "validExitCodes": "<exit code>",
- "runElevated": <true or false>
- }
- ],
+```json
+"customize": [
+ {
+ "type": "PowerShell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+ "runElevated": <true false>,
+ "sha256Checksum": "<sha256 checksum>"
+ },
+ {
+ "type": "PowerShell",
+ "name": "<name>",
+ "inline": "<PowerShell syntax to run>",
+ "validExitCodes": "<exit code>",
+ "runElevated": <true or false>
+ }
+],
```

OS support: Windows
OS support: Windows
Customize properties:

- **type** – PowerShell.
-- **scriptUri** - URI to the location of the PowerShell script file.
+- **scriptUri** - URI to the location of the PowerShell script file.
- **inline** – Inline commands to be run, separated by commas.
- **validExitCodes** – Optional, valid codes that can be returned from the script/inline command, this will avoid reported failure of the script/inline command.
- **runElevated** – Optional, boolean, support for running commands and scripts with elevated permissions.
- **sha256Checksum** - Value of sha256 checksum of the file, you generate this locally, and then Image Builder will checksum and validate.
- * To generate the sha256Checksum, using a PowerShell on Windows [Get-Hash](/powershell/module/microsoft.powershell.utility/get-filehash)
+To generate the sha256Checksum, use the PowerShell [Get-FileHash](/powershell/module/microsoft.powershell.utility/get-filehash) cmdlet on Windows, for example: `(Get-FileHash -Algorithm SHA256 .\script.ps1).Hash`.
### File customizer
-The File customizer lets Image Builder download a file from a GitHub repo or Azure storage. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
+The File customizer lets Image Builder download a file from a GitHub repo or Azure storage. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
```json
- "customize": [
- {
- "type": "File",
- "name": "<name>",
- "sourceUri": "<source location>",
- "destination": "<destination>",
- "sha256Checksum": "<sha256 checksum>"
- }
- ]
+"customize": [
+ {
+ "type": "File",
+ "name": "<name>",
+ "sourceUri": "<source location>",
+ "destination": "<destination>",
+ "sha256Checksum": "<sha256 checksum>"
+ }
+]
```
-OS support: Linux and Windows
+OS support: Linux and Windows
File customizer properties:

-- **sourceUri** - an accessible storage endpoint, this can be GitHub or Azure storage. You can only download one file, not an entire directory. If you need to download a directory, use a compressed file, then uncompress it using the Shell or PowerShell customizers.
+- **sourceUri** - an accessible storage endpoint, this can be GitHub or Azure storage. You can only download one file, not an entire directory. If you need to download a directory, use a compressed file, then uncompress it using the Shell or PowerShell customizers.
> [!NOTE]
> If the sourceUri is an Azure storage account, irrespective of whether the blob is marked public, you will need to grant the managed user identity permissions to read access on the blob. See this [example](./image-builder-user-assigned-identity.md#create-a-resource-group) to set the storage permissions.

-- **destination** – this is the full destination path and file name. Any referenced path and subdirectories must exist, use the Shell or PowerShell customizers to set these up beforehand. You can use the script customizers to create the path.
+- **destination** – this is the full destination path and file name. Any referenced path and subdirectories must exist; use the Shell or PowerShell customizers to set these up beforehand. You can use the script customizers to create the path, as shown in the sketch after the list below.
+
+This is supported by Windows directories and Linux paths, but there are some differences:
-This is supported by Windows directories and Linux paths, but there are some differences:
- Linux OSs – the only path Image Builder can write to is /tmp.
- Windows – No path restriction, but the path must exist.
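
A sketch of creating the destination path ahead of the File customizer on Windows; the customizer names, path, and file name here are illustrative, not values from this article:

```json
"customize": [
    {
        "type": "PowerShell",
        "name": "createDestinationPath",
        "inline": [
            "mkdir c:\\buildArtifacts"
        ]
    },
    {
        "type": "File",
        "name": "downloadArtifact",
        "sourceUri": "<source location>",
        "destination": "c:\\buildArtifacts\\index.html"
    }
],
```
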
-
If there is an error trying to download the file, or put it in a specified directory, then the customize step will fail, and the error will be in the customization.log.
If there is an error trying to download the file, or put it in a specified direc
> The file customizer is only suitable for small file downloads, < 20 MB. For larger file downloads, use a script or inline command, then use code to download files, such as Linux `wget` or `curl`, or Windows `Invoke-WebRequest`.

### Windows Update Customizer
+
This customizer is built on the [community Windows Update Provisioner](https://packer.io/docs/provisioners/community-supported.html) for Packer, which is an open source project maintained by the Packer community. Microsoft tests and validates the provisioner with the Image Builder service, and will support investigating issues with it and work to resolve them; however, the open source project is not officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, see the project repository.

```json
- "customize": [
- {
- "type": "WindowsUpdate",
- "searchCriteria": "IsInstalled=0",
- "filters": [
- "exclude:$_.Title -like '*Preview*'",
- "include:$true"
- ],
- "updateLimit": 20
- }
- ],
+"customize": [
+ {
+ "type": "WindowsUpdate",
+ "searchCriteria": "IsInstalled=0",
+ "filters": [
+ "exclude:$_.Title -like '*Preview*'",
+ "include:$true"
+ ],
+ "updateLimit": 20
+ }
+],
```

OS support: Windows

Customizer properties:
+
- **type** – WindowsUpdate.
- **searchCriteria** - Optional, defines which type of updates are installed (like Recommended or Important). BrowseOnly=0 and IsInstalled=0 (Recommended) is the default.
- **filters** – Optional, allows you to specify a filter to include or exclude updates.
- **updateLimit** – Optional, defines how many updates can be installed. The default is 1000.
-
+
> [!NOTE]
> The Windows Update customizer can fail if there are any outstanding Windows restarts or application installations still running. Typically you may see this error in the customization.log: `System.Runtime.InteropServices.COMException (0x80240016): Exception from HRESULT: 0x80240016`. We strongly advise you to consider adding a Windows Restart and/or allowing applications enough time to complete their installations, using [sleep](/powershell/module/microsoft.powershell.utility/start-sleep) or wait commands in the inline commands or scripts, before running Windows Update.
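
A sketch of that ordering, placing a `WindowsRestart` customizer immediately before `WindowsUpdate`; the timeout and property values are illustrative:

```json
"customize": [
    {
        "type": "WindowsRestart",
        "restartTimeout": "10m"
    },
    {
        "type": "WindowsUpdate",
        "searchCriteria": "IsInstalled=0",
        "filters": [
            "exclude:$_.Title -like '*Preview*'"
        ],
        "updateLimit": 20
    }
],
```
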
-### Generalize
-By default, Azure Image Builder will also run `deprovision` code at the end of each image customization phase, to generalize the image. Generalizing is a process where the image is set up so it can be reused to create multiple VMs. For Windows VMs, Azure Image Builder uses Sysprep. For Linux, Azure Image Builder runs `waagent -deprovision`.
+### Generalize
+
+By default, Azure Image Builder will also run `deprovision` code at the end of each image customization phase, to generalize the image. Generalizing is a process where the image is set up so it can be reused to create multiple VMs. For Windows VMs, Azure Image Builder uses Sysprep. For Linux, Azure Image Builder runs `waagent -deprovision`.
-The commands Image Builder users to generalize may not be suitable for every situation, so Azure Image Builder will allow you to customize this command, if needed.
+The commands Image Builder uses to generalize may not be suitable for every situation, so Azure Image Builder will allow you to customize this command, if needed.
If you're migrating existing customizations and you're using different Sysprep/waagent commands, you can use the Image Builder generic commands, and if the VM creation fails, use your own Sysprep or waagent commands. If Azure Image Builder creates a Windows custom image successfully, and you create a VM from it but then find that the VM creation fails or doesn't complete successfully, you will need to review the Windows Server Sysprep documentation or raise a support request with the Windows Server Sysprep Customer Services Support team, who can troubleshoot and advise on the correct Sysprep usage.
-
#### Default Sysprep command
+
```powershell
Write-Output '>>> Waiting for GA Service (RdAgent) to start ...'
while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }
while($true) {
}
Write-Output '>>> Sysprep complete ...'
```
+
#### Default Linux deprovision command

```bash
$WAAGENT -force -deprovision+user && export HISTSIZE=0 && sync
```

#### Overriding the Commands
+
To override the commands, use the PowerShell or Shell script provisioners to create the command files with the exact file name, and put them in the correct directories (a sketch follows the list below):
-* Windows: c:\DeprovisioningScript.ps1
-* Linux: /tmp/DeprovisioningScript.sh
+- Windows: c:\DeprovisioningScript.ps1
+- Linux: /tmp/DeprovisioningScript.sh
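
A minimal sketch of such an override for Windows, assuming an inline PowerShell customizer is acceptable for writing the file; the customizer name and the script content are placeholders, not the full default Sysprep script:

```json
"customize": [
    {
        "type": "PowerShell",
        "name": "overrideDeprovision",
        "runElevated": true,
        "inline": [
            "Set-Content -Path C:\\DeprovisioningScript.ps1 -Value '<your Sysprep commands>'"
        ]
    }
],
```
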
Image Builder will read these commands; they are written out to the AIB logs, `customization.log`. See [troubleshooting](image-builder-troubleshoot.md#customization-log) on how to collect logs.

## Properties: validate
+
You can use the `validate` property to validate platform images and any customized images you create, regardless of whether you used Azure Image Builder to create them. Azure Image Builder supports a 'Source-Validation-Only' mode that can be set using the `sourceValidationOnly` field. If the `sourceValidationOnly` field is set to true, the image specified in the `source` section will be validated directly. No separate build will be run to generate and then validate a customized image.
The `inVMValidations` field takes a list of validators that will be performed on
The `continueDistributeOnFailure` field is responsible for whether the output image(s) will be distributed if validation fails. If validation fails and this field is set to false, the output image(s) will not be distributed (this is the default behavior). If validation fails and this field is set to true, the output image(s) will still be distributed. Use this option with caution, as it may result in failed images being distributed for use. In either case (true or false), the end-to-end image run will be reported as failed if validation fails. This field has no effect on whether validation succeeds or not.
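
A sketch of a `validate` block that distributes even when validation fails, using only the fields described above; the validator name and inline command are placeholders:

```json
"validate": {
    "continueDistributeOnFailure": true,
    "sourceValidationOnly": false,
    "inVMValidations": [
        {
            "type": "Shell",
            "name": "smokeTest",
            "inline": [
                "<command to run inline>"
            ]
        }
    ]
},
```
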
-When using `validate`:
+When using `validate`:
+
- You can use multiple validators.
- Validators execute in the order specified in the template.
- If one validator fails, then the whole validation component will fail and report back an error.
- It is advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
-- Don't put sensitive data in the scripts.
+- Don't put sensitive data in the scripts.
- The script locations need to be publicly accessible, unless you're using [MSI](./image-builder-user-assigned-identity.md).

How to use the `validate` property to validate Windows images
-
+
```json
{
- "properties": {
- "validate": {
- "continueDistributeOnFailure": false,
- "sourceValidationOnly": false,
- "inVMValidations": [
- {
- "type": "PowerShell",
- "name": "test PowerShell validator inline",
- "inline": [
- "<command to run inline>"
- ],
- "validExitCodes": "<exit code>",
- "runElevated": <true or false>,
- "runAsSystem": <true or false>
- },
- {
- "type": "PowerShell",
- "name": "<name>",
- "scriptUri": "<path to script>",
- "runElevated": <true false>,
- "sha256Checksum": "<sha256 checksum>"
- }
- ]
- },
- }
+ "properties": {
+ "validate": {
+ "continueDistributeOnFailure": false,
+ "sourceValidationOnly": false,
+ "inVMValidations": [
+ {
+ "type": "PowerShell",
+ "name": "test PowerShell validator inline",
+ "inline": [
+ "<command to run inline>"
+ ],
+ "validExitCodes": "<exit code>",
+ "runElevated": <true or false>,
+ "runAsSystem": <true or false>
+ },
+ {
+ "type": "PowerShell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+ "runElevated": <true false>,
+ "sha256Checksum": "<sha256 checksum>"
+ }
+ ]
+ },
+ }
}
```
How to use the `validate` property to validate Windows images
- **type** – PowerShell.
- **name** - name of the validator.
-- **scriptUri** - URI of the PowerShell script file.
+- **scriptUri** - URI of the PowerShell script file.
- **inline** – array of commands to be run, separated by commas.
- **validExitCodes** – Optional, valid codes that can be returned from the script/inline command; this will avoid reported failure of the script/inline command.
- **runElevated** – Optional, boolean, support for running commands and scripts with elevated permissions.
- **sha256Checksum** - Value of the sha256 checksum of the file. You generate this locally, and then Image Builder will checksum and validate.
- * To generate the sha256Checksum, using a PowerShell on Windows [Get-Hash](/powershell/module/microsoft.powershell.utility/get-filehash)
+
+To generate the sha256Checksum, use the PowerShell [Get-FileHash](/powershell/module/microsoft.powershell.utility/get-filehash) cmdlet on Windows.
How to use the `validate` property to validate Linux images
-
+
```json
{
- "properties": {
- "validate": {
- "continueDistributeOnFailure": false,
- "sourceValidationOnly": false,
- "inVMValidations": [
- {
- "type": "Shell",
- "name": "<name>",
- "inline": [
- "<command to run inline>"
- ]
- },
- {
- "type": "Shell",
- "name": "<name>",
- "scriptUri": "<path to script>",
- "sha256Checksum": "<sha256 checksum>"
- }
- ]
- },
- }
+ "properties": {
+ "validate": {
+ "continueDistributeOnFailure": false,
+ "sourceValidationOnly": false,
+ "inVMValidations": [
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "inline": [
+ "<command to run inline>"
+ ]
+ },
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+ "sha256Checksum": "<sha256 checksum>"
+ }
+ ]
+ },
+ }
}
```

`inVMValidations` properties:

-- **type** – Shell
+- **type** – Shell
- **name** - name of the validator.
-- **scriptUri** - URI of the script file.
+- **scriptUri** - URI of the script file
- **inline** - array of commands to be run, separated by commas.
- **sha256Checksum** - Value of the sha256 checksum of the file. You generate this locally, and then Image Builder will checksum and validate.
- * To generate the sha256Checksum, using a terminal on Mac/Linux run: `sha256sum <fileName>`
-
+
+To generate the sha256Checksum, run `sha256sum <fileName>` in a terminal on Mac/Linux.
+
+## Properties: distribute
-Azure Image Builder supports three distribution targets:
+Azure Image Builder supports three distribution targets:
- **managedImage** - managed image.
- **sharedImage** - Azure Compute Gallery.
- **VHD** - VHD in a storage account.
az resource show \
```

Output:
+
```json
{
  "id": "/subscriptions/xxxxxx/resourcegroups/rheltest/providers/Microsoft.VirtualMachineImages/imageTemplates/ImageTemplateLinuxRHEL77/runOutputs/rhel77",
The image output will be a managed image resource.
```json
{
- "type":"managedImage",
- "imageId": "<resource ID>",
- "location": "<region>",
- "runOutputName": "<name>",
- "artifactTags": {
- "<name>": "<value>",
- "<name>": "<value>"
- }
+ "type":"managedImage",
+ "imageId": "<resource ID>",
+ "location": "<region>",
+ "runOutputName": "<name>",
+ "artifactTags": {
+ "<name>": "<value>",
+ "<name>": "<value>"
+ }
}
```
-
+
Distribute properties:

-- **type** – managedImage
+- **type** – managedImage
- **imageId** – Resource ID of the destination image, expected format: /subscriptions/\<subscriptionId>/resourceGroups/\<destinationResourceGroupName>/providers/Microsoft.Compute/images/\<imageName>
-- **location** - location of the managed image.
-- **runOutputName** – unique name for identifying the distribution.
+- **location** - location of the managed image.
+- **runOutputName** – unique name for identifying the distribution.
- **artifactTags** - Optional user specified key\value tags.
-
-
+
> [!NOTE]
> The destination resource group must exist.
-> If you want the image distributed to a different region, it will increase the deployment time.
+> If you want the image distributed to a different region, it will increase the deployment time.
+
+### Distribute: sharedImage
+
+The Azure Compute Gallery is an image management service that allows managing image region replication, versioning, and sharing of custom images. Azure Image Builder supports distributing with this service, so you can distribute images to regions supported by Azure Compute Galleries.
+
+An Azure Compute Gallery is made up of:
-### Distribute: sharedImage
-The Azure Compute Gallery is a new Image Management service that allows managing of image region replication, versioning and sharing custom images. Azure Image Builder supports distributing with this service, so you can distribute images to regions supported by Azure Compute Galleries.
-
-an Azure Compute Gallery is made up of:
-
- Gallery - Container for multiple images. A gallery is deployed in one region.
-- Image definitions - a conceptual grouping for images.
+- Image definitions - a conceptual grouping for images.
- Image versions - this is an image type used for deploying a VM or scale set. Image versions can be replicated to other regions where VMs need to be deployed.
-
-Before you can distribute to the gallery, you must create a gallery and an image definition, see [Create a gallery](../create-gallery.md).
+
+Before you can distribute to the gallery, you must create a gallery and an image definition, see [Create a gallery](../create-gallery.md).
```json
{
- "type": "SharedImage",
- "galleryImageId": "<resource ID>",
- "runOutputName": "<name>",
- "artifactTags": {
- "<name>": "<value>",
- "<name>": "<value>"
- },
- "replicationRegions": [
- "<region where the gallery is deployed>",
- "<region>"
- ]
+ "type": "SharedImage",
+ "galleryImageId": "<resource ID>",
+ "runOutputName": "<name>",
+ "artifactTags": {
+ "<name>": "<value>",
+ "<name>": "<value>"
+ },
+ "replicationRegions": [
+ "<region where the gallery is deployed>",
+ "<region>"
+ ]
}
-```
+```
Distribute properties for galleries:

-- **type** - sharedImage
+- **type** - sharedImage
- **galleryImageId** – ID of the Azure Compute Gallery; this can be specified in two formats:
- * Automatic versioning - Image Builder will generate a monotonic version number for you, this is useful for when you want to keep rebuilding images from the same template: The format is: `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageGalleryName>`.
- * Explicit versioning - You can pass in the version number you want image builder to use. The format is:
- `/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>/versions/<version - for example: 1.1.1>`
-- **runOutputName** ΓÇô unique name for identifying the distribution.
+  - Automatic versioning - Image Builder will generate a monotonic version number for you; this is useful when you want to keep rebuilding images from the same template. The format is: `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageGalleryName>`.
+ - Explicit versioning - You can pass in the version number you want image builder to use. The format is:
+ `/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>/versions/<version - for example: 1.1.1>`
+
+- **runOutputName** – unique name for identifying the distribution.
- **artifactTags** - Optional user specified key\value tags.
- **replicationRegions** - Array of regions for replication. One of the regions must be the region where the Gallery is deployed. Adding regions will mean an increase of build time, as the build doesn't complete until the replication has completed.
- **excludeFromLatest** (optional) - Allows you to mark the image version you create so that it isn't used as the latest version in the gallery definition; the default is 'false'.
- **storageAccountType** (optional) - AIB supports specifying these types of storage for the image version that is to be created (see the sketch after this list):
- * "Standard_LRS"
- * "Standard_ZRS"
+ - "Standard_LRS"
+ - "Standard_ZRS"
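
A sketch of a `sharedImage` distribute block that sets both optional fields described above; placeholder values follow the earlier example:

```json
{
    "type": "SharedImage",
    "galleryImageId": "<resource ID>",
    "runOutputName": "<name>",
    "replicationRegions": [
        "<region where the gallery is deployed>"
    ],
    "excludeFromLatest": true,
    "storageAccountType": "Standard_ZRS"
}
```
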
> [!NOTE]
> If the image template and referenced `image definition` are not in the same location, you will see additional time to create images. Image Builder currently doesn't have a `location` parameter for the image version resource; it's taken from the parent `image definition`. For example, if an image definition is in westus and you want the image version replicated to eastus, a blob is copied to westus, an image version resource in westus is created from it, and it's then replicated to eastus. To avoid the additional replication time, ensure the `image definition` and image template are in the same location.
+### Distribute: VHD
-### Distribute: VHD
-You can output to a VHD. You can then copy the VHD, and use it to publish to Azure MarketPlace, or use with Azure Stack.
+You can output to a VHD. You can then copy the VHD and use it to publish to Azure Marketplace, or use it with Azure Stack.
```json
-{
- "type": "VHD",
- "runOutputName": "<VHD name>",
- "artifactTags": {
- "<name>": "<value>",
- "<name>": "<value>"
- }
+{
+ "type": "VHD",
+ "runOutputName": "<VHD name>",
+ "artifactTags": {
+ "<name>": "<value>",
+ "<name>": "<value>"
+ }
}
```
-
+
OS Support: Windows and Linux

Distribute VHD parameters:

- **type** - VHD.
-- **runOutputName** – unique name for identifying the distribution.
+- **runOutputName** – unique name for identifying the distribution.
- **tags** - Optional user specified key value pair tags.
-
-Azure Image Builder doesn't allow the user to specify a storage account location, but you can query the status of the `runOutputs` to get the location.
+
+Azure Image Builder doesn't allow the user to specify a storage account location, but you can query the status of the `runOutputs` to get the location.
```azurecli-interactive
az resource show \
- --ids "/subscriptions/$subscriptionId/resourcegroups/<imageResourceGroup>/providers/Microsoft.VirtualMachineImages/imageTemplates/<imageTemplateName>/runOutputs/<runOutputName>" | grep artifactUri
+ --ids "/subscriptions/$subscriptionId/resourcegroups/<imageResourceGroup>/providers/Microsoft.VirtualMachineImages/imageTemplates/<imageTemplateName>/runOutputs/<runOutputName>" | grep artifactUri
```

> [!NOTE]
-> Once the VHD has been created, copy it to a different location, as soon as possible. The VHD is stored in a storage account in the temporary resource group created when the image template is submitted to the Azure Image Builder service. If you delete the image template, then you will lose the VHD.
+> Once the VHD has been created, copy it to a different location as soon as possible. The VHD is stored in a storage account in the temporary resource group created when the image template is submitted to the Azure Image Builder service. If you delete the image template, you will lose the VHD.
## Image Template Operations

### Starting an Image Build
+
To start a build, you need to invoke 'Run' on the Image Template resource. Examples of `run` commands:

```PowerShell
Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2021-10-01" -Action Run -Force
```
-
```azurecli
az resource invoke-action \
- --resource-group $imageResourceGroup \
- --resource-type Microsoft.VirtualMachineImages/imageTemplates \
- -n helloImageTemplateLinux01 \
- --action Run
+ --resource-group $imageResourceGroup \
+ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
+ -n helloImageTemplateLinux01 \
+ --action Run
```

### Cancelling an Image Build
+
If you're running an image build that you believe is incorrect, is waiting for user input, or that you feel will never complete successfully, you can cancel the build. The build can be canceled at any time. If the distribution phase has started, you can still cancel, but you will need to clean up any images that may not be completed. The cancel command doesn't wait for the cancellation to complete; monitor `lastrunstatus.runstate` for canceling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
-
Examples of `cancel` commands:

```powerShell
az resource invoke-action \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateLinux01 \
- --action Cancel
+ --action Cancel
```

## Next steps
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder.md
Title: Use Azure Image Builder with an Azure Compute Gallery for Linux VMs
-description: Create Linux VM images with Azure Image Builder and Azure Compute Gallery.
+ Title: Use Azure VM Image Builder with an Azure Compute Gallery for Linux VMs
+description: Create Linux VM images with Azure VM Image Builder and Azure Compute Gallery.
-# Create a Linux image and distribute it to an Azure Compute Gallery by using Azure CLI
+# Create a Linux image and distribute it to an Azure Compute Gallery by using the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article shows you how you can use the Azure Image Builder, and the Azure CLI, to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery), then distribute the image globally. You can also do this using [Azure PowerShell](../windows/image-builder-gallery.md).
+In this article, you learn how to use Azure VM Image Builder and the Azure CLI to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly Shared Image Gallery) and then distribute the image globally. You can also create an image version by using [Azure PowerShell](../windows/image-builder-gallery.md).
-We will be using a sample .json template to configure the image. The .json file we are using is here: [helloImageTemplateforSIG.json](https://github.com/danielsollondon/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
+This article uses a sample JSON template to configure the image. The JSON file is at [helloImageTemplateforSIG.json](https://github.com/danielsollondon/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template.

## Register the features
-To use Azure Image Builder, you need to register the new feature.
-
-Check your registration.
+To use VM Image Builder, you need to register the feature. Check your registration by running the following commands:
```azurecli-interactive
az provider show -n Microsoft.VirtualMachineImages -o json | grep registrationState
az provider show -n Microsoft.Storage -o json | grep registrationState
az provider show -n Microsoft.Network -o json | grep registrationState
```
-If they do not say registered, run the following:
+If the output doesn't say *registered*, run the following commands:
```azurecli-interactive az provider register -n Microsoft.VirtualMachineImages
az provider register -n Microsoft.Network
## Set variables and permissions
-We will be using some pieces of information repeatedly, so we will create some variables to store that information.
+Because you'll be using some pieces of information repeatedly, create some variables to store that information.
-Image builder only supports creating custom images in the same Resource Group as the source managed image. Update the resource group name in this example to be the same resource group as your source managed image.
+VM Image Builder supports creating custom images only in the same resource group as the source managed image. In the following example, update the resource group name to be the same resource group as your source managed image.
```azurecli-interactive
-# Resource group name - we are using ibLinuxGalleryRG in this example
+# Resource group name - ibLinuxGalleryRG in this example
sigResourceGroup=ibLinuxGalleryRG
-# Datacenter location - we are using West US 2 in this example
+# Datacenter location - West US 2 in this example
location=westus2
-# Additional region to replicate the image to - we are using East US in this example
+# Additional region to replicate the image to - East US in this example
additionalregion=eastus
-# name of the Azure Compute Gallery - in this example we are using myGallery
+# Name of the Azure Compute Gallery - myGallery in this example
sigName=myIbGallery
-# name of the image definition to be created - in this example we are using myImageDef
+# Name of the image definition to be created - myImageDef in this example
imageDefName=myIbImageDef
-# image distribution metadata reference name
+# Reference name in the image distribution metadata
runOutputName=aibLinuxSIG
```
-Create a variable for your subscription ID.
+Create a variable for your subscription ID:
```azurecli-interactive
subscriptionID=$(az account show --query id --output tsv)
```
-Create the resource group.
+Create the resource group:
```azurecli-interactive
az group create -n $sigResourceGroup -l $location
```

## Create a user-assigned identity and set permissions on the resource group
-Image Builder will use the [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) provided to inject the image into the Azure Compute Gallery (SIG). In this example, you will create an Azure role definition that has the granular actions to perform distributing the image to the SIG. The role definition will then be assigned to the user-identity.
+
+VM Image Builder uses the provided [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) to inject the image into an Azure Compute Gallery. In this example, you create an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
```bash
-# create user assigned identity for image builder to access the storage account where the script is located
+# Create user-assigned identity for VM Image Builder to access the storage account where the script is stored
identityName=aibBuiUserId$(date +'%s')
az identity create -g $sigResourceGroup -n $identityName
-# get identity id
+# Get the identity ID
imgBuilderCliId=$(az identity show -g $sigResourceGroup -n $identityName --query clientId -o tsv)
-# get the user identity URI, needed for the template
+# Get the user identity URI that's needed for the template
imgBuilderId=/subscriptions/$subscriptionID/resourcegroups/$sigResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$identityName
-# this command will download an Azure role definition template, and update the template with the parameters specified earlier.
+# Download an Azure role-definition template, and update the template with the parameters that were specified earlier
curl https://raw.githubusercontent.com/Azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json -o aibRoleImageCreation.json

imageRoleDefName="Azure Image Builder Image Def"$(date +'%s')
-# update the definition
+# Update the definition
sed -i -e "s/<subscriptionID>/$subscriptionID/g" aibRoleImageCreation.json
sed -i -e "s/<rgName>/$sigResourceGroup/g" aibRoleImageCreation.json
sed -i -e "s/Azure Image Builder Service Image Creation Role/$imageRoleDefName/g" aibRoleImageCreation.json
-# create role definitions
+# Create role definitions
az role definition create --role-definition ./aibRoleImageCreation.json
-# grant role definition to the user assigned identity
+# Grant a role definition to the user-assigned identity
az role assignment create \
    --assignee $imgBuilderCliId \
    --role "$imageRoleDefName" \
az role assignment create \
## Create an image definition and gallery
-To use Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. Image Builder will not create the gallery and image definition for you.
+To use VM Image Builder with Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you.
-If you don't already have a gallery and image definition to use, start by creating them. First, create a gallery.
+If you don't already have a gallery and image definition to use, start by creating them.
+
+First, create a gallery:
```azurecli-interactive
az sig create \
az sig create \
--gallery-name $sigName
```
-Then, create an image definition.
+Then, create an image definition:
```azurecli-interactive
az sig image-definition create \
az sig image-definition create \
```
-## Download and configure the .json
+## Download and configure the JSON file
-Download the .json template and configure it with your variables.
+Download the JSON template and configure it with your variables:
```azurecli-interactive
curl https://raw.githubusercontent.com/Azure/azvmimagebuilder/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json -o helloImageTemplateforSIG.json
sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateforSIG.json
## Create the image version
-This next part will create the image version in the gallery.
+In this section, you create the image version in the gallery.
-Submit the image configuration to the Azure Image Builder service.
+Submit the image configuration to the Azure VM Image Builder service:
```azurecli-interactive
az resource create \
az resource create \
-n helloImageTemplateforSIG01
```
-Start the image build.
+Start the image build:
```azurecli-interactive
az resource invoke-action \
az resource invoke-action \
--action Run
```
-Creating the image and replicating it to both regions can take a while. Wait until this part is finished before moving on to creating a VM.
+It can take a few moments to create the image and replicate it to both regions. Wait until this part is finished before you move on to create a VM.
## Create the VM
-Create a VM from the image version that was created by Azure Image Builder.
+Create the VM from the image version that was created by VM Image Builder.
```azurecli-interactive
az vm create \
az vm create \
--generate-ssh-keys
```
-SSH into the VM.
+Connect to the VM via Secure Shell (SSH):
```azurecli-interactive
ssh aibuser@<publicIpAddress>
```
-You should see the image was customized with a *Message of the Day* as soon as your SSH connection is established!
+As soon as your SSH connection is established, you should see that the image was customized with a *Message of the Day*:
```console
*******************************************************
You should see the image was customized with a *Message of the Day* as soon as y
*******************************************************
```
-## Clean up resources
+## Clean up your resources
-If you want to now try re-customizing the image version to create a new version of the same image, skip the next steps and go on to [Use Azure Image Builder to create another image version](image-builder-gallery-update-image-version.md).
+> [!NOTE]
+> If you now want to try to recustomize the image version to create a new version of the same image, *skip the step outlined here* and go to [Use VM Image Builder to create another image version](image-builder-gallery-update-image-version.md).
+If you no longer need the resources that were created as you followed the process in this article, you can delete them by doing the following.
-This will delete the image that was created, along with all of the other resource files. Make sure you are finished with this deployment before deleting the resources.
+This process deletes both the image that you created and all the other resource files. Make sure that you've finished this deployment before you delete the resources.
-When deleting gallery resources, you need delete all of the image versions before you can delete the image definition used to create them. To delete a gallery, you first need to have deleted all of the image definitions in the gallery.
+When you're deleting gallery resources, you need to delete all the image versions before you can delete the image definition that was used to create them. To delete a gallery, you first need to have deleted all the image definitions in the gallery.
-Delete the image builder template.
+1. Delete the VM Image Builder template.
-```azurecli-interactive
-az resource delete \
- --resource-group $sigResourceGroup \
- --resource-type Microsoft.VirtualMachineImages/imageTemplates \
- -n helloImageTemplateforSIG01
-```
+ ```azurecli-interactive
+ az resource delete \
+ --resource-group $sigResourceGroup \
+ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
+ -n helloImageTemplateforSIG01
+ ```
-Delete permissions assignments, roles and identity
-```azurecli-interactive
-az role assignment delete \
- --assignee $imgBuilderCliId \
- --role "$imageRoleDefName" \
- --scope /subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup
+1. Delete permissions assignments, roles, and identity.
-az role definition delete --name "$imageRoleDefName"
+ ```azurecli-interactive
+ az role assignment delete \
+ --assignee $imgBuilderCliId \
+ --role "$imageRoleDefName" \
+ --scope /subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup
-az identity delete --ids $imgBuilderId
-```
+ az role definition delete --name "$imageRoleDefName"
-Get the image version created by image builder, this always starts with `0.`, and then delete the image version
+ az identity delete --ids $imgBuilderId
+ ```
-```azurecli-interactive
-sigDefImgVersion=$(az sig image-version list \
- -g $sigResourceGroup \
- --gallery-name $sigName \
- --gallery-image-definition $imageDefName \
- --subscription $subscriptionID --query [].'name' -o json | grep 0. | tr -d '"')
-az sig image-version delete \
- -g $sigResourceGroup \
- --gallery-image-version $sigDefImgVersion \
- --gallery-name $sigName \
- --gallery-image-definition $imageDefName \
- --subscription $subscriptionID
-```
+1. Get the image version that was created by VM Image Builder (it always starts with `0.`), and then delete it.
+ ```azurecli-interactive
+ sigDefImgVersion=$(az sig image-version list \
+ -g $sigResourceGroup \
+ --gallery-name $sigName \
+ --gallery-image-definition $imageDefName \
+ --subscription $subscriptionID --query [].'name' -o json | grep 0. | tr -d '"')
+ az sig image-version delete \
+ -g $sigResourceGroup \
+ --gallery-image-version $sigDefImgVersion \
+ --gallery-name $sigName \
+ --gallery-image-definition $imageDefName \
+ --subscription $subscriptionID
+ ```
-Delete the image definition.
+1. Delete the image definition.
-```azurecli-interactive
-az sig image-definition delete \
- -g $sigResourceGroup \
- --gallery-name $sigName \
- --gallery-image-definition $imageDefName \
- --subscription $subscriptionID
-```
+ ```azurecli-interactive
+ az sig image-definition delete \
+ -g $sigResourceGroup \
+ --gallery-name $sigName \
+ --gallery-image-definition $imageDefName \
+ --subscription $subscriptionID
+ ```
-Delete the gallery.
+1. Delete the gallery.
-```azurecli-interactive
-az sig delete -r $sigName -g $sigResourceGroup
-```
+ ```azurecli-interactive
+ az sig delete -r $sigName -g $sigResourceGroup
+ ```
-Delete the resource group.
+1. Delete the resource group.
-```azurecli-interactive
-az group delete -n $sigResourceGroup -y
-```
+ ```azurecli-interactive
+ az group delete -n $sigResourceGroup -y
+ ```
## Next steps
-Learn more about [Azure Compute Galleries](../shared-image-galleries.md).
+Learn more about [Azure Compute Gallery](../shared-image-galleries.md).
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
No additional cost to existing VM pricing.
- Azure Site Recovery
- Shared disk
- Ultra disk
-- Managed image
- Azure Dedicated Host
- Nested Virtualization
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/build-image-with-packer.md
$sp.AppId
To authenticate to Azure, you also need to obtain your Azure tenant and subscription IDs with [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription):

```powershell
-Get-AzSubscription
+$subName = "mySubscriptionName"
+$sub = Get-AzSubscription -SubscriptionName $subName
```
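
For example, a minimal sketch assuming `$sub` holds the subscription object returned above; `$tenantId` and `$subscriptionId` are illustrative variable names:

```powershell
# Capture the tenant and subscription IDs for later use with Packer
$tenantId = $sub.TenantId
$subscriptionId = $sub.Id
```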
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Title: Create a Windows VM with Azure Image Builder using PowerShell
-description: Create a Windows VM with the Azure Image Builder PowerShell module.
+ Title: Create a Windows VM with Azure VM Image Builder by using PowerShell
+description: In this article, you create a Windows VM by using the VM Image Builder PowerShell module.
-# Create a Windows VM with Azure Image Builder using PowerShell
+# Create a Windows VM with VM Image Builder by using PowerShell
**Applies to:** :heavy_check_mark: Windows VMs
-This article demonstrates how you can create a customized Windows image using the Azure VM Image
+This article demonstrates how to create a customized Windows VM image by using the Azure VM Image
Builder PowerShell module.

## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
-cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+If you choose to use PowerShell locally, this article requires that you install the Azure PowerShell
+module and connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
[!INCLUDE [cloud-shell-try-it](../../../includes/cloud-shell-try-it.md)]

If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription using the
+should be billed. Select a specific subscription by using the
[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.

```azurepowershell-interactive
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
### Register features
-Register the following resource providers for use with your Azure subscription if they
-aren't already registered.
+If you haven't already done so, register the following resource providers to use with your Azure subscription:
- Microsoft.Compute
- Microsoft.KeyVault
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute, Microsoft.KeyVault,
## Define variables
-You'll be using several pieces of information repeatedly. Create variables to store the information.
+Because you'll be using some pieces of information repeatedly, create some variables to store that information:
```azurepowershell-interactive
# Destination image resource group name
$imageTemplateName = 'myWinImage'
$runOutputName = 'myDistResults'
```
-Create a variable for your Azure subscription ID. To confirm that the `subscriptionID` variable
-contains your subscription ID, you can run the second line in the following example.
+Create a variable for your Azure subscription ID. To confirm that the `subscriptionID` variable contains your subscription ID, you can run the second line in the following example:
```azurepowershell-interactive
# Your Azure Subscription ID
Write-Output $subscriptionID
## Create a resource group
-Create an [Azure resource group](../../azure-resource-manager/management/overview.md)
-using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
-cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
-a group.
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md) by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as a group.
-The following example creates a resource group based on the name in the `$imageResourceGroup`
-variable in the region specified in the `$location` variable. This resource group is used to store
-the image configuration template artifact and the image.
+The following example creates a resource group that's based on the name in the `$imageResourceGroup` variable in the region that you've specified in the `$location` variable. This resource group is used to store the image configuration template artifact and the image.
```azurepowershell-interactive
New-AzResourceGroup -Name $imageResourceGroup -Location $location
```
-## Create user identity and set role permissions
+## Create a user identity and set role permissions
-Grant Azure image builder permissions to create images in the specified resource group using the
-following example. Without this permission, the image build process won't complete successfully.
+Grant VM Image Builder permissions to create images in the specified resource group by using the following example. Without this permission, the image build process won't finish successfully.
-Create variables for the role definition and identity names. These values must be unique.
+1. Create variables for the role definition and identity names. These values must be unique.
-```azurepowershell-interactive
-[int]$timeInt = $(Get-Date -UFormat '%s')
-$imageRoleDefName = "Azure Image Builder Image Def $timeInt"
-$identityName = "myIdentity$timeInt"
-```
+ ```azurepowershell-interactive
+ [int]$timeInt = $(Get-Date -UFormat '%s')
+ $imageRoleDefName = "Azure Image Builder Image Def $timeInt"
+ $identityName = "myIdentity$timeInt"
+ ```
-Create a user identity.
+1. Create a user identity.
-```azurepowershell-interactive
-New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
-```
+ ```azurepowershell-interactive
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
+ ```
-Store the identity resource and principal IDs in variables.
+1. Store the identity resource and principal IDs in variables.
-```azurepowershell-interactive
-$identityNameResourceId = (Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
-$identityNamePrincipalId = (Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
-```
+ ```azurepowershell-interactive
+ $identityNameResourceId = (Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
+ $identityNamePrincipalId = (Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
+ ```
-### Assign permissions for identity to distribute images
+### Assign permissions for the identity to distribute the images
-Download .json config file and modify it based on the settings defined in this article.
+1. Download the JSON configuration file, and then modify it based on the settings that are defined in this article.
-```azurepowershell-interactive
-$myRoleImageCreationUrl = 'https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json'
-$myRoleImageCreationPath = "$env:TEMP\myRoleImageCreation.json"
+ ```azurepowershell-interactive
+ $myRoleImageCreationUrl = 'https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json'
+ $myRoleImageCreationPath = "$env:TEMP\myRoleImageCreation.json"
-Invoke-WebRequest -Uri $myRoleImageCreationUrl -OutFile $myRoleImageCreationPath -UseBasicParsing
+ Invoke-WebRequest -Uri $myRoleImageCreationUrl -OutFile $myRoleImageCreationPath -UseBasicParsing
-$Content = Get-Content -Path $myRoleImageCreationPath -Raw
-$Content = $Content -replace '<subscriptionID>', $subscriptionID
-$Content = $Content -replace '<rgName>', $imageResourceGroup
-$Content = $Content -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName
-$Content | Out-File -FilePath $myRoleImageCreationPath -Force
-```
+ $Content = Get-Content -Path $myRoleImageCreationPath -Raw
+ $Content = $Content -replace '<subscriptionID>', $subscriptionID
+ $Content = $Content -replace '<rgName>', $imageResourceGroup
+ $Content = $Content -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName
+ $Content | Out-File -FilePath $myRoleImageCreationPath -Force
+ ```
-Create the role definition.
+1. Create the role definition.
-```azurepowershell-interactive
-New-AzRoleDefinition -InputFile $myRoleImageCreationPath
-```
+ ```azurepowershell-interactive
+ New-AzRoleDefinition -InputFile $myRoleImageCreationPath
+ ```
-Grant the role definition to the image builder service principal.
+1. Grant the role definition to the VM Image Builder service principal.
-```azurepowershell-interactive
-$RoleAssignParams = @{
- ObjectId = $identityNamePrincipalId
- RoleDefinitionName = $imageRoleDefName
- Scope = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
-}
-New-AzRoleAssignment @RoleAssignParams
-```
+ ```azurepowershell-interactive
+ $RoleAssignParams = @{
+ ObjectId = $identityNamePrincipalId
+ RoleDefinitionName = $imageRoleDefName
+ Scope = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ }
+ New-AzRoleAssignment @RoleAssignParams
+ ```
> [!NOTE]
-> If you receive the error: "_New-AzRoleDefinition: Role definition limit exceeded. No more role
-> definitions can be created._", see
-> [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md).
+> If you receive the error "New-AzRoleDefinition: Role definition limit exceeded. No more role definitions can be created," see [Troubleshoot Azure RBAC (role-based access control)](../../role-based-access-control/troubleshooting.md).
-## Create an Azure Compute Gallery (formerly known as Shared Image Gallery)
+## Create an Azure Compute Gallery
-Create the gallery.
+1. Create the gallery.
-```azurepowershell-interactive
-$myGalleryName = 'myImageGallery'
-$imageDefName = 'winSvrImages'
+ ```azurepowershell-interactive
+ $myGalleryName = 'myImageGallery'
+ $imageDefName = 'winSvrImages'
-New-AzGallery -GalleryName $myGalleryName -ResourceGroupName $imageResourceGroup -Location $location
-```
+ New-AzGallery -GalleryName $myGalleryName -ResourceGroupName $imageResourceGroup -Location $location
+ ```
-Create a gallery definition.
+1. Create a gallery definition.
-```azurepowershell-interactive
-$GalleryParams = @{
- GalleryName = $myGalleryName
- ResourceGroupName = $imageResourceGroup
- Location = $location
- Name = $imageDefName
- OsState = 'generalized'
- OsType = 'Windows'
- Publisher = 'myCo'
- Offer = 'Windows'
- Sku = 'Win2019'
-}
-New-AzGalleryImageDefinition @GalleryParams
-```
+ ```azurepowershell-interactive
+ $GalleryParams = @{
+ GalleryName = $myGalleryName
+ ResourceGroupName = $imageResourceGroup
+ Location = $location
+ Name = $imageDefName
+ OsState = 'generalized'
+ OsType = 'Windows'
+ Publisher = 'myCo'
+ Offer = 'Windows'
+ Sku = 'Win2019'
+ }
+ New-AzGalleryImageDefinition @GalleryParams
+ ```
## Create an image
-Create an Azure image builder source object. See
-[Find Windows VM images in the Azure Marketplace with Azure PowerShell](./cli-ps-findimage.md)
-for valid parameter values.
-
-```azurepowershell-interactive
-$SrcObjParams = @{
- SourceTypePlatformImage = $true
- Publisher = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Sku = '2019-Datacenter'
- Version = 'latest'
-}
-$srcPlatform = New-AzImageBuilderSourceObject @SrcObjParams
-```
-
-Create an Azure image builder distributor object.
-
-```azurepowershell-interactive
-$disObjParams = @{
- SharedImageDistributor = $true
- ArtifactTag = @{tag='dis-share'}
- GalleryImageId = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup/providers/Microsoft.Compute/galleries/$myGalleryName/images/$imageDefName"
- ReplicationRegion = $location
- RunOutputName = $runOutputName
- ExcludeFromLatest = $false
-}
-$disSharedImg = New-AzImageBuilderDistributorObject @disObjParams
-```
-
-Create an Azure image builder customization object.
-
-```azurepowershell-interactive
-$ImgCustomParams01 = @{
- PowerShellCustomizer = $true
- CustomizerName = 'settingUpMgmtAgtPath'
- RunElevated = $false
- Inline = @("mkdir c:\\buildActions", "mkdir c:\\buildArtifacts", "echo Azure-Image-Builder-Was-Here > c:\\buildActions\\buildActionsOutput.txt")
-}
-$Customizer01 = New-AzImageBuilderCustomizerObject @ImgCustomParams01
-```
-
-Create a second Azure image builder customization object.
-
-```azurepowershell-interactive
-$ImgCustomParams02 = @{
- FileCustomizer = $true
- CustomizerName = 'downloadBuildArtifacts'
- Destination = 'c:\\buildArtifacts\\index.html'
- SourceUri = 'https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/exampleArtifacts/buildArtifacts/index.html'
-}
-$Customizer02 = New-AzImageBuilderCustomizerObject @ImgCustomParams02
-```
-
-Create an Azure image builder template.
-
-```azurepowershell-interactive
-$ImgTemplateParams = @{
- ImageTemplateName = $imageTemplateName
- ResourceGroupName = $imageResourceGroup
- Source = $srcPlatform
- Distribute = $disSharedImg
- Customize = $Customizer01, $Customizer02
- Location = $location
- UserAssignedIdentityId = $identityNameResourceId
-}
-New-AzImageBuilderTemplate @ImgTemplateParams
-```
-
-When complete, a message is returned and an image builder configuration template is created in the
-`$imageResourceGroup`.
-
-To determine if the template creation process was successful, you can use the following example.
+1. Create a VM Image Builder source object. For valid parameter values, see [Find Windows VM images in Azure Marketplace with Azure PowerShell](./cli-ps-findimage.md).
+
+ ```azurepowershell-interactive
+ $SrcObjParams = @{
+ SourceTypePlatformImage = $true
+ Publisher = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Sku = '2019-Datacenter'
+ Version = 'latest'
+ }
+ $srcPlatform = New-AzImageBuilderSourceObject @SrcObjParams
+ ```
+
+1. Create a VM Image Builder distributor object.
+
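+    ```azurepowershell-interactive
+    $disObjParams = @{
+      SharedImageDistributor = $true
+      ArtifactTag = @{tag='dis-share'}
+      GalleryImageId = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup/providers/Microsoft.Compute/galleries/$myGalleryName/images/$imageDefName"
+      ReplicationRegion = $location
+      RunOutputName = $runOutputName
+      ExcludeFromLatest = $false
+    }
+    $disSharedImg = New-AzImageBuilderDistributorObject @disObjParams
+    ```
+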
+1. Create a VM Image Builder customization object.
+
+ ```azurepowershell-interactive
+ $ImgCustomParams01 = @{
+ PowerShellCustomizer = $true
+ CustomizerName = 'settingUpMgmtAgtPath'
+ RunElevated = $false
+ Inline = @("mkdir c:\\buildActions", "mkdir c:\\buildArtifacts", "echo Azure-Image-Builder-Was-Here > c:\\buildActions\\buildActionsOutput.txt")
+ }
+ $Customizer01 = New-AzImageBuilderCustomizerObject @ImgCustomParams01
+ ```
+
+1. Create a second VM Image Builder customization object.
+
+ ```azurepowershell-interactive
+ $ImgCustomParams02 = @{
+ FileCustomizer = $true
+ CustomizerName = 'downloadBuildArtifacts'
+      Destination = 'c:\\buildArtifacts\\index.html'
+      SourceUri = 'https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/exampleArtifacts/buildArtifacts/index.html'
+ }
+ $Customizer02 = New-AzImageBuilderCustomizerObject @ImgCustomParams02
+ ```
+
+1. Create a VM Image Builder template.
+
+ ```azurepowershell-interactive
+ $ImgTemplateParams = @{
+ ImageTemplateName = $imageTemplateName
+ ResourceGroupName = $imageResourceGroup
+ Source = $srcPlatform
+ Distribute = $disSharedImg
+ Customize = $Customizer01, $Customizer02
+ Location = $location
+ UserAssignedIdentityId = $identityNameResourceId
+ }
+ New-AzImageBuilderTemplate @ImgTemplateParams
+ ```
+
+When the template has been created, a message is returned, and a VM Image Builder configuration template is created in `$imageResourceGroup`.
+
+To determine whether the template creation process was successful, use the following example:
```azurepowershell-interactive
Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup | Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
```
-In the background, image builder also creates a staging resource group in your subscription. This
-resource group is used for the image build. It's in the format:
-`IT_<DestinationResourceGroup>_<TemplateName>`.
+In the background, VM Image Builder also creates a staging resource group in your subscription. This resource group is used for the image build. It's in the format `IT_<DestinationResourceGroup>_<TemplateName>`.
> [!WARNING]
-> Do not delete the staging resource group directly. Delete the image template artifact, this will
-> cause the staging resource group to be deleted.
+> Don't delete the staging resource group directly. To cause the staging resource group to be deleted, delete the image template artifact.
-If the service reports a failure during the image configuration template submission:
+If the service reports a failure when the image configuration template is submitted, do the following:
-- See [Troubleshooting Azure VM Image Build (AIB) Failures](../linux/image-builder-troubleshoot.md).
-- Delete the template using the following example before you retry.
+- See [Troubleshoot Azure VM Image Builder failures](../linux/image-builder-troubleshoot.md).
+- Before you retry submitting the template, delete it by following this example:
-```azurepowershell-interactive
-Remove-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup
+ ```
## Start the image build
-Submit the image configuration to the VM image builder service.
+Submit the image configuration to the VM Image Builder service by running the following command:
```azurepowershell-interactive
Start-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
```
-Wait for the image build process to complete. This step could take up to an hour.
+Wait for the image building process to finish, which could take up to an hour.
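While you wait, you can check on the build by re-running the status query shown earlier:

```azurepowershell-interactive
# Poll the template's last run state and message during the build
Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup |
    Select-Object -Property LastRunStatusRunState, LastRunStatusMessage
```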
-If you encounter errors, review [Troubleshooting Azure VM Image Build (AIB) Failures](../linux/image-builder-troubleshoot.md).
+If you encounter errors, review [Troubleshoot Azure VM Image Builder failures](../linux/image-builder-troubleshoot.md).
## Create a VM
-Store login credentials for the VM in a variable. The password must be complex.
+1. Store the VM login credentials in a variable. The password must be complex.
-```azurepowershell-interactive
-$Cred = Get-Credential
-```
+ ```azurepowershell-interactive
+ $Cred = Get-Credential
+ ```
-Create the VM using the image you created.
+1. Create the VM by using the image you created.
-```azurepowershell-interactive
-$ArtifactId = (Get-AzImageBuilderRunOutput -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup).ArtifactId
+ ```azurepowershell-interactive
+ $ArtifactId = (Get-AzImageBuilderRunOutput -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup).ArtifactId
-New-AzVM -ResourceGroupName $imageResourceGroup -Image $ArtifactId -Name myWinVM01 -Credential $Cred
-```
+ New-AzVM -ResourceGroupName $imageResourceGroup -Image $ArtifactId -Name myWinVM01 -Credential $Cred
+ ```
## Verify the customizations
-Create a Remote Desktop connection to the VM using the username and password you set when you
-created the VM. Inside the VM, open PowerShell and run `Get-Content` as shown in the following example:
+1. Create a Remote Desktop connection to the VM by using the username and password that you set when you created the VM.
-```azurepowershell-interactive
-Get-Content -Path C:\buildActions\buildActionsOutput.txt
-```
+1. Inside the VM, open PowerShell and run `Get-Content`, as shown in the following example:
-You should see output based on the contents of the file created during the image customization
-process.
+ ```azurepowershell-interactive
+ Get-Content -Path C:\buildActions\buildActionsOutput.txt
+ ```
-```Output
-Azure-Image-Builder-Was-Here
-```
+ The output is based on the contents of the file that you created during the image customization process.
-From the same PowerShell session, verify that the second customization completed successfully by checking
-for the presence of the file `c:\buildArtifacts\https://docsupdatetracker.net/index.html` as shown in the following example:
+ ```Output
+ Azure-Image-Builder-Was-Here
+ ```
-```azurepowershell-interactive
-Get-ChildItem c:\buildArtifacts\
-```
+1. From the same PowerShell session, verify that the second customization finished successfully by checking for the presence of `c:\buildArtifacts\index.html`, as shown in the following example:
-The result should be a directory listing showing the file downloaded during the image customization
-process.
+ ```azurepowershell-interactive
+ Get-ChildItem c:\buildArtifacts\
+ ```
-```Output
- Directory: C:\buildArtifacts
+ The result should be a directory listing showing that the file was downloaded during the image customization process.
-Mode LastWriteTime Length Name
-- - ---a 29/01/2021 10:04 276 https://docsupdatetracker.net/index.html
-```
+ ```Output
+ Directory: C:\buildArtifacts
+    Mode                LastWriteTime         Length Name
+    ----                -------------         ------ ----
+    -a----       29/01/2021     10:04            276 index.html
+ ```
-## Clean up resources
+## Clean up your resources
-If the resources created in this article aren't needed, you can delete them by running the following
-examples.
+If you no longer need the resources that were created during this process, you can delete them by doing the following:
-### Delete the image builder template
+1. Delete the VM Image Builder template.
-```azurepowershell-interactive
-Remove-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
-```
+ ```azurepowershell-interactive
+ Remove-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
+ ```
-### Delete the image resource group
+1. Delete the image resource group.
-> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will
-> also be deleted.
+ > [!CAUTION]
+ > The following example deletes the specified resource group and all the resources that it contains. If any resources outside the scope of this article exist in the resource group, they'll also be deleted.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name $imageResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name $imageResourceGroup
+ ```
## Next steps
-To learn more about the components of the .json file used in this article, see
-[Image builder template reference](../linux/image-builder-json.md).
+To learn more about the components of the JSON file that this article uses, see the [VM Image Builder template reference](../linux/image-builder-json.md).
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-virtual-desktop.md
Title: Image Builder - Create a Azure Virtual Desktop image
-description: Create an Azure VM image of Azure Virtual Desktop using Azure Image Builder in PowerShell.
+ Title: Create an Azure Virtual Desktop image by using Azure VM Image Builder
+description: Create an Azure VM image of Azure Virtual Desktop by using VM Image Builder and PowerShell.
-# Create an Azure Virtual Desktop image using Azure VM Image Builder and PowerShell
+# Create an Azure Virtual Desktop image by using VM Image Builder and PowerShell
**Applies to:** :heavy_check_mark: Windows VMs
-This article shows you how to create an Azure Virtual Desktop image with these customizations:
+In this article, you learn how to create an Azure Virtual Desktop image with these customizations:
-* Installing [FsLogix](https://github.com/DeanCefola/Azure-WVD/blob/master/PowerShell/FSLogixSetup.ps1).
-* Running a [Azure Virtual Desktop Optimization script](https://github.com/The-Virtual-Desktop-Team/Virtual-Desktop-Optimization-Tool) from the community repo.
-* Install [Microsoft Teams](../../virtual-desktop/teams-on-avd.md).
-* [Restart](../linux/image-builder-json.md?bc=%2fazure%2fvirtual-machines%2fwindows%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#windows-restart-customizer)
-* Run [Windows Update](../linux/image-builder-json.md?bc=%2fazure%2fvirtual-machines%2fwindows%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#windows-update-customizer)
+* [FSLogix setup](https://github.com/DeanCefola/Azure-WVD/blob/master/PowerShell/FSLogixSetup.ps1)
+* [Azure Virtual Desktop optimization](https://github.com/The-Virtual-Desktop-Team/Virtual-Desktop-Optimization-Tool)
+* [Microsoft Teams installation](../../virtual-desktop/teams-on-avd.md)
+* [Windows Restart customizer](../linux/image-builder-json.md?bc=%2fazure%2fvirtual-machines%2fwindows%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#windows-restart-customizer)
+* [Windows Update customizer](../linux/image-builder-json.md?bc=%2fazure%2fvirtual-machines%2fwindows%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json#windows-update-customizer)
-We will show you how to automate this using the Azure VM Image Builder, and distribute the image to an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery), where you can replicate to other regions, control the scale, and share the image inside and outside your organizations.
+The article discusses how to automate the customizations by using Azure VM Image Builder. You can then distribute the image to an [Azure Compute Gallery](../shared-image-galleries.md) (formerly Shared Image Gallery), where you can replicate it to other regions, control the scale, and share the image within and beyond your organization.
+To simplify deploying a VM Image Builder configuration, our example uses an Azure Resource Manager template with the VM Image Builder template nested within it. This approach gives you a few more benefits, such as variables and parameter inputs. You can also pass parameters from the command line.
-To simplify deploying an Image Builder configuration, this example uses an Azure Resource Manager template with the Image Builder template nested inside. This gives you some other benefits, like variables and parameter inputs. You can also pass parameters from the command line.
-
-This article is intended to be a copy and paste exercise.
+This article is intended as a copy-and-paste exercise.
> [!NOTE]
-> The scripts to install the apps are located on [GitHub](https://github.com/danielsollondon/azvmimagebuilder/tree/master/solutions/14_Building_Images_WVD). They are for illustration and testing only, and not for production workloads.
+> You'll find the scripts for installing the apps on [GitHub](https://github.com/danielsollondon/azvmimagebuilder/tree/master/solutions/14_Building_Images_WVD). They're for illustration and testing purposes only. Do not use them for production workloads.
## Tips for building Windows images
-- VM Size - the default VM size is a `Standard_D1_v2`, which is not suitable for Windows. Use a `Standard_D2_v2` or greater.
-- This example uses the [PowerShell customizer scripts](../linux/image-builder-json.md). You need to use these settings or the build will stop responding.
+- VM size: For Windows, use `Standard_D2_v2` or greater. The default size is `Standard_D1_v2`, which isn't suitable for Windows.
+- This article uses [PowerShell customizer scripts](../linux/image-builder-json.md). Use the following settings, or the build will stop responding:
```json
"runElevated": true,
This article is intended to be a copy and paste exercise.
```json
{
"type": "PowerShell",
- "name": "installFsLogix",
+ "name": "installFSLogix",
"runElevated": true, "runAsSystem": true,
- "scriptUri": "https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/14_Building_Images_WVD/0_installConfFsLogix.ps1"
+ "scriptUri": "https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/14_Building_Images_WVD/0_installConfFSLogix.ps1"
```
-- Comment your code - The AIB build log (customization.log) is extremely verbose, if you comment your scripts using 'write-host' these will be sent to the logs, and make troubleshooting easier.
+- Comment your code: The VM Image Builder build log, *customization.log*, is verbose. If you comment your scripts by using 'write-host', they'll be sent to the logs, which should make troubleshooting easier.
```PowerShell
write-host 'AIB Customization: Starting OS Optimizations script'
```
-- Exit Codes - AIB expects all scripts to return a 0 exit code, any non-zero exit code will result in AIB failing the customization and stopping the build. If you have complex scripts, add instrumentation and emit exit codes, these will be shown in the customization.log.
+- Exit codes: VM Image Builder expects all scripts to return a `0` exit code. If you use a non-zero exit code, VM Image Builder fails the customization and stops the build. If you have complex scripts, add instrumentation and emit exit codes, which will be shown in the *customization.log* file.
```PowerShell
Write-Host "Exit code: " $LASTEXITCODE
```
-- Test: Please test and test your code before on a standalone VM, ensure there are no user prompts, you are using the right privilege etc.
+- Test: Test and retest your code on a standalone VM. Ensure that there are no user prompts, that you're using the correct privileges, and so on.
-- Networking - `Set-NetAdapterAdvancedProperty`. This is being set in the optimization script, but fails the AIB build, as it disconnects the network, this is commented out. It is under investigation.
+- Networking: `Set-NetAdapterAdvancedProperty` is set in the optimization script but fails the VM Image Builder build. Because it disconnects the network, it's commented out. We're investigating this issue.
## Prerequisites
-You must have the latest Azure PowerShell CmdLets installed, see [Overview of Azure PowerShell](/powershell/azure/overview) for install details.
+You must have the latest Azure PowerShell cmdlets installed. For more information, see [Overview of Azure PowerShell](/powershell/azure/overview).
```PowerShell
-# check you are registered for the providers, ensure RegistrationState is set to 'Registered'.
+# Check to ensure that you're registered for the providers and RegistrationState is set to 'Registered'
Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
Get-AzResourceProvider -ProviderNamespace Microsoft.Storage
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute
Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
-# If they do not show as registered, run the commented out code below.
+# If they don't show as 'Registered', run the following commented-out code
## Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
## Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
## Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
```
-## Set up environment and variables
+## Set up the environment and variables
```azurepowershell-interactive
# Step 1: Import module
Import-Module Az.Accounts
# Step 2: get existing context
$currentAzContext = Get-AzContext
-# destination image resource group
+# Destination image resource group
$imageResourceGroup="avdImageDemoRg"
-# location (see possible locations in main docs)
+# Location (see possible locations in the main docs)
$location="westus2"
-# your subscription, this will get your current subscription
+# Your subscription. This command gets your current subscription
$subscriptionID=$currentAzContext.Subscription.Id
-# image template name
+# Image template name
$imageTemplateName="avd10ImageTemplate01"
-# distribution properties object name (runOutput), i.e. this gives you the properties of the managed image on completion
+# Distribution properties object name (runOutput). Gives you the properties of the managed image on completion
$runOutputName="sigOutput"
-# create resource group
+# Create resource group
New-AzResourceGroup -Name $imageResourceGroup -Location $location
```
-## Permissions, user identity and role
-
+## Permissions, user identity, and role
- Create a user identity.
+1. Create a user identity.
-```azurepowershell-interactive
-# setup role def names, these need to be unique
-$timeInt=$(get-date -UFormat "%s")
-$imageRoleDefName="Azure Image Builder Image Def"+$timeInt
-$identityName="aibIdentity"+$timeInt
+ ```azurepowershell-interactive
+ # setup role def names, these need to be unique
+ $timeInt=$(get-date -UFormat "%s")
+ $imageRoleDefName="Azure Image Builder Image Def"+$timeInt
+ $identityName="aibIdentity"+$timeInt
-## Add AZ PS modules to support AzUserAssignedIdentity and Az AIB
-'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
+ ## Add Azure PowerShell modules to support AzUserAssignedIdentity and Azure VM Image Builder
+ 'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
-# create identity
-New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
+ # Create the identity
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
-$identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
-$identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
+ $identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
+ $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
-```
+ ```
-Assign permissions to the identity to distribute images. This command will download and update the template with the parameters specified earlier.
+1. Assign permissions to the identity to distribute images. The following commands download and update the template with the previously specified parameters.
-```azurepowershell-interactive
-$aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
-$aibRoleImageCreationPath = "aibRoleImageCreation.json"
+ ```azurepowershell-interactive
+ $aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
+ $aibRoleImageCreationPath = "aibRoleImageCreation.json"
-# download config
-Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
+ # Download the config
+ Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
-((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
-((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
-((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
-# create role definition
-New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
+ # Create a role definition
+ New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
-# grant role definition to image builder service principal
-New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
-```
+ # Grant the role definition to the VM Image Builder service principal
+ New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ ```
> [!NOTE]
-> If you see this error: 'New-AzRoleDefinition: Role definition limit exceeded. No more role definitions can be created.' see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md).
+> If you receive the error "New-AzRoleDefinition: Role definition limit exceeded. No more role definitions can be created," see [Troubleshoot Azure RBAC (role-based access control)](../../role-based-access-control/troubleshooting.md).
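If you do hit that limit, one way to find removal candidates is to list the custom role definitions in your subscription. This is a sketch only; review any role carefully before you remove it.

```azurepowershell-interactive
# List custom role definitions so that you can identify unused ones
Get-AzRoleDefinition | Where-Object { $_.IsCustom } | Select-Object -Property Name, Id
```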
-## Create the Azure Compute Gallery
+## Create an Azure Compute Gallery
If you don't already have an Azure Compute Gallery, you need to create one.
If you don't already have an Azure Compute Gallery, you need to create one.
$sigGalleryName = "myaibsig01"
$imageDefName = "win10avd"
-# create gallery
+# Create the gallery
New-AzGallery -GalleryName $sigGalleryName -ResourceGroupName $imageResourceGroup -Location $location
-# create gallery definition
+# Create the gallery definition
New-AzGalleryImageDefinition -GalleryName $sigGalleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCo' -Offer 'Windows' -Sku '10avd'
```
-## Configure the Image Template
+## Configure the VM Image Builder template
-For this example, we have a template ready to that will download and update the template with the parameters specified earlier, it will install FsLogix, OS optimizations, Microsoft Teams, and run Windows Update at the end.
+For this example, we've prepared a template that downloads and updates the VM Image Builder template with the parameters that were specified earlier. The template installs FSLogix, operating system optimizations, and Microsoft Teams, and it runs Windows Update at the end.
-If you open the template you can see in the source property the image that is being used, in this example it uses a Win 10 Multi session image.
+If you open the template, you can see in the source property the image that's being used. In this example, it uses a Windows 10 multi-session image.
### Windows 10 images
-Two key types you should be aware of: multisession and single-session.
+You should be aware of two key types of images: multi-session and single-session.
-Multi session images are intended for pooled usage. Here is an example of the image details in Azure:
+Multi-session images are intended for pooled usage. Here's an example of the image details in Azure:
```json
"publisher": "MicrosoftWindowsDesktop",
Multi session images are intended for pooled usage. Here is an example of the im
"version": "latest"
```
-Single session images are intend for individual usage. Here is an example of the image details in Azure:
+Single-session images are intended for individual usage. Here's an example of the image details in Azure:
```json
"publisher": "MicrosoftWindowsDesktop",
Single session images are intend for the
"version": "latest"
```
-You can also change the Win10 images available:
+You can also change which Windows 10 images are available:
```azurepowershell-interactive
Get-AzVMImageSku -Location westus2 -PublisherName MicrosoftWindowsDesktop -Offer windows-10
```
-## Download template and configure
+## Download and configure the template
-Now, you need to download the template and configure it for your use.
+Now, download the template and configure it for your own use.
```azurepowershell-interactive
$templateUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/14_Building_Images_WVD/armTemplateWVD.json"
Invoke-WebRequest -Uri $templateUrl -OutFile $templateFilePath -UseBasicParsing
```
-Feel free to view the [template](https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/14_Building_Images_WVD/armTemplateWVD.json), all the code is viewable.
+Feel free to view the [template](https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/14_Building_Images_WVD/armTemplateWVD.json). All the code is viewable.
## Submit the template
-Your template must be submitted to the service, this will download any dependent artifacts (like scripts), validate, check permissions, and store them in the staging Resource Group, prefixed, *IT_*.
+Your template must be submitted to the service. Doing so downloads any dependent artifacts, such as scripts, validates them, checks permissions, and stores them in the staging resource group, which is prefixed with *IT_*.
```azurepowershell-interactive
New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -TemplateParameterObject @{"api-Version" = "2020-02-14"} -imageTemplateName $imageTemplateName -svclocation $location
-# Optional - if you have any errors running the above, run:
+# Optional - if you have any errors running the preceding command, run:
$getStatus=$(Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName)
$getStatus.ProvisioningErrorCode
$getStatus.ProvisioningErrorMessage
```

## Build the image
+
```azurepowershell-interactive
Start-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName -NoWait
```

> [!NOTE]
-> The command will not wait for the image builder service to complete the image build, you can query the status below.
+> The command doesn't wait for the VM Image Builder service to complete the image build, so you can query the status as shown here.
```azurepowershell-interactive
$getStatus=$(Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName)
-# this shows all the properties
+# Shows all the properties
$getStatus | Format-List -Property *
-# these show the status the build
+# Shows the status of the build
$getStatus.LastRunStatusRunState
$getStatus.LastRunStatusMessage
$getStatus.LastRunStatusRunSubState
```

## Create a VM
-Now the build is finished you can build a VM from the image, use the examples from [New-AzVM (Az.Compute)](/powershell/module/az.compute/new-azvm#examples).
-## Clean up
+Now that the image is built, you can build a VM from it. Use the examples from [New-AzVM (Az.Compute)](/powershell/module/az.compute/new-azvm#examples).
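For example, here's a minimal sketch that creates a VM from the gallery image definition that the build distributed to. The VM name is illustrative, and the `$sigGalleryName` and `$imageDefName` variables are assumed from the gallery setup earlier in this article.

```azurepowershell-interactive
# Prompt for the VM's admin credentials
$Cred = Get-Credential

# Build the resource ID of the gallery image definition that the build distributed to
$galleryImageId = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup/providers/Microsoft.Compute/galleries/$sigGalleryName/images/$imageDefName"

# Create the VM from the gallery image
New-AzVM -ResourceGroupName $imageResourceGroup -Image $galleryImageId -Name myAvdVm01 -Credential $Cred
```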
-Delete the resource group template first, do not just delete the entire resource group, otherwise the staging resource group (*IT_*) used by AIB will not be cleaned up.
+## Clean up your resources
-Remove the Image Template.
+If you no longer need the resources that were created during this process, you can delete them by doing the following:
-```azurepowershell-interactive
-Remove-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name vd10ImageTemplate
-```
+> [!IMPORTANT]
+> Delete the resource group template first. If you delete only the resource group, the staging resource group (*IT_*) that's used by VM Image Builder won't be cleaned up.
-Delete the role assignment.
+1. Remove the VM Image Builder template.
-```azurepowershell-interactive
-Remove-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ ```azurepowershell-interactive
+   Remove-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
+ ```
-## remove definitions
-Remove-AzRoleDefinition -Name "$identityNamePrincipalId" -Force -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+1. Delete the role assignment.
-## delete identity
-Remove-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName -Force
-```
+ ```azurepowershell-interactive
+ Remove-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
-Delete the resource group.
+ ## Remove the definitions
+   Remove-AzRoleDefinition -Name "$imageRoleDefName" -Force -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
-```azurepowershell-interactive
-Remove-AzResourceGroup $imageResourceGroup -Force
-```
+ ## Delete the identity
+ Remove-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName -Force
+ ```
+
+1. Delete the resource group.
+
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup $imageResourceGroup -Force
+ ```
## Next steps
-You can try more examples [on GitHub](https://github.com/azure/azvmimagebuilder/tree/master/quickquickstarts).
+To try more VM Image Builder examples, go to [GitHub](https://github.com/azure/azvmimagebuilder/tree/master/quickquickstarts).
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
Title: Create a Windows VM with Azure Image Builder
-description: Create a Windows VM with the Azure Image Builder.
+ Title: Create a Windows VM by using Azure VM Image Builder
+description: In this article, you learn how to create a Windows VM by using VM Image Builder.
-# Create a Windows VM with Azure Image Builder
+# Create a Windows VM by using Azure VM Image Builder
**Applies to:** :heavy_check_mark: Windows VMs
-This article is to show you how you can create a customized Windows image using the Azure VM Image Builder. The example in this article uses [customizers](../linux/image-builder-json.md#properties-customize) for customizing the image:
-- PowerShell (ScriptUri) - download and run a [PowerShell script](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/testPsScript.ps1).-- Windows Restart - restarts the VM.-- PowerShell (inline) - run a specific command. In this example, it creates a directory on the VM using `mkdir c:\\buildActions`.-- File - copy a file from GitHub onto the VM. This example copies [index.md](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/exampleArtifacts/buildArtifacts/https://docsupdatetracker.net/index.html) to `c:\buildArtifacts\https://docsupdatetracker.net/index.html` on the VM.-- buildTimeoutInMinutes - Increase a build time to allow for longer running builds, the default is 240 minutes, and you can increase a build time to allow for longer running builds.-- vmProfile - specifying a vmSize and Network properties-- osDiskSizeGB - you can increase the size of image-- identity - providing an identity for Azure Image Builder to use during the build--
-You can also specify a `buildTimeoutInMinutes`. The default is 240 minutes, and you can increase a build time to allow for longer running builds. The minimum allowed value is 6 minutes; shorter values will cause errors.
-
-We will be using a sample .json template to configure the image. The .json file we are using is here: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json).
+In this article, you learn how to create a customized Windows image by using Azure VM Image Builder. The example in this article uses [customizers](../linux/image-builder-json.md#properties-customize) for customizing the image:
+- PowerShell (ScriptUri): Download and run a [PowerShell script](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/testPsScript.ps1).
+- Windows Restart: Restarts the VM.
+- PowerShell (inline): Runs a specific command. In this example, it creates a directory on the VM by using `mkdir c:\\buildActions`.
+- File: Copies a file from GitHub to the VM. This example copies [index.md](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/exampleArtifacts/buildArtifacts/index.html) to `c:\buildArtifacts\index.html` on the VM.
+- `buildTimeoutInMinutes`: Specifies a build time, in minutes. The default is 240 minutes, which you can increase to allow for longer-running builds. The minimum allowed value is 6 minutes. Values shorter than 6 minutes will cause errors.
+- `vmProfile`: Specifies a `vmSize` and network properties.
+- `osDiskSizeGB`: Can be used to increase the size of an image.
+- `identity`: Provides an identity for VM Image Builder to use during the build.
+Use the following sample JSON template to configure the image: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json).
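As a rough, abridged sketch (illustrative values only; the linked file is authoritative), the customizers and properties in the preceding list map to template sections like these:

```json
{
  "type": "Microsoft.VirtualMachineImages/imageTemplates",
  "apiVersion": "2020-02-14",
  "properties": {
    "buildTimeoutInMinutes": 100,
    "vmProfile": {
      "vmSize": "Standard_D2_v2",
      "osDiskSizeGB": 127
    },
    "source": {
      "type": "PlatformImage",
      "publisher": "MicrosoftWindowsServer",
      "offer": "WindowsServer",
      "sku": "2019-Datacenter",
      "version": "<exact-version>"
    },
    "customize": [
      {
        "type": "PowerShell",
        "name": "CreateBuildPath",
        "inline": [ "mkdir c:\\buildActions" ]
      }
    ],
    "distribute": [
      {
        "type": "ManagedImage",
        "runOutputName": "<runOutputName>",
        "imageId": "<managed-image-resource-id>",
        "location": "<region>"
      }
    ]
  }
}
```

Note that the `source` section pins an exact version, which matches the guidance later in this article that `latest` isn't allowed for the source image.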
> [!NOTE]
-> For Windows users, the Azure CLI examples below can be run on [Azure Cloud Shell](https://shell.azure.com) using Bash.
+> Windows users can run the following Azure CLI examples on [Azure Cloud Shell](https://shell.azure.com) by using Bash.
## Register the features
-To use Azure Image Builder, you need to register the feature.
-
-Check your registration.
+To use VM Image Builder, you need to register the feature. Check your registration by running the following commands:
```azurecli-interactive
az provider show -n Microsoft.VirtualMachineImages | grep registrationState
az provider show -n Microsoft.Storage | grep registrationState
az provider show -n Microsoft.Network | grep registrationState
```
-If they do not say registered, run the following:
+If the output doesn't say *registered*, run the following commands:
```azurecli-interactive
az provider register -n Microsoft.VirtualMachineImages
az provider register -n Microsoft.Network
## Set variables
-We will be using some pieces of information repeatedly, so we will create some variables to store that information.
+Because you'll be using some pieces of information repeatedly, create some variables to store that information:
```azurecli-interactive
-# Resource group name - we are using myImageBuilderRG in this example
+# Resource group name - we're using myImageBuilderRG in this example
imageResourceGroup='myWinImgBuilderRG'
# Region location
location='WestUS2'
# Run output name
runOutputName='aibWindows'
-# name of the image to be created
+# The name of the image to be created
imageName='aibWinImage'
```
-Create a variable for your subscription ID.
+Create a variable for your subscription ID:
```azurecli-interactive
subscriptionID=$(az account show --query id --output tsv)
```
-## Create a resource group
-This resource group is used to store the image configuration template artifact and the image.
+## Create the resource group
+To store the image configuration template artifact and the image, create the following resource group:
```azurecli-interactive
az group create -n $imageResourceGroup -l $location
```

## Create a user-assigned identity and set permissions on the resource group
-Image Builder will use the [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) provided to inject the image into the resource group. In this example, you will create an Azure role definition that has the granular actions to perform distributing the image. The role definition will then be assigned to the user-identity.
-## Create user-assigned managed identity and grant permissions
+VM Image Builder uses the provided [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) to inject the image into the resource group. In this example, you create an Azure role definition with specific permissions for distributing the image. The role definition is then assigned to the user identity.
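For orientation, the role definition that you download later in this article (aibRoleImageCreation.json) grants roughly the following image-distribution actions. This is an abridged sketch; treat the downloaded file as authoritative.

```json
{
  "Name": "Azure Image Builder Service Image Creation Role",
  "IsCustom": true,
  "Actions": [
    "Microsoft.Compute/galleries/read",
    "Microsoft.Compute/galleries/images/read",
    "Microsoft.Compute/galleries/images/versions/read",
    "Microsoft.Compute/galleries/images/versions/write",
    "Microsoft.Compute/images/write",
    "Microsoft.Compute/images/read",
    "Microsoft.Compute/images/delete"
  ],
  "AssignableScopes": [
    "/subscriptions/<subscriptionID>/resourceGroups/<rgName>"
  ]
}
```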
+
+## Create a user-assigned managed identity and grant permissions
+
+Create a user-assigned identity so that VM Image Builder can access the storage account where the script is stored.
+ ```bash
-# create user assigned identity for image builder to access the storage account where the script is located
identityName=aibBuiUserId$(date +'%s')
az identity create -g $imageResourceGroup -n $identityName
-# get identity id
+# Get the identity ID
imgBuilderCliId=$(az identity show -g $imageResourceGroup -n $identityName --query clientId -o tsv)
-# get the user identity URI, needed for the template
+# Get the user identity URI that's needed for the template
imgBuilderId=/subscriptions/$subscriptionID/resourcegroups/$imageResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$identityName
-# download preconfigured role definition example
+# Download the preconfigured role definition example
curl https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json -o aibRoleImageCreation.json

imageRoleDefName="Azure Image Builder Image Def"$(date +'%s')
-# update the definition
+# Update the definition
sed -i -e "s/<subscriptionID>/$subscriptionID/g" aibRoleImageCreation.json
sed -i -e "s/<rgName>/$imageResourceGroup/g" aibRoleImageCreation.json
sed -i -e "s/Azure Image Builder Service Image Creation Role/$imageRoleDefName/g" aibRoleImageCreation.json
-# create role definitions
+# Create role definitions
az role definition create --role-definition ./aibRoleImageCreation.json
-# grant role definition to the user assigned identity
+# Grant a role definition to the user-assigned identity
az role assignment create \
    --assignee $imgBuilderCliId \
    --role "$imageRoleDefName" \
    --scope /subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup
```
+## Download the image configuration template
-
-## Download the image configuration template example
-
-A parameterized image configuration template has been created for you to try. Download the example .json file and configure it with the variables you set previously.
+We've created a parameterized image configuration template for you to try. Download the example JSON file, and then configure it with the variables that you set earlier.
```azurecli-interactive
curl https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json -o helloImageTemplateWin.json
sed -i -e "s/<region>/$location/g" helloImageTemplateWin.json
sed -i -e "s/<imageName>/$imageName/g" helloImageTemplateWin.json
sed -i -e "s/<runOutputName>/$runOutputName/g" helloImageTemplateWin.json
sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateWin.json
-
```
-You can modify this example, in the terminal using a text editor like `vi`.
+You can modify this example in the terminal by using a text editor such as `vi`.
```azurecli-interactive
vi helloImageTemplateWin.json
```

> [!NOTE]
-> For the source image, you must always [specify a version](../linux/image-builder-troubleshoot.md#build--step-failed-for-image-version), you cannot use `latest`.
-> If you add or change the resource group where the image is distributed to, you must make sure the [permissions are set](#create-a-user-assigned-identity-and-set-permissions-on-the-resource-group) on the resource group.
+> For the source image, always [specify a version](../linux/image-builder-troubleshoot.md#the-build-step-failed-for-the-image-version). You can't specify `latest` as the version.
+>
+> If you add or change the resource group that the image is distributed to, make sure that the [permissions are set](#create-a-user-assigned-identity-and-set-permissions-on-the-resource-group) on the resource group.
## Create the image
-Submit the image configuration to the VM Image Builder service
+Submit the image configuration to the VM Image Builder service by running the following commands:
```azurecli-interactive
az resource create \
az resource create \
  -n helloImageTemplateWin01
```
-When complete, this will return a success message back to the console, and create an `Image Builder Configuration Template` in the `$imageResourceGroup`. You can see this resource in the resource group in the Azure portal, if you enable 'Show hidden types'.
+When you're done, a success message is returned to the console, and a VM Image Builder configuration template is created in the `$imageResourceGroup`. To view this resource in the resource group, go to the Azure portal, and then enable **Show hidden types**.
-In the background, Image Builder will also create a staging resource group in your subscription. This resource group is used for the image build. It will be in this format: `IT_<DestinationResourceGroup>_<TemplateName>`
+In the background, VM Image Builder also creates a staging resource group in your subscription. This resource group, which is used for the image build, is named in the following format: `IT_<DestinationResourceGroup>_<TemplateName>`.
> [!Note]
-> You must not delete the staging resource group directly. First delete the image template artifact, this will cause the staging resource group to be deleted.
+> Don't delete the staging resource group directly. First, delete the image template artifact, which causes the staging resource group to be deleted.
-If the service reports a failure during the image configuration template submission:
-- Review these [troubleshooting](../linux/image-builder-troubleshoot.md#troubleshoot-image-template-submission-errors) steps. -- You will need to delete the template, using the following snippet, before you retry submission.
+If the service reports a failure when you submit the image configuration template, do the following:
+- See [Troubleshoot the Azure VM Image Builder service](../linux/image-builder-troubleshoot.md#troubleshoot-image-template-submission-errors).
+- Before you try to resubmit the template, delete it by running the following commands:
```azurecli-interactive
az resource delete \
az resource delete \
```

## Start the image build
-Start the image building process using [az resource invoke-action](/cli/azure/resource#az-resource-invoke-action).
+
+Start the image-building process by using [az resource invoke-action](/cli/azure/resource#az-resource-invoke-action).
```azurecli-interactive
az resource invoke-action \
az resource invoke-action \
Wait until the build is complete.
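While you wait, you can poll the template's last run status. This is a sketch that queries the same hidden resource type used throughout this article:

```azurecli-interactive
# Sketch: check the build's run state and any status message
az resource show \
    --resource-group $imageResourceGroup \
    --resource-type Microsoft.VirtualMachineImages/imageTemplates \
    -n helloImageTemplateWin01 \
    --query properties.lastRunStatus
```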
-If you encounter any errors, please review these [troubleshooting](../linux/image-builder-troubleshoot.md#troubleshoot-common-build-errors) steps.
+If you encounter any errors, see [Troubleshoot the Azure VM Image Builder service](../linux/image-builder-troubleshoot.md#troubleshoot-common-build-errors).
## Create the VM
-Create the VM using the image you built. Replace *\<password>* with your own password for the `aibuser` on the VM.
+Create the VM by using the image that you built. In the following code, replace *\<password>* with your own password for the *aibuser* on the VM.
```azurecli-interactive
az vm create \
az vm create \
## Verify the customization
-Create a Remote Desktop connection to the VM using the username and password you set when you created the VM. Inside the VM, open a cmd prompt and type:
+Create a Remote Desktop connection to the VM by using the username and password that you set when you created the VM. In the VM, open a Command Prompt window, and then type:
```console
dir c:\
```
-You should see these two directories created during image customization:
+The following two directories are created during the image customization:
+
- buildActions
- buildArtifacts
-## Clean up
+## Clean up your resources
-When you are done, delete the resources.
+When you're done, delete the resources you've created.
-### Delete the image builder template
+1. Delete the VM Image Builder template.
-```azurecli-interactive
-az resource delete \
- --resource-group $imageResourceGroup \
- --resource-type Microsoft.VirtualMachineImages/imageTemplates \
- -n helloImageTemplateWin01
-```
+ ```azurecli-interactive
+ az resource delete \
+ --resource-group $imageResourceGroup \
+ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
+ -n helloImageTemplateWin01
+ ```
-### Delete the role assignment, role definition and user-identity.
-```azurecli-interactive
-az role assignment delete \
- --assignee $imgBuilderCliId \
- --role "$imageRoleDefName" \
- --scope /subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup
+1. Delete the role assignment, role definition, and user identity.
-az role definition delete --name "$imageRoleDefName"
+ ```azurecli-interactive
+ az role assignment delete \
+ --assignee $imgBuilderCliId \
+ --role "$imageRoleDefName" \
+ --scope /subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup
-az identity delete --ids $imgBuilderId
-```
+ az role definition delete --name "$imageRoleDefName"
-### Delete the image resource group
+ az identity delete --ids $imgBuilderId
+ ```
-```azurecli-interactive
-az group delete -n $imageResourceGroup
-```
+1. Delete the image resource group.
+
+ ```azurecli-interactive
+ az group delete -n $imageResourceGroup
+ ```
## Next steps
-To learn more about the components of the .json file used in this article, see [Image builder template reference](../linux/image-builder-json.md).
+To learn more about the components of the JSON file that this article uses, see the [VM Image Builder template reference](../linux/image-builder-json.md).
virtual-machines Oracle Database Backup Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-storage.md
In this section, we will be using Oracle Recovery Manager (RMAN) to take a full
RMAN> configure channel 2 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s';
```
-2. Because Azure standard file shares have a maximum file size of 1 TiB, we will limit the size of RMAN backup pieces to 1 TiB. (Note that Premium File Shares have a maximum file size limit of 4 TiB. For more information, see [Azure Files Scalability and Performance Targets](../../../storage/files/storage-files-scale-targets.md).)
+2. In this example, we limit the size of RMAN backup pieces to 1 TiB. Note that the RMAN backup MAXPIECESIZE can be up to 4 TiB, because Azure standard file shares and premium file shares both have a maximum file size limit of 4 TiB. For more information, see [Azure Files scalability and performance targets](../../../storage/files/storage-files-scale-targets.md).
```bash
RMAN> configure channel device type disk maxpiecesize 1000G;
```
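For example, on a premium file share you could configure backup pieces closer to the 4-TiB cap. This is a sketch only; size your pieces to fit your own backup and restore windows:

```bash
RMAN> configure channel device type disk maxpiecesize 4000G;
```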
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
You will need to authenticate with your S-User or P-User. You can create a P-Use
| Solution | Link |
| -- | :- |
+| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Instance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|SAP NetWeaver 7.5 SP15 on SAP ASE | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) |
| **SAP S/4HANA 2020 FPS01** March 22 2022 | [Create Instance](https://cal.sap.com/registration?sguid=4bad009a-cb02-4992-a8b6-28c331a79c66&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|This solution comes as a standard S/4HANA system installation including a remote desktop for easy frontend access. It contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3009827 Rapid Activation for SAP Fiori in SAP S/4HANA 2020 FPS01. See More Information Link. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/4bad009a-cb02-4992-a8b6-28c331a79c66) |
| **SAP Financial Services Data Platform 1.15** March 16 2022 | [Create Instance](https://cal.sap.com/registration?sguid=310f0bd9-fcad-4ecb-bfea-c61cdc67152b&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
virtual-network Routing Preference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-overview.md
The price difference between both options is reflected in the internet egress da
## Limitations
+* Internet routing preference is compatible only with the zone-redundant standard SKU of public IP address. The basic SKU of public IP address isn't supported.
+* Internet routing preference currently supports only IPv4 public IP addresses. IPv6 public IP addresses aren't supported.
-* Routing preference is only compatible with zone-redundant standard SKU of public IP address. Basic SKU of public IP address is not supported.
-* Routing preference currently supports only IPv4 public IP addresses. IPv6 public IP addresses are not supported.
+### Regional unavailability
+Internet routing preference is available in all regions except:
+* Australia Central
+* Austria East
+* Brazil Southeast
+* Germany Central
+* Germany Northeast
+* Norway West
+* Sweden Central
+* West US 3
## Next steps

* [Learn more about how to optimize connectivity to your Microsoft Azure services over the internet - Video](https://www.youtube.com/watch?v=j6A_Mbpuh6s&list=PLLasX02E8BPA5V-waZPcelhg9l3IkeUQo&index=12)
* [Configure routing preference for a VM using the Azure PowerShell](./configure-routing-preference-virtual-machine-powershell.md)
-* [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md)
+* [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md)
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
Title: Restrict access to PaaS resources - tutorial - Azure portal
+ Title: 'Tutorial: Restrict access to PaaS resources with service endpoints - Azure portal'
description: In this tutorial, you learn how to limit and restrict network access to Azure resources, such as an Azure Storage account, with virtual network service endpoints using the Azure portal.
- documentationcenter: virtual-network
+ tags: azure-resource-manager
-# Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.
Previously updated : 05/17/2022 Last updated : 06/29/2022
+# Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.
# Tutorial: Restrict network access to PaaS resources with virtual network service endpoints using the Azure portal
-Virtual network service endpoints enable you to limit network access to some Azure service resources to a virtual network subnet. You can also remove internet access to the resources. Service endpoints provide direct connection from your virtual network to supported Azure services, allowing you to use your virtual network's private address space to access the Azure services. Traffic destined to Azure resources through service endpoints always stays on the Microsoft Azure backbone network. In this tutorial, you learn how to:
+Virtual network service endpoints enable you to limit network access to some Azure service resources to a virtual network subnet. You can also remove internet access to the resources. Service endpoints provide direct connection from your virtual network to supported Azure services, allowing you to use your virtual network's private address space to access the Azure services. Traffic destined to Azure resources through service endpoints always stays on the Microsoft Azure backbone network.
+
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a virtual network with one subnet
Virtual network service endpoints enable you to limit network access to some Azu
> * Confirm access to a resource from a subnet > * Confirm access is denied to a resource from a subnet and the internet
-If you prefer, you can complete this tutorial using the [Azure CLI](tutorial-restrict-network-access-to-resources-cli.md) or [Azure PowerShell](tutorial-restrict-network-access-to-resources-powershell.md).
+This tutorial uses the Azure portal. You can also complete it using the [Azure CLI](tutorial-restrict-network-access-to-resources-cli.md) or [PowerShell](tutorial-restrict-network-access-to-resources-powershell.md).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+- An Azure subscription
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+ ## Create a virtual network
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. From the Azure portal menu, select **+ Create a resource**.
-1. Select **+ Create a resource** on the upper left corner of the Azure portal. Search for **Virtual Network**, and then select **Create**.
+1. Search for *Virtual Network*, and then select **Create**.
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-resources.png" alt-text="Screenshot of search for virtual network in create a resource page.":::
If you don't have an Azure subscription, create a [free account](https://azure.m
| Setting | Value | |-|-|
- | Subscription | Select your subscription|
+ | Subscription | Select your subscription. |
| Resource group | Select **Create new** and enter *myResourceGroup*.|
- | Name | Enter *myVirtualNetwork* |
- | Region | Select **(US) East US** |
+ | Name | Enter *myVirtualNetwork*. |
+ | Region | Select **East US**. |
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-virtual-network.png" alt-text="Screenshot of basics tab for create a virtual network.":::
If you don't have an Azure subscription, create a [free account](https://azure.m
Service endpoints are enabled per service, per subnet. To create a subnet and enable a service endpoint for the subnet:
-1. If you're not already on the virtual network resource page, you can search for the newly created network in the box at the top of the portal. Enter *myVirtualNetwork*, and select it from the list.
+1. If you're not already on the virtual network resource page, you can search for the newly created virtual network in the box at the top of the portal. Enter *myVirtualNetwork*, and select it from the list.
-1. Select **Subnets** under *Settings*, and then select **+ Subnet**, as shown:
+1. Select **Subnets** under **Settings**, and then select **+ Subnet**, as shown:
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/add-subnet.png" alt-text="Screenshot of adding subnet to an existing virtual network.":::
-1. On the **Add subnet** page, select or enter the following information, and then select **Save**:
+1. On the **Add subnet** page, enter or select the following information, and then select **Save**:
| Setting |Value | | | |
By default, all virtual machine instances in a subnet can communicate with any r
1. Select **Review + create**, and when the validation check is passed, select **Create**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-nsg-page.png" alt-text="Screenshot of create an network security group page.":::
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-nsg-page.png" alt-text="Screenshot of create a network security group page.":::
1. After the network security group is created, select **Go to resource** or search for *myNsgPrivate* at the top of the Azure portal.
virtual-wan Certificates Point To Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/certificates-point-to-site.md
You must perform the steps in this article on a computer running Windows 10 or W
## Next steps
-Continue with the [Virtual WAN steps for user VPN connection](virtual-wan-about.md)
+Continue with the [Virtual WAN steps for user VPN connection](virtual-wan-point-to-site-portal.md#p2sconfig).