Updates from: 06/30/2022 01:09:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated: 06/27/2022. Last updated: 06/29/2022.
Within a Conditional Access policy, an administrator can make use of access controls to either grant or block access to resources.
## Block access
Block is a powerful control that should be wielded with appropriate knowledge.
Administrators can choose to enforce one or more controls when granting access. These controls include the following options:

- [Require multifactor authentication (Azure AD Multi-Factor Authentication)](../authentication/concept-mfa-howitworks.md)
- [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started)
- [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md)
- [Require approved client app](app-based-conditional-access.md)
When administrators choose to combine these options, they can choose the following methods:
By default Conditional Access requires all selected controls.
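The difference between requiring all selected controls and requiring only one can be sketched as a small evaluation function. This is a hypothetical illustration of the control-combination logic, not Azure AD's actual policy engine; the control names are placeholders:

```python
def access_granted(satisfied, required, require_all=True):
    """Evaluate grant controls: require_all=True mirrors the Conditional Access default."""
    if require_all:
        return all(control in satisfied for control in required)
    # "Require one of the selected controls": any single satisfied control grants access.
    return any(control in satisfied for control in required)

controls = ["mfa", "compliantDevice"]
print(access_granted({"mfa"}, controls, require_all=True))   # False: compliantDevice not met
print(access_granted({"mfa"}, controls, require_all=False))  # True: one control suffices
```

Under the default, a user who satisfies only one of two selected controls is still blocked; switching to "require one" changes that outcome.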
### Require multifactor authentication

Selecting this checkbox will require users to perform Azure AD Multi-Factor Authentication. More information about deploying Azure AD Multi-Factor Authentication can be found in the article [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
### Require device to be marked as compliant
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated: 04/21/2022. Last updated: 06/29/2022.
The sign-in frequency setting works with third-party SAML applications and apps that have implemented OAuth2 or OIDC protocols, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on a regular basis.
### User sign-in frequency and multifactor authentication
Sign-in frequency previously applied only to the first-factor authentication on devices that were Azure AD joined, hybrid Azure AD joined, and Azure AD registered. There was no easy way for our customers to re-enforce multifactor authentication (MFA) on those devices. Based on customer feedback, sign-in frequency will apply for MFA as well.
[![Sign in frequency and MFA](media/howto-conditional-access-session-lifetime/conditional-access-flow-chart-small.png)](media/howto-conditional-access-session-lifetime/conditional-access-flow-chart.png#lightbox)
The public preview supports the following scenarios:
- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multifactor-authentication) grant control.
When administrators select **Every time**, it will require full reauthentication when the session is evaluated.
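Conceptually, the sign-in frequency check compares the time since the last full authentication against the configured interval, and **Every time** collapses that interval to zero. The following is a hypothetical sketch of that decision, not the actual token-evaluation logic:

```python
from datetime import datetime, timedelta

def needs_reauthentication(last_auth, now, frequency):
    """frequency=None models 'Every time': always require full reauthentication."""
    if frequency is None:
        return True
    # Otherwise, reauthenticate only once the configured interval has elapsed.
    return now - last_auth >= frequency

now = datetime(2022, 6, 30, 12, 0)
print(needs_reauthentication(now - timedelta(hours=5), now, timedelta(hours=12)))   # False
print(needs_reauthentication(now - timedelta(hours=13), now, timedelta(hours=12)))  # True
print(needs_reauthentication(now, now, None))                                       # True
```

With a 12-hour frequency, a session authenticated 5 hours ago is still valid, while **Every time** forces reauthentication on each evaluation.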
Conditional Access is an Azure AD Premium capability and requires a premium license.
> [!WARNING]
> If you are using the [configurable token lifetime](../develop/active-directory-configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with the configurable token lifetime feature. Microsoft retired the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replaced it with the Conditional Access authentication session management feature.
>
> Before enabling sign-in frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
## Policy deployment
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Python web app" description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
Last updated 11/22/2021
> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
> - [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://github.com/psf/requests/graphs/contributors)
> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
>
> #### Step 1: Configure your application in Azure portal
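Behind the quickstart's sign-in flow, MSAL Python redirects the browser to the Microsoft identity platform authorize endpoint. The sketch below shows, in simplified form, how such an authorization request URL is composed; real MSAL requests add further parameters (state, PKCE values), and the tenant, client ID, and redirect URI here are placeholders:

```python
from urllib.parse import urlencode

def build_authorize_url(tenant, client_id, redirect_uri, scopes):
    # Authorization code flow against the Microsoft identity platform v2.0 endpoint.
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return (f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
            + urlencode(params))

url = build_authorize_url("common", "11111111-2222-3333-4444-555555555555",
                          "http://localhost:5000/getAToken", ["User.Read"])
print(url.startswith("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"))
```

In the quickstart itself, MSAL constructs and validates this exchange for you; the sketch only illustrates the shape of the redirect.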
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
From any of the plan pages, use your browser's Print to PDF capability to create an offline version of the plan.
| [Privileged Identity Management](../privileged-identity-management/pim-deployment-plan.md)| Azure AD Privileged Identity Management (PIM) helps you manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. PIM provides solutions like just-in-time access, request approval workflows, and fully integrated access reviews so you can identify, uncover, and prevent malicious activities of privileged roles in real time. |
| [Reporting and Monitoring](../reports-monitoring/plan-monitoring-and-reporting.md)| The design of your Azure AD reporting and monitoring solution depends on your legal, security, and operational requirements as well as your existing environment and processes. This article presents the various design options and guides you to the right deployment strategy. |
| [Access Reviews](../governance/deploy-access-reviews.md) | Access Reviews are an important part of your governance strategy, enabling you to know and manage who has access, and to what they have access. This article helps you plan and deploy access reviews to achieve your desired security and collaboration postures. |
| [Identity governance for applications](../governance/identity-governance-applications-prepare.md) | As part of your organization's controls to meet your compliance and risk management objectives for managing access for critical applications, you can use Azure AD features to set up and enforce appropriate access. |
## Include the right stakeholders
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
Previously updated: 11/16/2021. Last updated: 06/30/2022.
Once you have a better understanding of how your attributes will be organized an
To grant access to the appropriate people, follow these steps to assign one of the custom security attribute roles.
### Assign roles at attribute set scope

#### Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
> [!NOTE]
> Users with attribute set scope role assignments currently can see other attribute sets and custom security attribute definitions.
#### PowerShell

Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role. The following example assigns the Attribute Assignment Administrator role to a principal with an attribute set scope named Engineering.

```powershell
$roleDefinitionId = "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d"
$directoryScope = "/attributeSets/Engineering"
$principalId = "f8ca5a85-489a-49a0-b555-0a6d81e56f0d"
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinitionId -PrincipalId $principalId
```
#### Microsoft Graph API

Use the [Create unified Role Assignment](/graph/api/rbacapplication-post-roleassignments?view=graph-rest-beta&preserve-view=true) API to assign the role. The following example assigns the Attribute Assignment Administrator role to a principal with an attribute set scope named Engineering.

```http
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
Content-type: application/json

{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d",
    "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
    "directoryScopeId": "/attributeSets/Engineering"
}
```
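The same request body can be assembled from a script before posting it to Microsoft Graph. A minimal sketch of building the payload (acquiring the access token and sending the POST are assumed to happen elsewhere):

```python
import json

def build_role_assignment(role_definition_id, principal_id, attribute_set):
    """Build the unifiedRoleAssignment body for an attribute set scope."""
    body = {
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        "roleDefinitionId": role_definition_id,
        "principalId": principal_id,
        "directoryScopeId": f"/attributeSets/{attribute_set}",
    }
    return json.dumps(body)

payload = build_role_assignment("58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d",
                                "f8ca5a85-489a-49a0-b555-0a6d81e56f0d", "Engineering")
print("/attributeSets/Engineering" in payload)  # True
```

The `directoryScopeId` is what narrows the role assignment to a single attribute set rather than the whole tenant.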
### Assign roles at tenant scope

#### Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
1. Add assignments for the custom security attribute roles.
#### PowerShell

Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role. For more information, see [Assign Azure AD roles at different scopes](../roles/assign-roles-different-scopes.md).

#### Microsoft Graph API

Use the [Create unified Role Assignment](/graph/api/rbacapplication-post-roleassignments?view=graph-rest-beta&preserve-view=true) API to assign the role. For more information, see [Assign Azure AD roles at different scopes](../roles/assign-roles-different-scopes.md).
## View audit logs for attribute changes

Sometimes you need information about custom security attribute changes, such as for auditing or troubleshooting purposes. Anytime someone makes changes to definitions or assignments, the changes get logged in the [Azure AD audit logs](../reports-monitoring/concept-audit-logs.md).
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
[Azure Active Directory (Azure AD) Identity Governance](identity-governance-overview.md) allows you to balance your organization's need for security and employee productivity with the right processes and visibility. It provides you with capabilities to ensure that the right people have the right access to the right resources.
Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, only a subset of all the users in the organization will typically be authorized to have access, and access should only be permitted based on documented business requirements. Azure AD can be integrated with many popular SaaS applications, on-premises applications, and applications that your organization has developed, using [standard protocol](../fundamentals/auth-sync-overview.md) and API interfaces. Through these interfaces, Azure AD can be the authoritative source to control who has access to those applications. As you integrate your applications with Azure AD, you can then use Azure AD access reviews to recertify the users who have access to those applications, and remove access of those users who no longer need access. You can also use other features, including terms of use, Conditional Access, and entitlement management, for governing access to applications, as described in [how to govern access to applications in your environment](identity-governance-applications-prepare.md).
## Prerequisites for reviewing access
Also, while not required for reviewing access to an application, we recommend al
In order for Azure AD access reviews to be used for an application, the application must first be integrated with Azure AD. An application being integrated with Azure AD means one of two requirements must be met:

* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign into the application. Those users that are denied by a review lose their application role assignment and can no longer get a new token to sign in to the application.
* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as System for Cross-Domain Identity Management (SCIM) or by the application querying Azure AD via Microsoft Graph. Those users that are denied by a review lose their application role assignment or group membership, and when those changes are made available to the application, then the denied users will no longer have access.
If neither of those criteria is met for an application, because the application doesn't rely upon Azure AD, then access reviews can still be used; however, there may be some limitations. Users that aren't in your Azure AD, or aren't assigned to the application roles in Azure AD, won't be included in the review. Also, the changes to remove denied users won't be able to be automatically sent to the application if there is no provisioning protocol that the application supports. The organization must instead have a process to send the results of a completed review to the application.
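Where an application does support SCIM, the removal that follows a deny decision can be expressed as a standard SCIM 2.0 (RFC 7644) PATCH against the group resource. A hypothetical sketch of such a request body (the member ID is a placeholder):

```python
def scim_remove_member(group_member_id):
    """SCIM 2.0 PatchOp body removing one member from a group resource."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{
            "op": "remove",
            # Filter expression selecting the member to remove by its id.
            "path": f'members[value eq "{group_member_id}"]',
        }],
    }

body = scim_remove_member("2819c223-7f76-453a-919d-413861904646")
print(body["Operations"][0]["op"])  # remove
```

Azure AD's provisioning service issues requests of this general shape on your behalf; the sketch only illustrates what "changes are made available to the application" looks like at the protocol level.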
In order to permit a wide variety of applications and IT requirements to be addressed, there are multiple integration patterns.
|:-|:-|:-|
|A| The application supports federated SSO, Azure AD is the only identity provider, and the application doesn't rely upon group or role claims. | In this pattern, you'll configure that the application requires individual application role assignments, and that users are assigned to the application. Then to perform the review, you'll create a single access review for the application, of the users assigned to this application role. When the review completes, if a user was denied, then they will be removed from the application role. Azure AD will then no longer issue that user with federation tokens and the user will be unable to sign into that application.|
|B| If the application uses group claims in addition to application role assignments. | An application may use Azure AD group membership, distinct from application roles, to express finer-grained access. Here, you can choose based on your business requirements either to have the users who have application role assignments reviewed, or to review the users who have group memberships. If the groups do not provide comprehensive access coverage, in particular if users may have access to the application even if they aren't a member of those groups, then we recommend reviewing the application role assignments, as in pattern A above.|
|C| If the application doesn't rely solely on Azure AD for federated SSO, but does support provisioning via SCIM, or via updates to a SQL table of users or an LDAP directory. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments. For more information, see [Governing an application's existing users](identity-governance-applications-existing-users.md) to update the application role assignments in Azure AD.|
### Other options
Now that you have identified the integration pattern for the application, check
1. If the application supports federated SSO, then change to the **Conditional Access** tab. Inspect the enabled policies for this application. If there are policies that are enabled, block access, have users assigned to the policies, but no other conditions, then those users may be already blocked from being able to get federated SSO to the application.
1. Change to the **Users and groups** tab. This list contains all the users who are assigned to the application in Azure AD. If the list is empty, then a review of the application will complete immediately, since there isn't any task for the reviewer to perform.
1. If your application is integrated with pattern C, then you'll need to confirm that the users in this list are the same as those in the applications' internal data store, prior to starting the review. Azure AD does not automatically import the users or their access rights from an application, but you can [assign users to an application role via PowerShell](../manage-apps/assign-user-or-group-access-portal.md). See [Governing an application's existing users](identity-governance-applications-existing-users.md) for how to bring in users from different application data stores into Azure AD.
1. Check whether all users are assigned to the same application role, such as **User**. If users are assigned to multiple roles, then if you create an access review of the application, then all assignments to all of the application's roles will be reviewed together.
1. Check the list of directory objects assigned to the roles to confirm that there are no groups assigned to the application roles. It's possible to review this application if there is a group assigned to a role; however, a user who is a member of the group assigned to the role, and whose access was denied, won't be automatically removed from the group. We recommend first converting the application to have direct user assignments, rather than members of groups, so that a user whose access is denied during the access review can have their application role assignment removed automatically.
Once the reviews have started, you can monitor their progress, and update the ap
## Next steps

* [Plan an Azure Active Directory access reviews deployment](deploy-access-reviews.md)
* [Create an access review of a group or application](create-access-review.md)
* [Govern access to applications](identity-governance-applications-prepare.md)
active-directory Access Reviews Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md
Azure AD enables you to collaborate with users from inside your organization and
- **Too many users in privileged roles:** It's a good idea to check how many users have administrative access, how many of them are Global Administrators, and if there are any invited guests or partners that have not been removed after being assigned to do an administrative task. You can recertify the role assignment users in [Azure AD roles](../privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) such as Global Administrators, or [Azure resources roles](../privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) such as User Access Administrator in the [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) experience.
- **When automation is not possible:** You can create rules for dynamic membership on security groups or Microsoft 365 Groups, but what if the HR data is not in Azure AD or if users still need access after leaving the group to train their replacement? You can then create a review on that group to ensure those who still need access should have continued access.
- **When a group is used for a new purpose:** If you have a group that is going to be synced to Azure AD, or if you plan to enable the application Salesforce for everyone in the Sales team group, it would be useful to ask the group owner to review the group membership prior to the group being used in a different risk context.
- **Business critical data access:** For certain resources, such as [business critical applications](identity-governance-applications-prepare.md), it might be required as part of compliance processes to ask people to regularly reconfirm and give a justification on why they need continued access.
- **To maintain a policy's exception list:** In an ideal world, all users would follow the access policies to secure access to your organization's resources. However, sometimes there are business cases that require you to make exceptions. As the IT admin, you can manage this task, avoid oversight of policy exceptions, and provide auditors with proof that these exceptions are reviewed regularly.
- **Ask group owners to confirm they still need guests in their groups:** Employee access might be automated with some on premises Identity and Access Management (IAM), but not invited guests. If a group gives guests access to business sensitive content, then it's the group owner's responsibility to confirm the guests still have a legitimate business need for access.
- **Have reviews recur periodically:** You can set up recurring access reviews of users at set frequencies such as weekly, monthly, quarterly or annually, and the reviewers will be notified at the start of each review. Reviewers can approve or deny access with a friendly interface and with the help of smart recommendations.
## Where do you create reviews?
Depending on what you want to review, you will create your access review in Azure AD access reviews, Azure AD enterprise apps (in preview), Azure AD PIM, or Azure AD entitlement management.
| Access rights of users | Reviewers can be | Review created in | Reviewer experience |
| --- | --- | --- | --- |
| Assigned to a connected app | Specified reviewers</br>Self-review | Azure AD access reviews</br>Azure AD enterprise apps (in preview) | Access panel |
| Azure AD role | Specified reviewers</br>Self-review | [Azure AD PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) | Azure portal |
| Azure resource role | Specified reviewers</br>Self-review | [Azure AD PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json) | Azure portal |
| Access package assignments | Specified reviewers</br>Group members</br>Self-review | Azure AD entitlement management | Access panel |
## License requirements
Here are some example license scenarios to help you determine the number of licenses you need.
## Next steps
- [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
- [Create an access review of groups or applications](create-access-review.md)
- [Create an access review of users in an Azure AD administrative role](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md?toc=%2fazure%2factive-directory%2fgovernance%2ftoc.json)
- [Review access to groups or applications](perform-access-review.md)
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
For a demonstration of how to add a multi-stage approval to a request policy, watch the following video:
>[!VIDEO https://www.microsoft.com/videoplayer/embed/RE4d1Jw]
## Change approval settings of an existing access package assignment policy

Follow these steps to specify the approval settings for requests for the access package through a policy:
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
#Customer intent: As an administrator, I want detailed information about how I can edit an access package to include requestor information to screen requestors and get requestors the resources they need to perform their job.
# Change lifecycle settings for an access package in Azure AD entitlement management

As an access package manager, you can change the lifecycle settings for assignments in an access package at any time by editing an existing policy. If you change the expiration date for assignments on a policy, the expiration date for requests that are already in a pending approval or approved state will not change.
This article describes how to change the lifecycle settings for an existing access package assignment policy.
## Open requestor information

To ensure users have the right access to an access package, custom questions can be configured to ask users requesting access to certain access packages. Configuration options include: localization, required/optional, and text/multiple choice answer formats. Requestors will see the questions when they request the package and approvers see the answers to the questions to help them make their decision. Use the following steps to configure questions in an access package:
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
# Change request settings for an access package in Azure AD entitlement management
-As an access package manager, you can change the users who can request an access package at any time by editing the policy or adding a new policy. This article describes how to change the request settings for an existing access package.
+As an access package manager, you can change the users who can request an access package at any time by editing a policy for access package assignment requests, or adding a new policy to the access package. This article describes how to change the request settings for an existing access package assignment policy.
## Choose between one or multiple policies

The way you specify who can request an access package is with a policy. Before creating a new policy or editing an existing policy in an access package, you need to determine how many policies the access package needs.
-When you create an access package, you specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
+When you create an access package, you can specify the request, approval and lifecycle settings, which are stored on the first policy of the access package. Most access packages will have a single policy for users to request access, but a single access package can have multiple policies. You would create multiple policies for an access package if you want to allow different sets of users to be granted assignments with different request and approval settings.
For example, a single policy cannot be used to assign internal and external users to the same access package. However, you can create two policies in the same access package, one for internal users and one for external users. If there are multiple policies that apply to a user, they will be prompted at the time of their request to select the policy they would like to be assigned to. The following diagram shows an access package with two policies.
For example, a single policy cannot be used to assign internal and external user
| | |
| I want all users in my directory to have the same request and approval settings for an access package | One |
| I want all users in certain connected organizations to be able to request an access package | One |
-| I want to allow users in my directory and also users outside my directory to request an access package | Multiple |
-| I want to specify different approval settings for some users | Multiple |
-| I want some users access package assignments to expire while other users can extend their access | Multiple |
+| I want to allow users in my directory and also users outside my directory to request an access package | Two |
+| I want to specify different approval settings for some users | One for each group of users |
+| I want some users' access package assignments to expire while other users can extend their access | One for each group of users |
+| I want users to request access and other users to be assigned access by an administrator | Two |
For information about the priority logic that is used when multiple policies apply, see [Multiple policies](entitlement-management-troubleshoot.md#multiple-policies).
-## Open an existing access package and add a new policy of request settings
+## Open an existing access package and add a new policy with different request settings
If you have a set of users that should have different request and approval settings, you'll likely need to create a new policy. Follow these steps to start adding a new policy to an existing access package:
Follow these steps if you want to bypass access requests and allow administrator
> When assigning users to an access package, administrators will need to verify that the users are eligible for that access package based on the existing policy requirements. Otherwise, the users won't successfully be assigned to the access package. If the access package contains a policy that requires user requests to be approved, users can't be directly assigned to the package without necessary approval(s) from the designated approver(s).
-## Open and edit an existing policy of request settings
+## Open and edit an existing policy's request settings
-To change the request and approval settings for an access package, you need to open the corresponding policy. Follow these steps to open and edit the request settings for an access package:
+To change the request and approval settings for an access package, you need to open the corresponding policy with those settings. Follow these steps to open and edit the request settings for an access package assignment policy:
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
# Create an access review of an access package in Azure AD entitlement management
-To reduce the risk of stale access, you should enable periodic reviews of users who have active assignments to an access package in Azure AD entitlement management. You can enable reviews when you create a new access package or edit an existing access package. This article describes how to enable access reviews of access packages.
+To reduce the risk of stale access, you should enable periodic reviews of users who have active assignments to an access package in Azure AD entitlement management. You can enable reviews when you create a new access package or edit an existing access package assignment policy. This article describes how to enable access reviews of access packages.
## Prerequisites
For more information, see [License requirements](entitlement-management-overview
## Create an access review of an access package
-You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package](entitlement-management-access-package-lifecycle-policy.md) policy. Follow these steps to enable access reviews of an access package:
+You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package assignment policy](entitlement-management-access-package-lifecycle-policy.md). If you have multiple policies for different communities of users to request access, you can have independent access review schedules for each policy. Follow these steps to enable access reviews of an access package's assignments:
-1. Open the **Lifecycle** tab for an access package to specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments.
+1. Open the **Lifecycle** tab for an access package assignment policy to specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments.
1. In the **Expiration** section, set **Access package assignments expires** to **On date**, **Number of days**, **Number of hours**, or **Never**.
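The portal options in this step correspond to the `expiration` settings on an access package assignment policy in the entitlement management API. As a rough sketch (the property names follow the Microsoft Graph schema for assignment policies, but verify them against the current API reference before use):

```python
# Sketch: mapping the portal's "Expiration" choices onto the expiration block of
# an access package assignment policy. Property names are assumptions drawn from
# the Graph entitlement management API and should be verified.

def expiration_settings(mode, value=None):
    """Build the expiration block for an assignment policy.

    mode: 'date', 'days', 'hours', or 'never' (mirroring the portal options).
    """
    if mode == "date":
        return {"type": "afterDateTime", "endDateTime": value}
    if mode == "days":
        return {"type": "afterDuration", "duration": f"P{value}D"}
    if mode == "hours":
        return {"type": "afterDuration", "duration": f"PT{value}H"}
    if mode == "never":
        return {"type": "noExpiration"}
    raise ValueError(f"unknown expiration mode: {mode}")

# For example, assignments that expire after 90 days:
print(expiration_settings("days", 90))
```

Durations use ISO 8601 notation (`P90D` for 90 days, `PT8H` for 8 hours), which is what date-based APIs in Graph generally expect.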
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a new tab within access package p
> [!NOTE]
> Select **New access package** if you want to create a new access package.
- > For more information about how to create an access package see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policy-of-request-settings).
+ > For more information about how to create an access package see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policys-request-settings).
1. Change to the policy tab, select the policy and select **Edit**.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Azure AD entitlement management can help address these challenges. To learn mor
Here are some of the capabilities of entitlement management:
+- Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users do not retain access indefinitely through time-limited assignments and recurring access reviews.
- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires.
- Select connected organizations whose users can request access. When a user who is not yet in your directory requests access, and is approved, they are automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
### Access package

1. [Watch video: Day-to-day management: Things have changed](https://www.microsoft.com/videoplayer/embed/RE3LD4Z)
-1. [Open an existing policy of request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
-1. [Update the approval settings](entitlement-management-access-package-approval-policy.md#change-approval-settings-of-an-existing-access-package)
+1. [Open an existing policy's request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
+1. [Update the approval settings](entitlement-management-access-package-approval-policy.md#change-approval-settings-of-an-existing-access-package-assignment-policy)
### Access package

1. [Watch video: Day-to-day management: Things have changed](https://www.microsoft.com/videoplayer/embed/RE3LD4Z)
1. [Remove users that no longer need access](entitlement-management-access-package-assignments.md)
-1. [Open an existing policy of request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
+1. [Open an existing policy's request settings](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
1. [Add users that need access](entitlement-management-access-package-request-policy.md#for-users-in-your-directory)

### Access package
-1. [If users need different lifecycle settings, add a new policy to the access package](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-of-request-settings)
+1. [If users need different lifecycle settings, add a new policy to the access package](entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings)
1. [Directly assign specific users to the access package](entitlement-management-access-package-assignments.md#directly-assign-a-user)

## Assignments and reports
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
+
+ Title: Define organizational policies for governing access to applications in your environment | Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can define policies for how users should obtain access to your business critical applications integrated with Azure AD.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
+ Last updated : 6/28/2022
+# Define organizational policies for governing access to applications in your environment
+
+Once you've identified one or more applications that you want to use Azure AD to [govern access](identity-governance-applications-prepare.md), write down the organization's policies for determining which users should have access, and any other constraints that the system should provide.
+
+## Identify applications and their roles in scope
+
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. If such an application already exists in your environment, you may already have documented the access policies that determine who should have access to it. If not, you may need to consult with various stakeholders, such as compliance and risk management teams, to ensure that the policies being used to automate access decisions are appropriate for your scenario.
+
+1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically place broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD in provisioning or in claims issued using federation SSO protocols. Finally, there may be roles that don't surface in Azure AD at all: perhaps the application doesn't permit defining its administrators in Azure AD, instead relying upon its own authorization rules to identify them.
+ > [!Note]
+ > If you're using an application from the Azure AD application gallery that supports provisioning, then Azure AD may import the roles defined in the application and automatically update the application manifest with those roles once provisioning is configured.
+
+1. **Select which roles and groups are to have their membership governed in Azure AD.** Based on compliance and risk management requirements, organizations often prioritize those roles or groups that give privileged access or access to sensitive information.
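The roles an application surfaces to Azure AD appear in the app registration's manifest under `appRoles`. As a small sketch of step 1 (the manifest below is illustrative, not from a real tenant):

```python
import json

# Sketch: collecting the roles an application provides, from a sample app
# registration manifest. Real manifests come from the Azure portal or the Graph
# application object; this inline JSON is a stand-in for illustration.
manifest = json.loads("""
{
  "displayName": "Sales App",
  "appRoles": [
    {"id": "00000000-0000-0000-0000-000000000001", "displayName": "User",
     "value": "User", "isEnabled": true},
    {"id": "00000000-0000-0000-0000-000000000002", "displayName": "Administrator",
     "value": "Admin", "isEnabled": true}
  ]
}
""")

# Keep only the enabled roles; these are the candidates to govern in Azure AD.
roles = [r["value"] for r in manifest["appRoles"] if r["isEnabled"]]
print(roles)  # ['User', 'Admin']
```

Roles that never surface in the manifest (for example, administrators defined only inside the application's own database) won't show up this way and have to be inventoried separately.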
+
+## Define the organization's policy with prerequisites and other constraints for access to the application
+
+In this section, you'll write down the organizational policies you plan to use to determine access to the application. You can record this as a table in a spreadsheet, for example:
+
+|Role|Prerequisite for access|Approvers|Default duration of access|Separation of duties constraints|Conditional access policies|
+|:--|-|-|-|-|-|
+|*Western Sales*|Member of sales team|user's manager|Yearly review|Cannot have *Eastern Sales* access|Multifactor authentication (MFA) and registered device required for access|
+|*Western Sales*|Any employee outside of sales|head of Sales department|90 days|N/A|MFA and registered device required for access|
+|*Western Sales*|Non-employee sales rep|head of Sales department|30 days|N/A|MFA required for access|
+|*Eastern Sales*|Member of sales team|user's manager|Yearly review|Cannot have *Western Sales* access|MFA and registered device required for access|
+|*Eastern Sales*|Any employee outside of sales|head of Sales department|90 days|N/A|MFA and registered device required for access|
+|*Eastern Sales*|Non-employee sales rep|head of Sales department|30 days|N/A|MFA required for access|
+
+1. **Identify if there are prerequisite requirements, standards that a user must meet before they're given access to an application.** For example, under normal circumstances, only full-time employees, or those in a particular department or cost center, should be allowed to have access to a particular department's application. Also, you may require the entitlement management policy for a user from some other department requesting access to have one or more additional approvers. While having multiple stages of approval may slow the overall process of a user gaining access, these extra stages ensure access requests are appropriate and decisions are accountable. For example, requests for access by an employee could have two stages of approval, first by the requesting user's manager, and second by one of the resource owners responsible for data held in the application.
+
+1. **Determine how long a user who has been approved for access should have that access, and when it should go away.** For many applications, a user might retain access indefinitely, until they're no longer affiliated with the organization. In some situations, access may be tied to particular projects or milestones, so that when the project ends, access is removed automatically. Or, if only a few users are using an application through a policy, you may configure quarterly or yearly reviews of everyone's access through that policy, so that there's regular oversight. These processes can ensure users lose access eventually when access is no longer needed, even if there isn't a pre-determined project end date.
+
+1. **Inquire if there are separation of duties constraints.** For example, you may have an application with two roles, *Western Sales* and *Eastern Sales*, and you want to ensure that a user can only have one sales territory at a time. Include a list of any pairs of roles that are incompatible for your application, so that if a user has one role, they aren't allowed to request the second role.
+
+1. **Select the appropriate conditional access policy for access to the application.** We recommend that you analyze your applications and group them into applications that have the same resource requirements for the same users. If this is the first federated SSO application you're integrating with Azure AD for identity governance, you may need to create a new conditional access policy to express constraints, such as requirements for multifactor authentication (MFA) or location-based access. You can also require users to agree to [a terms of use](../conditional-access/require-tou.md). See [plan a conditional access deployment](../conditional-access/plan-conditional-access.md) for more considerations on how to define a conditional access policy.
+
+1. **Determine how exceptions to your criteria should be handled.** For example, an application may typically only be available for designated employees, but an auditor or vendor may need temporary access for a specific project. Or, an employee who is traveling may require access from a location that is normally blocked as your organization has no presence in that location. In these situations, you may choose to have an additional entitlement management policy for approval, with different stages, a different time limit, or a different approver. A vendor who is signed in as a guest user in your Azure AD tenant may not have a manager, so instead their access requests could be approved by a sponsor for their organization, or by a resource owner, or a security officer.
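The policy table and the separation of duties constraints above can also be recorded as data, so scripts can check a proposed assignment against them before it reaches an approver. A minimal sketch, using the example *Western Sales*/*Eastern Sales* roles from the table:

```python
# Sketch: the organizational policy table, recorded as data for review and
# scripted checks. Values mirror the example table in this article; adapt the
# fields to whatever your organization actually records.

policies = [
    {"role": "Western Sales", "prerequisite": "Member of sales team",
     "approver": "user's manager", "duration": "Yearly review"},
    {"role": "Eastern Sales", "prerequisite": "Member of sales team",
     "approver": "user's manager", "duration": "Yearly review"},
]

# Pairs of roles a single user may not hold at the same time.
incompatible_pairs = {frozenset({"Western Sales", "Eastern Sales"})}

def violates_separation_of_duties(existing_roles, requested_role):
    """True if granting requested_role would combine an incompatible pair."""
    return any(frozenset({held, requested_role}) in incompatible_pairs
               for held in existing_roles)

print(violates_separation_of_duties({"Western Sales"}, "Eastern Sales"))  # True
print(violates_separation_of_duties(set(), "Eastern Sales"))              # False
```

In Azure AD entitlement management, the same constraint is expressed by marking access packages as incompatible with one another, as described in the deployment article.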
+
+While the stakeholders review the organizational policy for who should have access, you can begin [integrating the application](identity-governance-applications-integrate.md) with Azure AD. That way you'll be ready at a later step to [deploy the organization-approved policies](identity-governance-applications-deploy.md) for access in Azure AD identity governance.
+
+## Next steps
+
+- [Integrate an application with Azure AD](identity-governance-applications-integrate.md)
+- [Deploy governance policies](identity-governance-applications-deploy.md)
+
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
+
Title: Deploying policies for governing access to applications integrated with Azure AD | Microsoft Docs
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can use entitlement management and other identity governance features to enforce the policies for access.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
+ Last updated : 6/28/2022
+# Deploying organizational policies for governing access to applications integrated with Azure AD
++
+In previous sections, you [defined your governance policies for an application](identity-governance-applications-define.md) and [integrated that application with Azure AD](identity-governance-applications-integrate.md). In this section, you'll configure the Azure AD conditional access and entitlement management features to control ongoing access to your applications. You'll establish:
+* Conditional access policies, for how a user authenticates to Azure AD for an application integrated with Azure AD for single sign-on
+* Entitlement management policies, for how a user obtains and keeps assignments to application roles and membership in groups
+* Access review policies, for how often group memberships are reviewed
+
+Once these policies are deployed, you can then monitor the ongoing behavior of Azure AD as users request and are assigned access to the application.
+
+## Deploy conditional access policies for SSO enforcement
+
+In this section, you'll establish the Conditional Access policies that are in scope for determining whether an authorized user is able to sign into the app, based on factors like the user's authentication strength or device status.
+
+Conditional access is only possible for applications that rely upon Azure AD for single sign-on (SSO). If the application can't be integrated for SSO, skip ahead to the next section.
+
+1. **Upload the terms of use (TOU) document, if needed.** If you require users to accept a terms of use (TOU) prior to accessing the application, then create and [upload the TOU document](../conditional-access/terms-of-use.md) so that it can be included in a conditional access policy.
+1. **Verify users are ready for Azure Active Directory Multi-Factor Authentication.** We recommend requiring Azure AD Multi-Factor Authentication for business critical applications integrated via federation. For these applications, there should be a policy that requires the user to have met a multi-factor authentication requirement prior to Azure AD permitting them to sign into the application. Some organizations may also block access by locations, or [require the user to access from a registered device](../conditional-access/howto-conditional-access-policy-compliant-device.md). If there's no suitable policy already that includes the necessary conditions for authentication, location, device and TOU, then [add a policy to your conditional access deployment](../conditional-access/plan-conditional-access.md).
+1. **Bring the application into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to have it apply to this application as well, to avoid having a large number of policies. Once you have made the updates, check to ensure that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md).
+1. **Create a recurring access review if any users will need temporary policy exclusions**. In some cases, it may not be possible to immediately enforce conditional access policies for every authorized user. For example, some users may not have an appropriate registered device. If it's necessary to exclude one or more users from the CA policy and allow them access, then configure an access review for the group of [users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md).
+1. **Document the token lifetime and applications' session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
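The policy described in the steps above can also be created through the Graph API (`POST /identity/conditionalAccess/policies`). A hedged sketch of the request body, requiring MFA plus a compliant device, with an exclusion group for the recurring access review; the IDs are placeholders, and the property names should be verified against the current `conditionalAccessPolicy` schema:

```python
# Sketch: a conditional access policy body expressing the requirements above.
# All IDs are placeholders; verify property names against the Graph
# conditionalAccessPolicy resource documentation before use.

def ca_policy(app_id, exclusion_group_id):
    return {
        "displayName": "Require MFA and compliant device - governed app",
        # Start in report-only mode to observe impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "applications": {"includeApplications": [app_id]},
            "users": {"includeUsers": ["All"],
                      "excludeGroups": [exclusion_group_id]},
        },
        # AND: the user must satisfy every listed control to get access.
        "grantControls": {"operator": "AND",
                          "builtInControls": ["mfa", "compliantDevice"]},
    }

policy = ca_policy("<application-id>", "<exclusion-group-id>")
print(policy["grantControls"]["builtInControls"])
```

Keeping the exclusion group referenced here under a recurring access review, as step 4 recommends, prevents temporary exemptions from becoming permanent.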
+
+## Deploy entitlement management policies for automating access assignment
+
+In this section, you'll configure Azure AD entitlement management so users can request access to your application's roles or to groups used by the application. To perform these tasks, you'll need to be in the *Global Administrator* or *Identity Governance Administrator* role, or be [delegated as a catalog creator](entitlement-management-delegate-catalog.md) and the owner of the application.
+
+1. **Access packages for governed applications should be in a designated catalog.** If you don't already have a catalog for your application governance scenario, [create a catalog](../governance/entitlement-management-catalog-create.md) in Azure AD entitlement management.
+1. **Populate the catalog with necessary resources.** Add the application, as well as any Azure AD groups that the application relies upon, [as resources in that catalog](../governance/entitlement-management-catalog-create.md).
+1. **Create an access package for each role or group that users can request.** For each of the application's roles or groups, [create an access package](../governance/entitlement-management-access-package-create.md) that includes that role or group as its resource. At this stage of configuring that access package, configure the access package assignment policy for direct assignment, so that only administrators can create assignments. In that policy, set the access review requirements for existing users, if any, so that they don't keep access indefinitely.
+1. **Configure access packages to enforce separation of duties requirements.** If you have [separation of duties](entitlement-management-access-package-incompatible.md) requirements, then configure the incompatible access packages or existing groups for your access package. If your scenario requires the ability to override a separation of duties check, then you can also [set up additional access packages for those override scenarios](entitlement-management-access-package-incompatible.md#configuring-multiple-access-packages-for-override-scenarios).
+1. **Add assignments of existing users, who already have access to the application, to the access packages.** For each access package, assign existing users of the application in that role, or members of that group, to the access package. You can [directly assign a user](entitlement-management-access-package-assignments.md) to an access package using the Azure portal, or in bulk via Graph or PowerShell.
+1. **Create policies for users to request access.** In each access package, [create additional access package assignment policies](../governance/entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings) for users to request access. Configure the approval and recurring access review requirements in that policy.
+1. **Create recurring access reviews for other groups used by the application.** If there are groups that are used by the application but aren't resource roles for an access package, then [create access reviews](create-access-review.md) for the membership of those groups.
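The bulk assignment in step 5 can be scripted by posting one assignment request per user to Microsoft Graph. This sketch only builds the request bodies; sending them requires an authenticated Graph client, the endpoint and property names follow the entitlement management API but should be verified against the current reference, and all IDs are placeholders:

```python
# Sketch: building adminAdd assignment request bodies for existing users of the
# application. Each body would be sent as
# POST https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests
# by an authenticated client; nothing is sent here.

GRAPH = "https://graph.microsoft.com/v1.0"

def admin_add_request(user_object_id, access_package_id, policy_id):
    """Body for one direct (admin-initiated) access package assignment request."""
    return {
        "requestType": "adminAdd",
        "assignment": {
            "targetId": user_object_id,           # the user's directory object ID
            "accessPackageId": access_package_id,
            "assignmentPolicyId": policy_id,      # the direct-assignment policy
        },
    }

# Placeholder user IDs standing in for the application's existing users.
existing_users = ["<user-id-1>", "<user-id-2>"]
bodies = [admin_add_request(u, "<package-id>", "<policy-id>")
          for u in existing_users]
print(len(bodies), bodies[0]["requestType"])
```

Because the target policy is the direct-assignment policy from step 3, its access review settings then apply to these migrated users as well.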
+
+## View reports on access
+
+Azure AD, in conjunction with Azure Monitor, provides several reports to help you understand who has access to an application and if they're using that access.
+
+* An administrator, or a catalog owner, can [retrieve the list of users who have access package assignments](entitlement-management-access-package-assignments.md), via the Azure portal, Graph or PowerShell.
+* You can also send the audit logs to Azure Monitor and view a history of [changes to the access package](entitlement-management-logs-and-reporting.md#view-events-for-an-access-package), in the Azure portal, or via PowerShell.
+* You can view the last 30 days of sign ins to an application in the [sign ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http).
+* You can also send the [sign in logs to Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) to archive sign in activity for up to two years.
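The report queries listed above can be expressed as Graph request URLs. A sketch that only builds the URLs (calling them needs an authenticated client with the appropriate permissions; the filter syntax is OData, and the paths should be checked against the Graph reference for your API version):

```python
# Sketch: Graph URLs for the two read-only reports above. Nothing is requested
# here; IDs are placeholders.

GRAPH = "https://graph.microsoft.com/v1.0"

def assignments_url(access_package_id):
    """Users with assignments to one access package, with the target user expanded."""
    return (f"{GRAPH}/identityGovernance/entitlementManagement/assignments"
            f"?$filter=accessPackage/id eq '{access_package_id}'&$expand=target")

def sign_ins_url(app_id):
    """Recent sign-ins to one application (the portal retains the last 30 days)."""
    return f"{GRAPH}/auditLogs/signIns?$filter=appId eq '{app_id}'"

print(sign_ins_url("<application-id>"))
```

For retention beyond 30 days, route the same sign-in data to Azure Monitor as described in the last bullet.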
+
+## Monitor to adjust entitlement management policies and access as needed
+
+At regular intervals, such as weekly, monthly or quarterly, based on the volume of application access assignment changes for your application, use the Azure portal to ensure that access is being granted in accordance with the policies. You can also ensure that the identified users for approval and review are still the correct individuals for these tasks.
+
+* **Watch for application role assignments and group membership changes.** If you have Azure AD configured to send its audit log to Azure Monitor, use the `Application role assignment activity` in Azure Monitor to [monitor and report on any application role assignments that weren't made through entitlement management](../governance/entitlement-management-access-package-incompatible.md#monitor-and-report-on-access-assignments). If there are role assignments that were created by an application owner directly, you should contact that application owner to determine if that assignment was authorized. In addition, if the application relies upon Azure AD security groups, also monitor for changes to those groups as well.
+
+* **Also watch for users granted access directly within the application.** If the following conditions are met, then it's possible for a user to obtain access to an application without being part of Azure AD, or without being added to the application's user account store by Azure AD:
+
+ * The application has a local user account store within the app
+ * The user account store is in a database or in an LDAP directory
+ * The application doesn't rely solely upon Azure AD for single sign-on.
+
+ For an application with the properties in the previous list, you should regularly check that users were only added to the application's local user store through Azure AD provisioning. If there are users that were created directly in the application, contact the application owner to determine if that assignment was authorized.
+
+* **Ensure approvers and reviewers are kept up to date.** For each access package that you configured in the previous section, ensure the access package assignment policies continue to have the correct approvers and reviewers. Update those policies if the approvers and reviewers that were previously configured are no longer present in the organization, or are in a different role.
+
+* **Validate that reviewers are making decisions during a review.** Monitor that [recurring access reviews for those access packages](entitlement-management-access-package-lifecycle-policy.md) are completing successfully, to ensure reviewers are participating and making decisions to approve or deny users' continued need for access.
+
+* **Check that provisioning and deprovisioning are working as expected.** If you had previously configured provisioning of users to the application, then when the results of a review are applied, or a user's assignment to an access package expires, Azure AD will begin deprovisioning denied users from the application. You can [monitor the process of deprovisioning users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md). If provisioning indicates an error with the application, you can [download the provisioning log](../reports-monitoring/concept-provisioning-logs.md) to investigate if there was a problem with the application.
+
+* **Update the Azure AD configuration with any role or group changes in the application.** If the application adds new roles, updates existing roles, or relies upon additional groups, then you'll need to update the access packages and access reviews to account for those new roles or groups.
+
+## Next steps
+
+- [Access reviews deployment plan](deploy-access-reviews.md)
+
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
+
+ Title: Governing an application's existing users in Azure AD with Microsoft PowerShell
+description: Planning for a successful access reviews campaign for a particular application includes identifying if there are any users in that application whose access doesn't derive from Azure AD.
+
+ Last updated : 06/24/2022
+
+#Customer intent: As an IT admin, I want to ensure access to specific applications is governed, by setting up access reviews for those applications. For this, I need to have in Azure AD the existing users of that application assigned to the application.
+# Governing an application's existing users - Microsoft PowerShell
+
+There are two common scenarios in which it's necessary to populate Azure Active Directory (Azure AD) with existing users of an application, prior to using the application with an Azure AD identity governance feature such as [access reviews](access-reviews-application-preparation.md).
+
+### Application migrated to Azure AD after using its own identity provider
+
+The first scenario is one in which the application already exists in the environment, and previously used its own identity provider or data store to track which users had access. When you change the application to rely upon Azure AD, only users who are in Azure AD and permitted access to that application can access it. As part of that configuration change, you can choose to bring the existing users from that application's data store into Azure AD, so that those users continue to have access through Azure AD. Having the users associated with the application represented in Azure AD enables Azure AD to track users with access to the application, even though the user's relationship with the application originated elsewhere, such as in an application's database or directory. Once Azure AD is aware of a user's assignment, Azure AD will be able to send updates to the application's data store when that user's attributes change, or when the user goes out of scope of the application.
+
+### Application that doesn't use Azure AD as its only identity provider
+
+The second scenario is one in which an application doesn't solely rely upon Azure AD as its identity provider. In some cases, an application might support multiple identity providers, or have its own built-in credential storage. This scenario is described as Pattern C in [preparing for an access review of user's access to an application](access-reviews-application-preparation.md). If it isn't feasible to remove other identity providers or local credential authentication from the application, then in order to use Azure AD to review who has access to that application, or to remove someone's access from that application, you'll need to create assignments in Azure AD that represent access for those users of the application who don't rely upon Azure AD for authentication. Having these assignments is necessary if you plan to review all users with access to the application as part of an access review.
+
+For example, suppose a user is in the application's data store, and Azure AD is configured to require role assignments to the application, but the user doesn't have an application role assignment in Azure AD. If the user is updated in Azure AD, no changes will be sent to the application, and if the application's role assignments are reviewed, the user won't be included in the review. To have all the users included in the review, it's necessary to have application role assignments for all users of the application.
+
+## Terminology
+
+This article illustrates the process for managing application role assignments using the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) and so uses Microsoft Graph terminology.
+
+![Terminology](./media/identity-governance-applications-existing-users/data-model-terminology.png)
+
+In Azure AD, a `ServicePrincipal` represents an application in a particular organization's directory. The `ServicePrincipal` has a property `AppRoles` that lists the roles an application supports, such as `Marketing specialist`. An `AppRoleAssignment` links a `User` to a `ServicePrincipal` and specifies which role that user has in that application.
+
+You may also be using [Azure AD entitlement management](entitlement-management-overview.md) access packages to give users time-limited access to the application. In entitlement management, an `AccessPackage` contains one or more resource roles, potentially from multiple service principals, and has an `Assignment` for each user of the access package. When you create an assignment for a user to an access package, Azure AD entitlement management automatically creates the necessary `AppRoleAssignment` for the user to each application. For more information, see the [Manage access to resources in Azure AD entitlement management](/powershell/microsoftgraph/tutorial-entitlement-management) tutorial on how to create access packages through PowerShell.
+
+## Before you begin
+
+- You must have one of the following licenses in your tenant:
+
+ - Azure AD Premium P2
+ - Enterprise Mobility + Security (EMS) E5 license
+
+- You'll need to have an appropriate administrative role. If this is the first time you're performing these steps, you'll need the `Global administrator` role to authorize the use of Microsoft Graph PowerShell in your tenant.
+- There needs to be a service principal for your application in your tenant.
+
+ - If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) through the section to Download, install, and configure the Azure AD Connect Provisioning Agent Package.
+ - If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) through the section to Download, install and configure the Azure AD Connect Provisioning Agent Package.
++
+## Collect existing users from an application
+
+The first step to ensuring all users are recorded in Azure AD is to collect the list of existing users who have access to the application. Some applications may have a built-in command to export a list of current users from their data store. In other cases, the application may rely upon an external directory or database. In some environments, the application may be located on a network segment or system that isn't appropriate for managing access to Azure AD, so you might need to extract the list of users from that directory or database, and then transfer it as a file to another system that can be used for Azure AD interactions. This section explains three approaches for getting a list of users into a comma-separated values (CSV) file:
+
+* From an LDAP directory
+* From a SQL Server database
+* From another SQL-based database
+
+### Collect existing users from an application that uses an LDAP directory
+
+This section applies to applications that use an LDAP directory as the underlying data store for users who don't authenticate to Azure AD.
+
+Many LDAP directories, such as Active Directory, include a command that outputs a list of users.
+
+1. Identify which of the users in that directory are in scope of being users of the application. This choice will be dependent upon your application's configuration. For some applications, any user who exists in an LDAP directory is a valid user. Other applications may require the user to have a particular attribute or be a member of a group in that directory.
+
+1. Run the command that retrieves that subset of users from your directory. Ensure that the output includes the attributes of users that will be used for matching with Azure AD - such as an employee ID, account name or email address. For example, this command would produce a CSV file in the current directory with the `userPrincipalName` attribute of every person in the directory.
+
+ ```powershell
+ $out_filename = ".\users.csv"
+ csvde -f $out_filename -l userPrincipalName,cn -r "(objectclass=person)"
+ ```
+1. If needed, transfer the CSV file containing the list of users to a system with the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
+1. Continue reading at the section below, **Confirm Azure AD has users for each user from the application**.
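+
+Before moving on, you can sanity-check the export. The following sketch previews the file and counts rows that lack the matching attribute; the inline sample rows are hypothetical stand-ins for your actual csvde output.
+
+```powershell
+# Hypothetical sample rows standing in for your actual csvde export
+@"
+userPrincipalName,cn
+alice@contoso.com,Alice Smith
+,Bob Jones
+"@ | Set-Content -Path .\users.csv
+
+# Preview the file and count rows that lack the matching attribute
+$preview = @(Import-Csv -Path .\users.csv)
+$missing = @($preview | Where-Object { [string]::IsNullOrEmpty($_.userPrincipalName) })
+Write-Output "$($preview.Count) rows total, $($missing.Count) missing userPrincipalName"
+```
+
+Rows without a value for the matching attribute can't be correlated with Azure AD users, so it's worth resolving them in the source before continuing.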
+
+### Collect existing users from an application's database table using a SQL Server wizard
+
+This section applies to applications that use SQL Server as the underlying data store.
+
+First, get a list of the users from the tables. Most databases provide a way to export the contents of tables to a standard file format, such as to a CSV file. If the application uses a SQL Server database, you can use the **SQL Server Import and Export Wizard** to export portions of a database. If you don't have a utility for your database, you can use the ODBC driver with PowerShell, described in the next section.
+
+1. Log in to the system where SQL Server is installed.
+1. Launch **SQL Server 2019 Import and Export (64 bit)** or the equivalent for your database.
+1. Select the existing database as the source.
+1. Select **Flat File Destination** as the destination. Provide a file name, and change the **Code page** to **65001 (UTF-8)**.
+1. Complete the wizard, and select to run immediately.
+1. Wait for the execution to complete.
+1. If needed, transfer the CSV file containing the list of users to a system with the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
+1. Continue reading at the section below, **Confirm Azure AD has users for each user from the application**.
+
+### Collect existing users from an application's database table using PowerShell
+
+This section applies to applications that use another SQL database as its underlying data store, where you're using the [ECMA Connector Host](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) to provision users into that application. If you've not yet configured the provisioning agent, use that guide to create the DSN connection file you'll use in this section.
+
+1. Log in to the system where the provisioning agent is or will be installed.
+1. Launch PowerShell.
+1. Construct a connection string for connecting to your database system. The components of a connection string depend upon the requirements of your database. If you're using SQL Server, see the [list of DSN and Connection String Keywords and Attributes](/sql/connect/odbc/dsn-connection-string-attribute). If you're using a different database, you'll need to include the mandatory keywords for connecting to that database. For example, if your database uses the fully qualified path of a DSN file, a user ID, and a password, then construct the connection string using the following commands.
+
+ ```powershell
+ $filedsn = "c:\users\administrator\documents\db.dsn"
+ $db_cs = "filedsn=" + $filedsn + ";uid=p;pwd=secret"
+ ```
+
+1. Open a connection to your database, providing that connection string, using the following commands.
+
+ ```powershell
+ $db_conn = New-Object data.odbc.OdbcConnection
+ $db_conn.ConnectionString = $db_cs
+ $db_conn.Open()
+ ```
+
+1. Construct a SQL query to retrieve the users from the database table. Be sure to include the columns that will be used to match users in the application's database with those users in Azure AD, such as an employee ID, account name or email address. For example, if your users are held in a database table named `USERS` and have columns `name` and `email`, then type the following command.
+
+ ```powershell
+    $db_query = "SELECT name,email from USERS"
+ ```
+
+1. Send the query to the database via the connection, and retrieve the results.
+
+ ```powershell
+ $result = (new-object data.odbc.OdbcCommand($db_query,$db_conn)).ExecuteReader()
+ $table = new-object System.Data.DataTable
+ $table.Load($result)
+ ```
+
+1. Write the result, the list of rows representing users that were retrieved from the query, to a CSV file.
+
+ ```powershell
+ $out_filename = ".\users.csv"
+ $table.Rows | Export-Csv -Path $out_filename -NoTypeInformation -Encoding UTF8
+ ```
+
+1. If this system doesn't have the Microsoft Graph PowerShell cmdlets installed, or doesn't have connectivity to Azure AD, then transfer the CSV file that was generated in the previous step, containing the list of users, to a system that has the [Microsoft Graph PowerShell cmdlets](https://www.powershellgallery.com/packages/Microsoft.Graph) installed.
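+
+If the exported table contains duplicate rows for the same person, you may want to collapse them on the matching column before transferring the file. A minimal sketch, assuming the matching column is named `email`; the inline sample rows are hypothetical stand-ins for your actual export.
+
+```powershell
+# Hypothetical sample rows standing in for your exported users.csv
+@"
+name,email
+Alice Smith,alice@contoso.com
+Alice S.,alice@contoso.com
+Bob Jones,bob@contoso.com
+"@ | Set-Content -Path .\users.csv
+
+# Keep one row per distinct email value
+$rows = @(Import-Csv -Path .\users.csv)
+$unique = @($rows | Sort-Object email -Unique)
+$unique | Export-Csv -Path .\users-dedup.csv -NoTypeInformation -Encoding UTF8
+Write-Output "$($rows.Count) rows reduced to $($unique.Count)"
+```
+
+Deduplicating first avoids ambiguous matches later, when each database row is correlated with a single Azure AD user.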
+
+## Confirm Azure AD has users for each user from the application
+
+Now that you have a list of all the users obtained from the application, you'll next match those users from the application's data store with users in Azure AD. Before proceeding, review the section on [matching users in the source and target systems](/azure/active-directory/app-provisioning/customize-application-attributes#matching-users-in-the-source-and-target--systems), as you'll configure Azure AD provisioning with equivalent mappings afterwards. That step will allow Azure AD provisioning to query the application's data store with the same matching rules.
+
+### Retrieve the IDs of the users in Azure AD
+
+This section shows how to interact with Azure AD using [Microsoft Graph PowerShell](https://www.powershellgallery.com/packages/Microsoft.Graph) cmdlets. The first time your organization uses these cmdlets for this scenario, you'll need to be in a Global Administrator role to consent to Microsoft Graph PowerShell being used for these scenarios in your tenant. Subsequent interactions can use a lower-privileged role, such as the User Administrator role if you anticipate creating new users, or the Application Administrator or [Identity Governance Administrator](/azure/active-directory/roles/permissions-reference#identity-governance-administrator) role if you're just managing application role assignments.
+
+1. Launch PowerShell.
+1. If you don't have the [Microsoft Graph PowerShell modules](https://www.powershellgallery.com/packages/Microsoft.Graph) already installed, install the `Microsoft.Graph.Users` module and others by using the following command:
+
+ ```powershell
+ Install-Module Microsoft.Graph
+ ```
+
+1. If you already have the modules installed, ensure you are using a recent version.
+
+ ```powershell
+ Update-Module microsoft.graph.users,microsoft.graph.identity.governance,microsoft.graph.applications
+ ```
+
+1. Connect to Azure AD.
+
+    The first time you run these scripts, you'll need to be an administrator so that you can consent to Microsoft Graph PowerShell using these permissions.
+
+ ```powershell
+ $msg = Connect-MgGraph -ContextScope Process -Scopes "User.Read.All,Application.Read.All,AppRoleAssignment.ReadWrite.All,EntitlementManagement.ReadWrite.All"
+ ```
+
+1. Read the list of users obtained from the application's data store into the PowerShell session. If the list of users was in a CSV file, then you can use the PowerShell cmdlet `Import-Csv` and provide the filename of the file from the previous section as an argument. For example, if the file is named `users.csv` and located in the current directory, type the command
+
+ ```powershell
+ $filename = ".\users.csv"
+ $dbusers = Import-Csv -Path $filename -Encoding UTF8
+ ```
+
+1. Pick the column of the users CSV file that will be matched with an attribute of a user in Azure AD.
+
+ For example, you might have users in the database where the value in the column named `EMail` is the same value as in the Azure AD attribute `mail`.
+
+ ```powershell
+ $db_match_column_name = "EMail"
+ $azuread_match_attr_name = "mail"
+ ```
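+
+    As a quick guard before running the matching script, you can confirm that the chosen column name actually appears in the imported rows — a minimal sketch, using a hypothetical sample row in place of your imported `$dbusers`:
+
+    ```powershell
+    # Hypothetical sample row standing in for the rows imported from your CSV
+    $dbusers = @([pscustomobject]@{ EMail = "alice@contoso.com"; name = "Alice" })
+    $db_match_column_name = "EMail"
+
+    # Fail fast if the column name doesn't exactly match the file's header
+    $columns = $dbusers[0].PSObject.Properties.Name
+    if ($columns -notcontains $db_match_column_name) {
+        Write-Error "Column $db_match_column_name not found; file has: $($columns -join ', ')"
+    } else {
+        Write-Output "Column $db_match_column_name found."
+    }
+    ```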
+
+1. Retrieve the IDs of those users in Azure AD.
+
+ The following PowerShell script will use the `$dbusers`, `$db_match_column_name` and `$azuread_match_attr_name` specified above, and will query Azure AD to locate a user that has a matching value for each record in the source file. If there are many users in the database, this script may take several minutes to complete.
+
+ ```powershell
+ $dbu_not_queried_list = @()
+ $dbu_not_matched_list = @()
+ $dbu_match_ambiguous_list = @()
+ $dbu_query_failed_list = @()
+ $azuread_match_id_list = @()
+
+ foreach ($dbu in $dbusers) {
+ if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
+ $val = $dbu.$db_match_column_name
+ $escval = $val -replace "'","''"
+ $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
+ try {
+ $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu; } elseif ($ul.length -gt 1) {$dbu_match_ambiguous_list += $dbu } else {
+ $id = $ul[0].id;
+ $azuread_match_id_list += $id;
+ }
+ } catch { $dbu_query_failed_list += $dbu }
+ } else { $dbu_not_queried_list += $dbu }
+ }
+
+ ```
+
+1. View the results of the previous queries to see if any of the users in the database couldn't be located in Azure AD, due to errors or missing matches.
+
+ The following PowerShell script will display the counts of records that weren't located.
+
+ ```powershell
+ $dbu_not_queried_count = $dbu_not_queried_list.Count
+ if ($dbu_not_queried_count -ne 0) {
+ Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name."
+ }
+ $dbu_not_matched_count = $dbu_not_matched_list.Count
+ if ($dbu_not_matched_count -ne 0) {
+ Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
+ }
+ $dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
+ if ($dbu_match_ambiguous_count -ne 0) {
+    Write-Error "Unable to uniquely locate $dbu_match_ambiguous_count records in Azure AD as multiple users matched."
+ }
+ $dbu_query_failed_count = $dbu_query_failed_list.Count
+ if ($dbu_query_failed_count -ne 0) {
+ Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
+ }
+ $azuread_match_count = $azuread_match_id_list.Count
+ Write-Output "Users corresponding to $azuread_match_count records were located in Azure AD."
+ ```
+
+1. When the script completes, it will indicate an error if there were any records from the data source that weren't located in Azure AD. If not all the records for users from the application's data store could be located as users in Azure AD, then you'll need to investigate which records didn't match and why. For example, someone's email address may have been changed in Azure AD without their corresponding `mail` property being updated in the application's data source. Or, they may have already left the organization, but still be in the application's data source. Or there might be a vendor or super-admin account in the application's data source who does not correspond to any specific person in Azure AD.
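+
+    To make the follow-up easier, you can write the problem lists out to files. This is a minimal sketch; the sample values below are hypothetical stand-ins for the lists produced by the matching script.
+
+    ```powershell
+    # Hypothetical sample values; in practice these come from the matching script
+    $dbu_not_matched_list = @([pscustomobject]@{ name = "Ghost"; EMail = "ghost@contoso.com" })
+    $dbu_match_ambiguous_list = @()
+
+    # Write each non-empty problem list to its own CSV for investigation
+    if ($dbu_not_matched_list.Count -gt 0) {
+        $dbu_not_matched_list | Export-Csv -Path .\not-matched.csv -NoTypeInformation -Encoding UTF8
+    }
+    if ($dbu_match_ambiguous_list.Count -gt 0) {
+        $dbu_match_ambiguous_list | Export-Csv -Path .\ambiguous.csv -NoTypeInformation -Encoding UTF8
+    }
+    ```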
+
+1. If there were users that couldn't be located in Azure AD, but you want to have their access be reviewed or their attributes updated in the database, you'll need to create Azure AD users for the users that could not be located. You can create users in bulk using either a CSV file, as described in [bulk create users in the Azure AD portal](../enterprise-users/users-bulk-add.md), or by using the [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples) cmdlet. When doing so, ensure that the users are populated with the attributes required for Azure AD to later match these new users to the existing users in the application.
+
+1. After adding any missing users to Azure AD, then run the script from step 7 above again, and then the script from step 8, and check that no errors are reported.
+
+ ```powershell
+ $dbu_not_queried_list = @()
+ $dbu_not_matched_list = @()
+ $dbu_match_ambiguous_list = @()
+ $dbu_query_failed_list = @()
+ $azuread_match_id_list = @()
+
+ foreach ($dbu in $dbusers) {
+ if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
+ $val = $dbu.$db_match_column_name
+ $escval = $val -replace "'","''"
+ $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
+ try {
+ $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu; } elseif ($ul.length -gt 1) {$dbu_match_ambiguous_list += $dbu } else {
+ $id = $ul[0].id;
+ $azuread_match_id_list += $id;
+ }
+ } catch { $dbu_query_failed_list += $dbu }
+ } else { $dbu_not_queried_list += $dbu }
+ }
+
+ $dbu_not_queried_count = $dbu_not_queried_list.Count
+ if ($dbu_not_queried_count -ne 0) {
+ Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name."
+ }
+ $dbu_not_matched_count = $dbu_not_matched_list.Count
+ if ($dbu_not_matched_count -ne 0) {
+ Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
+ }
+ $dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
+ if ($dbu_match_ambiguous_count -ne 0) {
+    Write-Error "Unable to uniquely locate $dbu_match_ambiguous_count records in Azure AD as multiple users matched."
+ }
+ $dbu_query_failed_count = $dbu_query_failed_list.Count
+ if ($dbu_query_failed_count -ne 0) {
+ Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
+ }
+ $azuread_match_count = $azuread_match_id_list.Count
+ Write-Output "Users corresponding to $azuread_match_count records were located in Azure AD."
+ ```
+
+## Check for users who are not already assigned to the application
+
+The previous steps have confirmed that all the users in the application's data store exist as users in Azure AD. However, they may not all currently be assigned to the application's roles in Azure AD. So the next steps are to see which users don't have assignments to application roles.
+
+1. Retrieve the users who currently have assignments to the application in Azure AD.
+
+ For example, if the enterprise application is named `CORPDB1`, then type the following commands
+
+ ```powershell
+ $azuread_app_name = "CORPDB1"
+ $azuread_sp_filter = "displayName eq '" + ($azuread_app_name -replace "'","''") + "'"
+ $azuread_sp = Get-MgServicePrincipal -Filter $azuread_sp_filter -All
+ $azuread_existing_assignments = @(Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -All)
+ ```
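+
+    Note that a display-name filter can match zero or several service principals. A hedged guard, shown here with a hypothetical result object standing in for the `Get-MgServicePrincipal` output:
+
+    ```powershell
+    # Hypothetical result standing in for the Get-MgServicePrincipal output
+    $sp_results = @([pscustomobject]@{ Id = "00000000-0000-0000-0000-000000000000"; DisplayName = "CORPDB1" })
+
+    # Stop unless exactly one service principal matched the display name
+    if ($sp_results.Count -ne 1) {
+        Write-Error "Expected exactly one service principal, found $($sp_results.Count)."
+    } else {
+        $azuread_sp = $sp_results[0]
+        Write-Output "Using service principal $($azuread_sp.Id)"
+    }
+    ```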
+
+1. Compare the list of user IDs from the previous section to those users currently assigned to the application.
+
+ ```powershell
+ $azuread_not_in_role_list = @()
+ foreach ($id in $azuread_match_id_list) {
+ $found = $false
+ foreach ($existing in $azuread_existing_assignments) {
+ if ($existing.principalId -eq $id) {
+ $found = $true; break;
+ }
+ }
+ if ($found -eq $false) { $azuread_not_in_role_list += $id }
+ }
+ $azuread_not_in_role_count = $azuread_not_in_role_list.Count
+ Write-Output "$azuread_not_in_role_count users in the application's data store are not assigned to the application roles."
+ ```
+
+    If the count is 0, meaning all users are already assigned to application roles, then no further changes are needed before performing an access review.
+
+ However, if one or more users aren't currently assigned to the application roles, you'll need to add them to one of the application's roles, as described in the sections below.
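+
+    For large tenants, the nested comparison loop above can be slow. The same comparison can be done in near-linear time with a hash set — a sketch with hypothetical sample IDs standing in for the values retrieved earlier:
+
+    ```powershell
+    # Hypothetical sample IDs standing in for the values retrieved earlier
+    $azuread_match_id_list = @("id-1", "id-2", "id-3")
+    $azuread_existing_assignments = @([pscustomobject]@{ PrincipalId = "id-2" })
+
+    # Build a set of assigned principal IDs, then keep only the unassigned users
+    $assigned = [System.Collections.Generic.HashSet[string]]::new()
+    foreach ($existing in $azuread_existing_assignments) { [void]$assigned.Add($existing.PrincipalId) }
+    $azuread_not_in_role_list = @($azuread_match_id_list | Where-Object { -not $assigned.Contains($_) })
+    Write-Output "$($azuread_not_in_role_list.Count) users in the application's data store are not assigned to the application roles."
+    ```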
+
+1. Select the role of the application to assign the remaining users to.
+
+ An application may have more than one role. Use this command to list the available roles.
+
+ ```powershell
+ $azuread_sp.AppRoles | where-object {$_.AllowedMemberTypes -contains "User"} | ft DisplayName,Id
+ ```
+
+ Select the appropriate role from the list, and obtain its role ID. For example, if the role name is `Admin`, then provide that value in the following PowerShell commands.
+
+ ```powershell
+ $azuread_app_role_name = "Admin"
+ $azuread_app_role_id = ($azuread_sp.AppRoles | where-object {$_.AllowedMemberTypes -contains "User" -and $_.DisplayName -eq $azuread_app_role_name}).Id
+ if ($null -eq $azuread_app_role_id) { write-error "role $azuread_app_role_name not located in application manifest"}
+ ```
+
+## Configure application provisioning
+
+Before creating new assignments, you'll want to configure [Azure AD provisioning](/azure/active-directory/app-provisioning/user-provisioning) of Azure AD users to the application. Configuring provisioning enables Azure AD to match the users who have application role assignments in Azure AD with the users already in the application's data store.
+
+1. Ensure that the application is configured to require users to have application role assignments, so that only selected users will be provisioned to the application.
+1. If provisioning hasn't been configured for the application, then configure, but do not start, [provisioning](/azure/active-directory/app-provisioning/user-provisioning).
+
+ * If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure).
+ * If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure).
+
+1. Check the [attribute mappings](/azure/active-directory/app-provisioning/customize-application-attributes) for provisioning to that application. Make sure that *Match objects using this attribute* is set for the Azure AD attribute and column that you used in the sections above for matching. If these rules aren't using the same attributes as you used earlier, then when application role assignments are created, Azure AD may be unable to locate existing users in the application's data store, and inadvertently create duplicate users.
+1. Check that there's an attribute mapping for **isSoftDeleted** to an attribute of the application. When a user is unassigned from the application, soft-deleted in Azure AD, or blocked from sign-in, then Azure AD provisioning will update the attribute mapped to **isSoftDeleted**. If no attribute is mapped, then users who later are unassigned from the application role will continue to exist in the application's data store.
+1. If provisioning has already been enabled for the application, check that the application provisioning is not in [quarantine](/azure/active-directory/app-provisioning/application-provisioning-quarantine-status). You'll need to resolve any issues that are causing the quarantine prior to proceeding.
+
+## Create app role assignments in Azure AD
+
+For Azure AD to match the users in the application with the users in Azure AD, you'll need to create application role assignments in Azure AD.
+
+When an application role assignment is created in Azure AD for a user to an application, then
+
+ - Azure AD will query the application to determine if the user already exists.
+ - Subsequent updates to the user's attributes in Azure AD will be sent to the application.
+ - Users will remain in the application indefinitely, unless updated outside of Azure AD, or until the assignment in Azure AD is removed.
+ - On the next review of that application's role assignments, the user will be included in the review.
+ - If the user is denied in an access review, then their application role assignment will be removed, and Azure AD will notify the application that the user is blocked from sign in.
+
+1. Create application role assignments for users who don't currently have role assignments.
+
+ ```powershell
+ foreach ($u in $azuread_not_in_role_list) {
+ $res = New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -AppRoleId $azuread_app_role_id -PrincipalId $u -ResourceId $azuread_sp.Id
+ }
+ ```
+
+1. Wait 1 minute for changes to propagate within Azure AD.
+
+## Check that Azure AD provisioning has matched the existing users
+
+1. Requery Azure AD to obtain the updated list of role assignments.
+
+ ```powershell
+ $azuread_existing_assignments = @(Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $azuread_sp.Id -All)
+ ```
+
+1. Compare the list of user IDs from the previous section to those users now assigned to the application.
+
+ ```powershell
+ $azuread_still_not_in_role_list = @()
+ foreach ($id in $azuread_match_id_list) {
+ $found = $false
+ foreach ($existing in $azuread_existing_assignments) {
+ if ($existing.principalId -eq $id) {
+ $found = $true; break;
+ }
+ }
+ if ($found -eq $false) { $azuread_still_not_in_role_list += $id }
+ }
+ $azuread_still_not_in_role_count = $azuread_still_not_in_role_list.Count
+ if ($azuread_still_not_in_role_count -gt 0) {
+ Write-Output "$azuread_still_not_in_role_count users in the application's data store are not assigned to the application roles."
+ }
+ ```
+
+ If any users aren't assigned to application roles, check the Azure AD audit log for an error from a previous step.
+
+1. If the **Provisioning Status** of the application is **Off**, turn the **Provisioning Status** to **On**.
+1. Based on the guidance for [how long will it take to provision users](/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user#how-long-will-it-take-to-provision-users), wait for Azure AD provisioning to match the existing users of the application to those users just assigned.
+1. Monitor the [provisioning status](/azure/active-directory/app-provisioning/check-status-user-account-provisioning) to ensure that all users were matched successfully. If you don't see users being provisioned, check the troubleshooting guide for [no users being provisioned](/azure/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned). If you see an error in the provisioning status and are provisioning to an on-premises application, then check the [troubleshooting guide for on-premises application provisioning](/azure/active-directory/app-provisioning/on-premises-ecma-troubleshoot).
+
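If the check above reports users who still aren't assigned, you can also pull the relevant audit events from PowerShell instead of the portal. This is a sketch, assuming the Microsoft.Graph.Reports module is installed and you've connected with the `AuditLog.Read.All` scope; the one-day lookback window is an arbitrary choice:

```powershell
# Assumes: Connect-MgGraph -Scopes "AuditLog.Read.All" has already been run.
Import-Module Microsoft.Graph.Reports

# Look for app role assignment audit events from the last day that didn't succeed.
$since = (Get-Date).ToUniversalTime().AddDays(-1).ToString("yyyy-MM-ddTHH:mm:ssZ")
$events = Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add app role assignment to service principal' and activityDateTime ge $since"
$events | Where-Object { $_.Result -ne "success" } |
    Select-Object ActivityDateTime, Result, ResultReason
```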
+Once the Azure AD provisioning service has matched the users, based on the application role assignments you've created, subsequent changes will be sent to the application.
+
+## Next steps
+
+ - [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
+
+ Title: Integrate your applications for identity governance and establishing a baseline of reviewed access - Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can integrate your existing business-critical, third-party on-premises and cloud-based applications with Azure AD for identity governance scenarios.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
++ Last updated : 6/28/2022+++++
+# Integrating applications with Azure AD and establishing a baseline of reviewed access
++
+Once you've [established the policies](identity-governance-applications-define.md) for who should have access to an application, then you can [connect your application to Azure AD](../manage-apps/what-is-application-management.md) and then [deploy the policies](identity-governance-applications-deploy.md) for governing access to them.
+
+Azure AD identity governance can be integrated with many applications, using [standards](../fundamentals/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL, and LDAP. Through these standards, you can use Azure AD with many popular SaaS applications and on-premises applications, including applications that your organization has developed. This deployment plan covers how to connect your application to Azure AD and enable identity governance features to be used for that application.
+
+In order for Azure AD identity governance to be used for an application, the application must first be integrated with Azure AD. For an application to be considered integrated with Azure AD, one of two requirements must be met:
+
+* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign in to the application. Users who lose their application role assignment can no longer get a new token to sign in to the application.
+* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM or by the application querying Azure AD via Microsoft Graph.
+
+If neither of those criteria is met for an application, for example when the application doesn't rely upon Azure AD, then identity governance can still be used. However, there may be some limitations. For instance, users who aren't in your Azure AD tenant, or who aren't assigned to the application roles in Azure AD, won't be included in access reviews of the application until you add them to the application roles. For more information, see [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
+
+## Integrate the application with Azure AD to ensure only authorized users can access the application
+
+Typically, this process of integrating an application begins when you configure that application to rely upon Azure AD for user authentication, with a federated single sign-on (SSO) protocol connection, and then add provisioning. The most commonly used protocols for SSO are [SAML and OpenID Connect](../develop/active-directory-v2-protocols.md). You can read more about the tools and process to [discover and migrate application authentication to Azure AD](../manage-apps/migrate-application-authentication-to-azure-active-directory.md).
+
+Next, if the application implements a provisioning protocol, then you should configure Azure AD to provision users to the application, so that Azure AD can signal to the application when a user has been granted access or has had access removed. These provisioning signals permit the application to make automatic corrections, such as reassigning content created by an employee who has left to that employee's manager.
+
+1. Check if your application is on the [list of enterprise applications](../manage-apps/view-applications-portal.md) or [list of app registrations](../develop/app-objects-and-service-principals.md). If the application is already present in your tenant, then skip to step 5 in this section.
+1. If your application is a SaaS application that isn't already registered in your tenant, then check whether the application is available in the [application gallery](../manage-apps/overview-application-gallery.md) of applications that can be integrated for federated SSO. If it's in the gallery, then use the tutorials to integrate the application with Azure AD.
+ 1. Follow the [tutorial](../saas-apps/tutorial-list.md) to configure the application for federated SSO with Azure AD.
 1. If the application supports provisioning, [configure the application for provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+ 1. When complete, skip to the next section in this article.
 If the SaaS application isn't in the gallery, then [ask the SaaS vendor to onboard it](../manage-apps/v2-howto-app-gallery-listing.md).
+1. If this is a private or custom application, you can also select the single sign-on integration that's most appropriate, based on the location and capabilities of the application.
+
+ * If this application is in the public cloud, and it supports single sign-on, then configure single sign-on directly from Azure AD to the application.
+
+ |Application supports| Next steps|
+ |-|--|
+ | OpenID Connect | [Add an OpenID Connect OAuth application](../saas-apps/openidoauth-tutorial.md) |
+ | SAML 2.0 | Register the application and configure the application with [the SAML endpoints and certificate of Azure AD](../develop/active-directory-saml-protocol-reference.md) |
+ | SAML 1.1 | [Add a SAML-based application](../saas-apps/saml-tutorial.md) |
+
 * Otherwise, if this is an on-premises or IaaS-hosted application that supports single sign-on, then configure single sign-on from Azure AD to the application through the application proxy.
+
+ |Application supports| Next steps|
+ |-|--|
+ | SAML 2.0| Deploy the [application proxy](../app-proxy/application-proxy.md) and configure an application for [SAML SSO](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md) |
+ | Integrated Windows Auth (IWA) | Deploy the [application proxy](../app-proxy/application-proxy.md), configure an application for [Integrated Windows authentication SSO](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md), and set firewall rules to prevent access to the application's endpoints except via the proxy.|
+ | header-based authentication | Deploy the [application proxy](../app-proxy/application-proxy.md) and configure an application for [header-based SSO](../app-proxy/application-proxy-configure-single-sign-on-with-headers.md) |
+
+1. If your application has multiple roles, and relies upon Azure AD to send a user's role as part of a user signing into the application, then configure those application roles in Azure AD on your application. You can use the [app roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to add those roles.
+
+1. If the application supports provisioning, then [configure provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md) of assigned users and groups from Azure AD to that application. If this is a private or custom application, you can also select the integration that's most appropriate, based on the location and capabilities of the application.
+
+ * If this application is in the public cloud and supports SCIM, then configure provisioning of users via SCIM.
+
+ |Application supports| Next steps|
+ |-|--|
+ | SCIM | Configure an application with SCIM [for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md) |
+
 * Otherwise, if this is an on-premises or IaaS-hosted application, then configure provisioning to that application, either via SCIM or to the underlying database or directory of the application.
+
+ |Application supports| Next steps|
+ |-|--|
 | SCIM | Configure an application with the [provisioning agent for on-premises SCIM-based apps](../app-provisioning/on-premises-scim-provisioning.md)|
 | Local user accounts, stored in a SQL database | Configure an application with the [provisioning agent for on-premises SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md)|
 | Local user accounts, stored in an LDAP directory | Configure an application with the [provisioning agent for on-premises LDAP-based applications](../app-provisioning/on-premises-ldap-connector-configure.md) |
+
+1. If your application uses Microsoft Graph to query groups from Azure AD, then grant [consent](../develop/consent-framework.md) for the application to have the appropriate permissions to read from your tenant.
+
+1. Set the application's access so that **the application is only permitted for users assigned to the application**. This setting will prevent users from inadvertently seeing the application in MyApps and attempting to sign in to the application before Conditional Access policies are enabled.
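If you're configuring the application from PowerShell rather than the portal, this setting corresponds to the service principal's appRoleAssignmentRequired property. A sketch assuming the Microsoft.Graph.Applications module and the `Application.ReadWrite.All` scope; `Example App` is a hypothetical display name:

```powershell
# Assumes: Connect-MgGraph -Scopes "Application.ReadWrite.All" has already been run.
Import-Module Microsoft.Graph.Applications

# "Example App" is a placeholder; substitute your application's display name.
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Example App'"

# Require an app role assignment before users can sign in or see the app in MyApps.
Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AppRoleAssignmentRequired:$true
```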
+
+## Perform an initial access review
+
+If this is a new application your organization hasn't used before, and therefore no one has pre-existing access, or if you've already been performing access reviews for this application, then skip to the [next section](identity-governance-applications-deploy.md).
+
+However, if the application already existed in your environment, then it's possible that users have gotten access in the past through manual or out-of-band processes, and those users should now be reviewed to confirm that their access is still needed and appropriate going forward. We recommend performing an access review of the users who already have access to the application, before enabling policies for more users to be able to request access. This review sets a baseline of all users having been reviewed at least once, ensuring that those users are authorized for continued access.
+
+1. Follow the steps in [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
+1. Bring in any [existing users and create application role assignments](identity-governance-applications-existing-users.md) for them.
+1. If the application wasn't integrated for provisioning, then once the review is complete, you may need to manually update the application's internal database or directory to remove those users who were denied.
+1. Once the review has been completed and the application access updated, or if no users have access, then continue on to the next steps to deploy conditional access and entitlement management policies for the application.
+
+Now that you have a baseline that ensures existing access has been reviewed, you can [deploy the organization's policies](identity-governance-applications-deploy.md) for ongoing access and any new access requests.
+
+## Next steps
+
+- [Deploy governance policies](identity-governance-applications-deploy.md)
active-directory Identity Governance Applications Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-prepare.md
+
+ Title: Govern access for applications in your environment - Azure AD
+description: Azure Active Directory Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. These features can be used for your existing business-critical, third-party on-premises and cloud-based applications.
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
++ Last updated : 6/28/2022+++++
+# Govern access for applications in your environment
+
+Azure Active Directory (Azure AD) Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. Its features ensure that the right people have the right access to the right resources in your organization at the right time.
+
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, only a subset of all the users in the organization will typically be authorized to have access, and access should only be permitted based on documented business requirements. As part of your organization's controls for managing access, you can use Azure AD features to:
+
+* Set up appropriate access.
+* Enforce access checks.
+* Produce reports to demonstrate how those controls are being used to meet your compliance and risk management objectives.
+
+In addition to the application access governance scenario, you can also use identity governance and the other Azure AD features for other scenarios, such as [reviewing and removing users from other organizations](../governance/access-reviews-external-users.md) or [managing users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md). If your organization has multiple administrators in Azure AD or Azure, or uses B2B collaboration or self-service group management, then you should [plan an access reviews deployment](deploy-access-reviews.md) for those scenarios.
+
+## Getting started with governing access to applications
+
+Azure AD identity governance can be integrated with many applications, using [standards](../fundamentals/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL, and LDAP. Through these standards, you can use Azure AD with many popular SaaS applications, as well as on-premises applications and applications that your organization has developed. Once you've prepared your Azure AD environment, as described in the section below, the three-step plan covers how to connect an application to Azure AD and enable identity governance features to be used for that application.
+
+1. [Define your organization's policies for governing access to the application](identity-governance-applications-define.md)
+1. [Integrate the application with Azure AD](identity-governance-applications-integrate.md) to ensure only authorized users can access the application, and review users' existing access to the application to set a baseline of all users having been reviewed
+1. [Deploy those policies](identity-governance-applications-deploy.md) for controlling single sign-on (SSO) and automating access assignments for that application
+
+## Prerequisites before configuring Azure AD for identity governance
+
+Before you begin the process of governing application access from Azure AD, you should check that your Azure AD environment is appropriately configured.
+
+* **Ensure your Azure AD and Microsoft Online Services environment is ready to meet the [compliance requirements](../standards/standards-overview.md) for the applications being integrated, and is properly licensed**. Compliance is a shared responsibility among Microsoft, cloud service providers (CSPs), and organizations. To use Azure AD to govern access to applications, you must have one of the following licenses in your tenant:
+
+ * Azure AD Premium P2
+ * Enterprise Mobility + Security (EMS) E5 license
+
 Your tenant will need at least as many licenses as the number of member (non-guest) users who have access to the applications, or who can request, approve, or review access to them. With an appropriate license for those users, you can then govern access to up to 1500 applications per user.
+
+* **If you will be governing guests' access to the application, link your Azure AD tenant to a subscription for MAU billing**. This step is necessary before a guest can request or review their access. For more information, see [billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
+
+* **Check that Azure AD is already sending its audit log, and optionally other logs, to Azure Monitor.** Azure Monitor is optional, but useful for governing access to apps, as Azure AD stores audit events for only up to 30 days in its audit log. You can keep the audit data for longer than the default retention period, outlined in [How long does Azure AD store reporting data?](../reports-monitoring/reference-reports-data-retention.md), and use Azure Monitor workbooks and custom queries and reports on historical audit data. You can check whether the Azure AD configuration is using Azure Monitor in **Azure Active Directory** in the Azure portal, by selecting **Workbooks**. If this integration isn't configured, and you have an Azure subscription and are in the `Global Administrator` or `Security Administrator` role, you can [configure Azure AD to use Azure Monitor](../governance/entitlement-management-logs-and-reporting.md).
+
+* **Make sure only authorized users are in the highly privileged administrative roles in your Azure AD tenant.** Administrators in the *Global Administrator*, *Identity Governance Administrator*, *User Administrator*, *Application Administrator*, *Cloud Application Administrator*, and *Privileged Role Administrator* roles can make changes to users and their application role assignments. If the memberships of those roles haven't been reviewed recently, you'll need a user in the *Global Administrator* or *Privileged Role Administrator* role to ensure that an [access review of these directory roles](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) is started. You should also ensure that users in Azure roles, in subscriptions that hold the Azure Monitor, Logic Apps, and other resources needed for the operation of your Azure AD configuration, have been reviewed.
+
+* **Check that your tenant has appropriate isolation.** If your organization is using Active Directory on-premises, and these AD domains are connected to Azure AD, then you'll need to ensure that highly privileged administrative operations for cloud-hosted services are isolated from on-premises accounts. Check that you've [configured your systems to protect your Microsoft 365 cloud environment from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md).
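As a rough sizing check for the licensing prerequisite above, you can count the member (non-guest) users in the tenant with Microsoft Graph PowerShell. A sketch assuming the Microsoft.Graph.Users module and the `User.Read.All` scope; whether every member user actually needs a license depends on who can access, request, approve, or review access to the applications:

```powershell
# Assumes: Connect-MgGraph -Scopes "User.Read.All" has already been run.
Import-Module Microsoft.Graph.Users

# Counting with a filter is an advanced query, so ConsistencyLevel must be eventual.
Get-MgUser -Filter "userType eq 'Member'" -ConsistencyLevel eventual -CountVariable memberCount -All | Out-Null
Write-Output "Member users in the tenant: $memberCount"
```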
+
+Once you've checked that your Azure AD environment is ready, proceed to [define the governance policies](identity-governance-applications-define.md) for your applications.
+
+## Next steps
+
+- [Define governance policies](identity-governance-applications-define.md)
+- [Integrate an application with Azure AD](identity-governance-applications-integrate.md)
+- [Deploy governance policies](identity-governance-applications-deploy.md)
+
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Typically, IT delegates access approval decisions to business decision makers.
Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles.
-When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application.
+When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application. For more information, see [govern access to applications in your environment](identity-governance-applications-prepare.md).
## Privileged access lifecycle
In addition to the features listed above, additional Azure AD features frequentl
|Access requests|End users can request group membership or application access. End users, including guests from other organizations, can request access to access packages.|[Entitlement management](entitlement-management-overview.md)| |Workflow|Resource owners can define the approvers and escalation approvers for access requests and approvers for role activation requests. |[Entitlement management](entitlement-management-overview.md) and [PIM](../privileged-identity-management/pim-configure.md)| |Policy and role management|Admin can define conditional access policies for run-time access to applications. Resource owners can define policies for user's access via access packages.|[Conditional access](../conditional-access/overview.md) and [Entitlement management](entitlement-management-overview.md) policies|
-|Access certification|Admins can enable recurring access re-certification for: SaaS apps or cloud group memberships, Azure AD or Azure Resource role assignments. Automatically remove resource access, block guest access and delete guest accounts.|[Access reviews](access-reviews-overview.md), also surfaced in [PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)|
-|Fulfillment and provisioning|Automatic provisioning and deprovisioning into Azure AD connected apps, including via SCIM and into SharePoint Online sites. |[user provisioning](../app-provisioning/user-provisioning.md)|
+|Access certification|Admins can enable recurring access recertification for SaaS apps, on-premises apps, cloud group memberships, and Azure AD or Azure resource role assignments. Automatically remove resource access, block guest access, and delete guest accounts.|[Access reviews](access-reviews-overview.md), also surfaced in [PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)|
+|Fulfillment and provisioning|Automatic provisioning and deprovisioning into Azure AD connected apps, including via SCIM, LDAP, and SQL, and into SharePoint Online sites. |[user provisioning](../app-provisioning/user-provisioning.md)|
|Reporting and analytics|Admins can retrieve audit logs of recent user provisioning and sign on activity. Integration with Azure Monitor and 'who has access' via access packages.|[Azure AD reports](../reports-monitoring/overview-reports.md) and [monitoring](../reports-monitoring/overview-monitoring.md)| |Privileged access|Just-in-time and scheduled access, alerting, approval workflows for Azure AD roles (including custom roles) and Azure Resource roles.|[Azure AD PIM](../privileged-identity-management/pim-configure.md)| |Auditing|Admins can be alerted of creation of admin accounts.|[Azure AD PIM alerts](../privileged-identity-management/pim-how-to-configure-security-alerts.md)| ## Getting started
-Check out the Getting started tab of **Identity Governance** in the Azure portal to start using entitlement management, access reviews, Privileged Identity Management, and Terms of use.
+Check out the [Getting started tab](https://portal.azure.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/GettingStarted) of **Identity Governance** in the Azure portal to start using entitlement management, access reviews, Privileged Identity Management, and Terms of use, and see some common use cases.
![Identity Governance getting started](./media/identity-governance-overview/getting-started.png)
+There are also tutorials for [managing access to resources in entitlement management](entitlement-management-access-package-first.md), [onboarding external users to Azure AD through an approval process](entitlement-management-onboard-external-user.md), and [governing access to existing applications](identity-governance-applications-prepare.md). You can also automate identity governance tasks through Microsoft Graph and PowerShell.
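As an example of that automation, a minimal Microsoft Graph PowerShell session might list the access packages defined in the tenant. This is a sketch assuming the Microsoft.Graph.Identity.Governance module and the `EntitlementManagement.Read.All` scope:

```powershell
Import-Module Microsoft.Graph.Identity.Governance

# Connect with read access to entitlement management resources.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

# List the access packages defined in the tenant.
Get-MgEntitlementManagementAccessPackage | Select-Object DisplayName, Id
```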
+ If you have any feedback about Identity Governance features, click **Got feedback?** in the Azure portal to submit your feedback. The team regularly reviews your feedback.

While there is no perfect solution or recommendation for every customer, the following configuration guides also provide the baseline policies Microsoft recommends you follow to ensure a more secure and productive workforce.
+- [Plan an access reviews deployment to manage resource access lifecycle](deploy-access-reviews.md)
- [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) - [Securing privileged access](../roles/security-planning.md)
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Resources to help you migrate application access and authentication to Azure Act
| Resource | Description | |:--|:-| |[Migrating your apps to Azure AD](https://aka.ms/migrateapps/whitepaper) | This white paper presents the benefits of migration, and describes how to plan for migration in four clearly-outlined phases: discovery, classification, migration, and ongoing management. You'll be guided through how to think about the process and break down your project into easy-to-consume pieces. Throughout the document are links to important resources that will help you along the way. |
-|[Developer tutorial: AD FS to Azure AD application migration playbook for developers](https://aka.ms/adfsplaybook) | This set of ASP.NET code samples and accompanying tutorials will help you learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD). This tutorial is focused towards developers who not only need to learn configuring apps on both AD FS and Azure AD, but also become aware and confident of changes their code base will require in this process.|
+|[Developer tutorial: AD FS to Azure AD application migration playbook for developers](https://aka.ms/adfsplaybook) | This set of ASP.NET code samples and accompanying tutorials will help you learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD). This tutorial is focused towards developers who not only need to learn how to configure apps on both AD FS and Azure AD, but also become aware and confident of changes their code base will require in this process.|
| [Tool: Active Directory Federation Services Migration Readiness Script](https://aka.ms/migrateapps/adfstools) | This is a script you can run on your on-premises Active Directory Federation Services (AD FS) server to determine the readiness of apps for migration to Azure AD.| | [Deployment plan: Migrating from AD FS to password hash sync](https://aka.ms/ADFSTOPHSDPDownload) | With password hash synchronization, hashes of user passwords are synchronized from on-premises Active Directory to Azure AD. This allows Azure AD to authenticate users without interacting with the on-premises Active Directory.| | [Deployment plan: Migrating from AD FS to pass-through authentication](https://aka.ms/ADFSTOPTADPDownload)|Azure AD pass-through authentication helps users sign in to both on-premises and cloud-based applications by using the same password. This feature provides your users with a better experience since they have one less password to remember. It also reduces IT helpdesk costs because users are less likely to forget how to sign in when they only need to remember one password. When people sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.|
-| [Deployment plan: Enabling Single Sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time.
+| [Deployment plan: Enabling single sign-on to a SaaS app with Azure AD](https://aka.ms/SSODPDownload) | Single sign-on (SSO) helps you access all the apps and resources you need to do business, while signing in only once, using a single user account. For example, after a user has signed in, the user can move from Microsoft Office, to SalesForce, to Box without authenticating (for example, typing a password) a second time.
| [Deployment plan: Extending apps to Azure AD with Application Proxy](https://aka.ms/AppProxyDPDownload)| Providing access from employee laptops and other devices to on-premises applications has traditionally involved virtual private networks (VPNs) or demilitarized zones (DMZs). Not only are these solutions complex and hard to make secure, but they are costly to set up and manage. Azure AD Application Proxy makes it easier to access on-premises applications. |
-| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
-| [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step by step guidance on application migration and integration options with an example, that walks you through migrating applications from Symantec SiteMinder to Azure AD. |
+| [Deployment plans](../fundamentals/active-directory-deployment-plans.md) | Find more deployment plans for deploying features such as Azure AD multi-factor authentication, Conditional Access, user provisioning, seamless SSO, self-service password reset, and more! |
+| [Migrating apps from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/mediahandler/files/resourcefiles/migrating-applications-from-symantec-siteminder-to-azure-active-directory/Migrating-applications-from-Symantec-SiteMinder-to-Azure-Active-Directory.pdf) | Get step-by-step guidance on application migration and integration options with an example that walks you through migrating applications from Symantec SiteMinder to Azure AD. |
+| [Identity governance for applications](../governance/identity-governance-applications-prepare.md)| This guide outlines what you need to do if you're migrating identity governance for an application from a previous identity governance technology, to connect Azure AD to that application.|
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Title: Publish your application
-description: Learn how to publish your application in the Azure Active Directory application gallery.
+ Title: Submit a request to publish your application
+description: Learn how to publish your application in Azure Active Directory application gallery.
Previously updated : 1/18/2022 Last updated : 6/2/2022 +
-# Request to Publish your application in the Azure Active Directory application gallery
+# Submit a request to publish your application in Azure Active Directory application gallery
-You can publish your application in the Azure Active Directory (Azure AD) application gallery. When your application is published, it's made available as an option for users when they add applications to their tenant. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
+You can publish applications you develop in the *Azure Active Directory* (Azure AD) application gallery, which is a catalog of thousands of apps. When you publish your applications, they're made publicly available for users to add to their tenants. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
-To publish your application in the gallery, you need to complete the following tasks:
+To publish your application in the Azure AD gallery, you need to complete the following tasks:
- Make sure that you complete the prerequisites. - Create and publish documentation.
To publish your application in the gallery, you need to complete the following t
- Join the Microsoft partner network. ## Prerequisites-- To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).-- Support for single sign-on (SSO). To learn more about the supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
- - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
- - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/) to be listed in the gallery. The enterprise gallery applications must support multiple user configurations and not any specific user.
- - For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application.
-- Supporting provisioning is optional, but highly recommended. To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
+- Implement support for *single sign-on* (SSO). To learn more about supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
+ - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
+ - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/). Enterprise gallery applications must support multiple user configurations and not any specific user.
+ - For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be correctly implemented.
+- Provisioning is optional yet highly recommended. To learn more about Azure AD SCIM, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
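To make the SCIM recommendation above concrete, here's a minimal sketch (in Python, not any official SDK) of the SCIM 2.0 User resource shape that a provisioning endpoint returns. The field names follow RFC 7643, but the `id` and `userName` values are hypothetical examples.

```python
import json

# Sketch of a SCIM 2.0 User resource as returned by a /Users endpoint.
# Field names follow RFC 7643; the id and userName values are made up.
def scim_user(user_id: str, user_name: str, active: bool = True) -> dict:
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "id": user_id,
        "userName": user_name,
        "active": active,
        "meta": {"resourceType": "User"},
    }

# Serialize the resource the way an endpoint would in an HTTP response body.
payload = json.dumps(scim_user("48af03ac", "b.simon@contoso.example"))
```

A real endpoint also needs to support filtering (for example, `userName eq "..."`), PATCH, and the ServiceProviderConfig resource; see the linked SCIM article for the full requirements.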
-You can get a free test account with all the premium Azure AD features - 90 days free and can get extended as long as you do dev work with it: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
+You can sign up for a free test development account. It's free for 90 days, and you get all of the premium Azure AD features with it. You can also extend the account if you use it for development work: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
## Create and publish documentation
-### Documentation on your site
+### Provide app documentation for your site
-Ease of adoption is a significant factor in enterprise software decisions. Clear easy-to-follow documentation supports your users in their adoption journey and reduces support costs.
+Ease of adoption is an important factor for those who make decisions about enterprise software. Documentation that is clear and easy to follow helps your users adopt technology and reduces support costs.
-Your documentation should at a minimum include the following items:
+Create documentation that includes the following information at minimum:
-- Introduction to your SSO functionality
- - Protocols supported
+- An introduction to your SSO functionality
+ - Protocols
- Version and SKU
- - Supported identity providers list with documentation links
+ - List of supported identity providers with documentation links
- Licensing information for your application - Role-based access control for configuring SSO - SSO Configuration Steps - UI configuration elements for SAML with expected values from the provider - Service provider information to be passed to identity providers-- If OIDC/OAuth, list of permissions required for consent with business justifications
+- If you use OIDC/OAuth, a list of permissions required for consent, with business justifications
- Testing steps for pilot users - Troubleshooting information, including error codes and messages - Support mechanisms for users-- Details about your SCIM endpoint, including the resources and attributes supported
+- Details about your SCIM endpoint, including supported resources and attributes
-### Documentation on the Microsoft site
+### App documentation on the Microsoft site
-When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery, and you can easily update it if you make changes to your application using your GitHub account.
+When your application is added to the gallery, documentation is created that explains the step-by-step process. For an example, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery. You can easily update the documentation if you make changes to your application by using your GitHub account.
## Submit your application
-After you've tested that your application integration works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign into the portal you are presented with one of two screens.
+After you've tested that your application works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal, you are presented with one of two screens.
- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal. - If you see a "Request Access" page, then fill in the business justification and select **Request Access**.
-After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the **Your sign-in was blocked** error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
+After your account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. If you see the "Your sign-in was blocked" error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
### Implementation-specific options
-On the Application Registration Form, select the feature that you want to enable. Select **OpenID Connect & OAuth 2.0**, **SAML 2.0/WS-Fed**, or **Password SSO(UserName & Password)** depending on the feature that your application supports.
+On the application **Registration** form, select the feature that you want to enable. Select **OpenID Connect & OAuth 2.0**, **SAML 2.0/WS-Fed**, or **Password SSO(UserName & Password)** depending on the feature that your application supports.
-If you're implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select **User Provisioning (SCIM 2.0)**. Download the schema to provide in the onboarding request. For more information, see [Export provisioning configuration and roll back to a known good state](../app-provisioning/export-import-provisioning-configuration.md). The schema that you configured is used when testing the non-gallery application to build the gallery application.
+If you're implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select **User Provisioning (SCIM 2.0)**. Download the schema to provide in the onboarding request. For more information, see [Export provisioning configuration and roll back to a known good state](../app-provisioning/export-import-provisioning-configuration.md). The schema that you configured is used when testing the non-gallery application to build the gallery application.
+
+If you wish to register an MDM application in the Azure AD gallery, select **Register an MDM app**.
You can track application requests by customer name at the Microsoft Application Network portal. For more information, see [Application requests by Customers](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/AppRequestsByCustomers.aspx). ### Timelines
-The timeline for the process of listing a SAML 2.0 or WS-Fed application in the gallery is 7 to 10 business days.
+Listing a SAML 2.0 or WS-Fed application in the gallery takes 7 to 10 business days.
:::image type="content" source="./media/howto-app-gallery-listing/timeline.png" alt-text="Screenshot that shows the timeline for listing a SAML application.":::
-The timeline for the process of listing an OpenID Connect application in the gallery is 2 to 5 business days.
+Listing an OpenID Connect application in the gallery takes 2 to 5 business days.
:::image type="content" source="./media/howto-app-gallery-listing/timeline2.png" alt-text="Screenshot that shows the timeline for listing an OpenID Connect application.":::
-The timeline for the process of listing a SCIM provisioning application in the gallery is variable and depends on numerous factors.
+The time to list a SCIM provisioning application in the gallery varies, depending on numerous factors.
-Not all applications can be onboarded. Per the terms and conditions, the choice may be made to not list an application. Onboarding applications is at the sole discretion of the onboarding team. If your application is declined, you should use the non-gallery provisioning application to satisfy your provisioning needs.
+Not all applications are onboarded. Per the terms and conditions, a decision can be made not to list an application. Onboarding applications is at the sole discretion of the onboarding team.
Here's the flow of customer-requested applications. :::image type="content" source="./media/howto-app-gallery-listing/customer-request-2.png" alt-text="Screenshot that shows the customer-requested apps flow.":::
-For any escalations, send email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com), and a response is sent as soon as possible.
+To escalate issues of any kind, send an email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). The team responds as soon as possible.
+
+## Update or remove the application from the gallery
+
+You can submit your application update request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps). The first time you try to sign in to the portal, you are presented with one of two screens.
+
+- If you receive the message "That didn't work", then you need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.
+
+- If you see a "Request Access" page, then fill in the business justification and select **Request Access**.
+
+After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page. Then select **Update my application's listing in the gallery** and choose one of the following options:
+
+* If you want to update your application's SSO feature, select **Update my application's Federated SSO feature**.
+
+* If you want to update the Password SSO feature, select **Update my application's Password SSO feature**.
+
+* If you want to upgrade your listing from Password SSO to Federated SSO, select **Upgrade my application from Password SSO to Federated SSO**.
+
+* If you want to update the MDM listing, select **Update my MDM app**.
+
+* If you want to improve the user provisioning feature, select **Improve my application's User Provisioning feature**.
+
+* If you want to remove the application from the Azure AD gallery, select **Remove my application listing from the gallery**.
+
+If you see the **Your sign-in was blocked** error while logging in, see [Troubleshoot sign-in to the Microsoft Application Network portal](troubleshoot-app-publishing.md).
+ ## Join the Microsoft partner network
-The Microsoft Partner Network provides instant access to exclusive resources, programs, tools, and connections. To join the network and create your go to market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
+The Microsoft Partner Network provides instant access to exclusive programs, tools, connections, and resources. To join the network and create your go-to-market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
## Next steps -- Learn more about managing enterprise applications in [What is application management in Azure Active Directory?](what-is-application-management.md)
+- Learn more about managing enterprise applications with [What is application management in Azure Active Directory?](what-is-application-management.md)
active-directory Anyone Home Crm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anyone-home-crm-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Anyone Home CRM | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Anyone Home CRM'
description: Learn how to configure single sign-on between Azure Active Directory and Anyone Home CRM.
Previously updated : 05/22/2020 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Anyone Home CRM
+# Tutorial: Azure AD SSO integration with Anyone Home CRM
In this tutorial, you'll learn how to integrate Anyone Home CRM with Azure Active Directory (Azure AD). When you integrate Anyone Home CRM with Azure AD, you can:
In this tutorial, you'll learn how to integrate Anyone Home CRM with Azure Activ
* Enable your users to be automatically signed-in to Anyone Home CRM with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Anyone Home CRM single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Anyone Home CRM supports **IDP** initiated SSO
-* Once you configure Anyone Home CRM you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Anyone Home CRM supports **IDP** initiated SSO.
-## Adding Anyone Home CRM from the gallery
+## Add Anyone Home CRM from the gallery
To configure the integration of Anyone Home CRM into Azure AD, you need to add Anyone Home CRM from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Anyone Home CRM** in the search box. 1. Select **Anyone Home CRM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Anyone Home CRM
+## Configure and test Azure AD SSO for Anyone Home CRM
Configure and test Azure AD SSO with Anyone Home CRM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Anyone Home CRM.
-To configure and test Azure AD SSO with Anyone Home CRM, complete the following building blocks:
+To configure and test Azure AD SSO with Anyone Home CRM, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Anyone Home CRM, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Anyone Home CRM** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Anyone Home CRM** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://app.anyonehome.com/webroot/files/simplesamlphp/www/module.php/saml/sp/metadata.php/<Anyone_Home_Provided_Unique_Value>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
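The **App Federation Metadata Url** copied in this step returns SAML metadata XML. As an illustration only, the Python sketch below parses an abbreviated, hypothetical metadata sample and reads out the Azure AD signing certificate; the entity ID and certificate value are placeholders, not real values.

```python
import xml.etree.ElementTree as ET

# Abbreviated, hypothetical SAML metadata of the kind served at the
# App Federation Metadata Url; the entityID and certificate are placeholders.
SAMPLE_METADATA = """<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/contoso/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.fromstring(SAMPLE_METADATA)
# The service provider validates SAML responses against this signing cert.
cert = root.find(
    ".//md:KeyDescriptor[@use='signing']/ds:KeyInfo/ds:X509Data/ds:X509Certificate",
    NS,
).text
```

In practice the application (or its SAML library) fetches this URL over HTTPS and refreshes the certificate automatically when Azure AD rolls it over.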
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Anyone Home CRM**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Anyone Home CRM. Work
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Anyone Home CRM tile in the Access Panel, you should be automatically signed in to the Anyone Home CRM for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Anyone Home CRM for which you set up the SSO.
-- [Try Anyone Home CRM with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Anyone Home CRM tile in the My Apps, you should be automatically signed in to the Anyone Home CRM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Anyone Home CRM with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Anyone Home CRM you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Chronicx Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chronicx-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ChronicX® | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ChronicX®'
description: Learn how to configure single sign-on between Azure Active Directory and ChronicX®.
Previously updated : 02/20/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with ChronicX®
+# Tutorial: Azure AD SSO integration with ChronicX®
-In this tutorial, you learn how to integrate ChronicX® with Azure Active Directory (Azure AD).
-Integrating ChronicX® with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ChronicX® with Azure Active Directory (Azure AD). When you integrate ChronicX® with Azure AD, you can:
-* You can control in Azure AD who has access to ChronicX®.
-* You can enable your users to be automatically signed-in to ChronicX® (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ChronicX®.
+* Enable your users to be automatically signed-in to ChronicX® with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ChronicX®, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ChronicX® single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ChronicX® single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ChronicX® supports **SP** initiated SSO
-* ChronicX® supports **Just In Time** user provisioning
-
-## Adding ChronicX® from the gallery
-
-To configure the integration of ChronicX® into Azure AD, you need to add ChronicX® from the gallery to your list of managed SaaS apps.
-
-**To add ChronicX® from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* ChronicX® supports **SP** initiated SSO.
+* ChronicX® supports **Just In Time** user provisioning.
-4. In the search box, type **ChronicX®**, select **ChronicX®** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
- ![ChronicX® in the results list](common/search-new-app.png)
+## Add ChronicX® from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ChronicX® based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ChronicX® needs to be established.
-
-To configure and test Azure AD single sign-on with ChronicX®, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ChronicX® Single Sign-On](#configure-chronicx-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ChronicX® test user](#create-chronicx-test-user)** - to have a counterpart of Britta Simon in ChronicX® that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of ChronicX® into Azure AD, you need to add ChronicX® from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ChronicX®** in the search box.
+1. Select **ChronicX®** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with ChronicX®, perform the following steps:
+## Configure and test Azure AD SSO for ChronicX®
-1. In the [Azure portal](https://portal.azure.com/), on the **ChronicX®** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with ChronicX® using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ChronicX®.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with ChronicX®, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ChronicX SSO](#configure-chronicx-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ChronicX test user](#create-chronicx-test-user)** - to have a counterpart of B.Simon in ChronicX® that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **ChronicX®** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![ChronicX® Domain and URLs single sign-on information](common/sp-identifier.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.chronicx.com/ups/processlogonSSO.jsp`
-
- b. In the **Identifier (Entity ID)** text box, type a URL:
+ a. In the **Identifier (Entity ID)** text box, type the value:
`ups.chronicx.com`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.chronicx.com/ups/processlogonSSO.jsp`
   > [!NOTE]
   > The Sign-on URL value is not real. Update the value with the actual Sign-On URL. Contact [ChronicX® Client support team](https://www.casebank.com/contact-us/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
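As an illustrative aside (not part of the tutorial itself), the documented Sign-on URL pattern can be sanity-checked before contacting the support team. The regex and helper below are assumptions for illustration, not anything ChronicX® ships:

```python
import re

# Documented pattern: https://<subdomain>.chronicx.com/ups/processlogonSSO.jsp
# (the <subdomain> value is tenant-specific; the support team supplies the real URL)
SIGN_ON_URL = re.compile(
    r"^https://[a-z0-9-]+\.chronicx\.com/ups/processlogonSSO\.jsp$"
)

def looks_like_sign_on_url(url: str) -> bool:
    """Return True if the URL matches the documented ChronicX pattern."""
    return bool(SIGN_ON_URL.match(url))

print(looks_like_sign_on_url("https://contoso.chronicx.com/ups/processlogonSSO.jsp"))  # True
print(looks_like_sign_on_url("https://chronicx.com/login"))  # False
```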
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up ChronicX®** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- a. Login URL
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- b. Azure Ad Identifier
+1. On the **Set up ChronicX®** section, copy the appropriate URL(s) as per your requirement.
- c. Logout URL
-
-### Configure ChronicX Single Sign-On
-
-To configure single sign-on on **ChronicX®** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ChronicX® support team](https://www.casebank.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
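The Federation Metadata XML downloaded above carries the Azure AD entity ID and signing certificate that the service provider needs. A minimal sketch of extracting those fields with Python's standard library, using a stub document (real metadata from the portal is far larger):

```python
import xml.etree.ElementTree as ET

# Namespaces used in SAML federation metadata.
NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

# Stub document standing in for the downloaded Federation Metadata XML.
sample_metadata = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

root = ET.fromstring(sample_metadata)
entity_id = root.get("entityID")                      # the Azure AD Identifier
cert = root.find(".//ds:X509Certificate", NS).text    # base64 signing certificate

print(entity_id)
```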
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
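The portal steps above can also be sketched as the request body a Microsoft Graph `POST /users` call would carry. The payload is only built locally here; actually sending it requires an access token, and the helper name and password placeholder are illustrative:

```python
import json

def build_test_user(name: str, upn: str, password: str) -> dict:
    # Properties match the documented Microsoft Graph user resource;
    # this sketch does not make any network call.
    return {
        "accountEnabled": True,
        "displayName": name,
        "mailNickname": name.replace(".", ""),
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,
        },
    }

payload = build_test_user("B.Simon", "B.Simon@contoso.com", "<generated-password>")
print(json.dumps(payload, indent=2))
```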
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ChronicX®.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ChronicX®**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **ChronicX®**.
-
- ![The ChronicX® link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ChronicX®.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ChronicX®**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure ChronicX SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **ChronicX®** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ChronicX® support team](https://www.casebank.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create ChronicX test user
In this section, a user called Britta Simon is created in ChronicX®. ChronicX® supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in ChronicX®, a new one is created after authentication.
> [!NOTE]
> If you need to create a user manually, contact [ChronicX® support team](https://www.casebank.com/contact-us/).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ChronicX® tile in the Access Panel, you should be automatically signed in to the ChronicX® for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the ChronicX® Sign-On URL, where you can initiate the login flow.
-## Additional Resources
+* Go to ChronicX® Sign-On URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ChronicX® tile in My Apps, you will be redirected to the ChronicX® Sign-On URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ChronicX®, you can enforce session control, which protects against exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cpqsync By Cincom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cpqsync-by-cincom-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with CPQSync by Cincom | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and CPQSync by Cincom.
+ Title: 'Tutorial: Azure AD SSO integration with Cincom CPQ'
+description: Learn how to configure single sign-on between Azure Active Directory and Cincom CPQ.
Previously updated : 08/08/2019 Last updated : 06/28/2022
-# Tutorial: Integrate CPQSync by Cincom with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Cincom CPQ
-In this tutorial, you'll learn how to integrate CPQSync by Cincom with Azure Active Directory (Azure AD). When you integrate CPQSync by Cincom with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Cincom CPQ with Azure Active Directory (Azure AD). When you integrate Cincom CPQ with Azure AD, you can:
-* Control in Azure AD who has access to CPQSync by Cincom.
-* Enable your users to be automatically signed-in to CPQSync by Cincom with their Azure AD accounts.
+* Control in Azure AD who has access to Cincom CPQ.
+* Enable your users to be automatically signed-in to Cincom CPQ with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* CPQSync by Cincom single sign-on (SSO) enabled subscription.
+* Cincom CPQ single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* CPQSync by Cincom supports **SP and IDP** initiated SSO
+* Cincom CPQ supports **SP and IDP** initiated SSO.
-## Adding CPQSync by Cincom from the gallery
+## Add Cincom CPQ from the gallery
-To configure the integration of CPQSync by Cincom into Azure AD, you need to add CPQSync by Cincom from the gallery to your list of managed SaaS apps.
+To configure the integration of Cincom CPQ into Azure AD, you need to add Cincom CPQ from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **CPQSync by Cincom** in the search box.
-1. Select **CPQSync by Cincom** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Cincom CPQ** in the search box.
+1. Select **Cincom CPQ** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for CPQSync by Cincom
+## Configure and test Azure AD SSO for Cincom CPQ
-Configure and test Azure AD SSO with CPQSync by Cincom using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CPQSync by Cincom.
+Configure and test Azure AD SSO with Cincom CPQ using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cincom CPQ.
-To configure and test Azure AD SSO with CPQSync by Cincom, complete the following building blocks:
+To configure and test Azure AD SSO with Cincom CPQ, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure CPQSync by Cincom SSO](#configure-cpqsync-by-cincom-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create CPQSync by Cincom test user](#create-cpqsync-by-cincom-test-user)** - to have a counterpart of B.Simon in CPQSync by Cincom that is linked to the Azure AD representation of user.
+2. **[Configure Cincom CPQ SSO](#configure-cincom-cpq-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Cincom CPQ test user](#create-cincom-cpq-test-user)** - to have a counterpart of B.Simon in Cincom CPQ that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **CPQSync by Cincom** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Cincom CPQ** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://cincom.oktapreview.com/sso/saml2/<CUSTOMURL>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
   `https://cincom.okta.com/`

   > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [CPQSync by Cincom Client support team](https://supportweb.cincom.com/default.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Cincom CPQ Client support team](https://supportweb.cincom.com/default.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateraw.png)
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
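The **Certificate (Raw)** file is DER-encoded; if the application side expects PEM, the bytes only need base64 wrapping. A minimal sketch, assuming nothing beyond the Python standard library (the dummy bytes stand in for the real `.cer` contents):

```python
import base64
import textwrap

def der_to_pem(der_bytes: bytes) -> str:
    # PEM is the DER payload base64-encoded in 64-character lines,
    # bracketed by BEGIN/END CERTIFICATE markers.
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

# Dummy bytes stand in for the downloaded certificate file's contents.
pem = der_to_pem(b"\x30\x82\x01\x0a" * 40)
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```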
-6. On the **Set up CPQSync by Cincom** section, copy the appropriate URL(s) based on your requirement.
+6. On the **Set up Cincom CPQ** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CPQSync by Cincom.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cincom CPQ.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **CPQSync by Cincom**.
+1. In the applications list, select **Cincom CPQ**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure CPQSync by Cincom SSO
+## Configure Cincom CPQ SSO
-To configure single sign-on on **CPQSync by Cincom** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [CPQSync by Cincom support team](https://supportweb.cincom.com/default.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Cincom CPQ** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Cincom CPQ support team](https://supportweb.cincom.com/default.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create CPQSync by Cincom test user
+### Create Cincom CPQ test user
-In this section, you create a user called B.Simon in CPQSync by Cincom. Work with [CPQSync by Cincom support team](https://supportweb.cincom.com/default.aspx) to add the users in the CPQSync by Cincom platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Cincom CPQ. Work with [Cincom CPQ support team](https://supportweb.cincom.com/default.aspx) to add the users in the Cincom CPQ platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Cincom CPQ Sign-On URL, where you can initiate the login flow.
+
+* Go to the Cincom CPQ Sign-On URL directly and initiate the login flow from there.
-When you click the CPQSync by Cincom tile in the Access Panel, you should be automatically signed in to the CPQSync by Cincom for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Cincom CPQ for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cincom CPQ tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Cincom CPQ for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
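The SP-initiated option above rides on the SAML HTTP-Redirect binding: the application's AuthnRequest is DEFLATE-compressed, base64-encoded, and URL-encoded into a `SAMLRequest` query parameter. A sketch of that encoding with the standard library; the XML stub and endpoint shown are illustrative, not a complete AuthnRequest:

```python
import base64
import urllib.parse
import zlib

# Stub AuthnRequest; a real one carries Issuer, AssertionConsumerServiceURL, etc.
authn_request = '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="_1"/>'

def redirect_param(xml: str) -> str:
    # Raw DEFLATE (strip the 2-byte zlib header and 4-byte checksum),
    # then base64 and URL-encode, per the SAML HTTP-Redirect binding.
    deflated = zlib.compress(xml.encode("utf-8"))[2:-4]
    return urllib.parse.quote(base64.b64encode(deflated).decode("ascii"))

param = redirect_param(authn_request)
print(f"https://login.microsoftonline.com/common/saml2?SAMLRequest={param}")
```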
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Cincom CPQ, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Firmplay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firmplay-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with FirmPlay - Employee Advocacy for Recruiting | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FirmPlay - Employee Advocacy for Recruiting'
description: Learn how to configure single sign-on between Azure Active Directory and FirmPlay - Employee Advocacy for Recruiting.
Previously updated : 04/01/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with FirmPlay - Employee Advocacy for Recruiting
+# Tutorial: Azure AD SSO integration with FirmPlay - Employee Advocacy for Recruiting
-In this tutorial, you learn how to integrate FirmPlay - Employee Advocacy for Recruiting with Azure Active Directory (Azure AD).
-Integrating FirmPlay - Employee Advocacy for Recruiting with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate FirmPlay - Employee Advocacy for Recruiting with Azure Active Directory (Azure AD). When you integrate FirmPlay - Employee Advocacy for Recruiting with Azure AD, you can:
-* You can control in Azure AD who has access to FirmPlay - Employee Advocacy for Recruiting.
-* You can enable your users to be automatically signed-in to FirmPlay - Employee Advocacy for Recruiting (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to FirmPlay - Employee Advocacy for Recruiting.
+* Enable your users to be automatically signed-in to FirmPlay - Employee Advocacy for Recruiting with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
To configure Azure AD integration with FirmPlay - Employee Advocacy for Recruiti
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* FirmPlay - Employee Advocacy for Recruiting supports **SP** initiated SSO
+* FirmPlay - Employee Advocacy for Recruiting supports **SP** initiated SSO.
-## Adding FirmPlay - Employee Advocacy for Recruiting from the gallery
+## Add FirmPlay - Employee Advocacy for Recruiting from the gallery
To configure the integration of FirmPlay - Employee Advocacy for Recruiting into Azure AD, you need to add FirmPlay - Employee Advocacy for Recruiting from the gallery to your list of managed SaaS apps.
-**To add FirmPlay - Employee Advocacy for Recruiting from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **FirmPlay - Employee Advocacy for Recruiting**, select **FirmPlay - Employee Advocacy for Recruiting** from result panel then click **Add** button to add the application.
-
- ![FirmPlay - Employee Advocacy for Recruiting in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **FirmPlay - Employee Advocacy for Recruiting** in the search box.
+1. Select **FirmPlay - Employee Advocacy for Recruiting** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in FirmPlay - Employee Advocacy for Recruiting needs to be established.
+## Configure and test Azure AD SSO for FirmPlay - Employee Advocacy for Recruiting
-To configure and test Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting, you need to complete the following building blocks:
+Configure and test Azure AD SSO with FirmPlay - Employee Advocacy for Recruiting using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FirmPlay - Employee Advocacy for Recruiting.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure FirmPlay - Employee Advocacy for Recruiting Single Sign-On](#configure-firmplayemployee-advocacy-for-recruiting-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create FirmPlay - Employee Advocacy for Recruiting test user](#create-firmplayemployee-advocacy-for-recruiting-test-user)** - to have a counterpart of Britta Simon in FirmPlay - Employee Advocacy for Recruiting that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with FirmPlay - Employee Advocacy for Recruiting, perform the following steps:
-### Configure Azure AD single sign-on
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure FirmPlay - Employee Advocacy for Recruiting SSO](#configure-firmplayemployee-advocacy-for-recruiting-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create FirmPlay - Employee Advocacy for Recruiting test user](#create-firmplayemployee-advocacy-for-recruiting-test-user)** - to have a counterpart of Britta Simon in FirmPlay - Employee Advocacy for Recruiting that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with FirmPlay - Employee Advocacy for Recruiting, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **FirmPlay - Employee Advocacy for Recruiting** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **FirmPlay - Employee Advocacy for Recruiting** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![FirmPlay - Employee Advocacy for Recruiting Domain and URLs single sign-on information](common/sp-signonurl.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
   In the **Sign-on URL** text box, type a URL using the following pattern:
   `https://<your-subdomain>.firmplay.com/`
To configure Azure AD single sign-on with FirmPlay - Employee Advocacy for Recru
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure FirmPlay - Employee Advocacy for Recruiting Single Sign-On
-
-To configure single sign-on on **FirmPlay - Employee Advocacy for Recruiting** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com). They set this setting to have the SAML SSO connection set properly on both sides.
+### Create an Azure AD test user
-### Create an Azure AD test user
+In this section, you'll create a test user in the Azure portal called B.Simon.
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
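The portal steps above can also be expressed as a Microsoft Graph request. This is a sketch only: the payload shape follows the Graph `user` resource, the tenant domain and password are placeholders, and the authenticated HTTP call itself is omitted.

```python
import json

# Sketch: the JSON body for POST https://graph.microsoft.com/v1.0/users
# that mirrors the portal's "New user" form. Domain and password are
# placeholders for illustration only.
test_user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        # Corresponds to the value you write down from the Password box.
        "password": "<generated-password>",
        "forceChangePasswordNextSignIn": True,
    },
}

print(json.dumps(test_user, indent=2))
```

Sending this body requires an app or user token with `User.ReadWrite.All`; the portal flow above needs no such setup, which is why the tutorial uses it.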
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to FirmPlay - Employee Advocacy for Recruiting.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FirmPlay - Employee Advocacy for Recruiting.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **FirmPlay - Employee Advocacy for Recruiting**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FirmPlay - Employee Advocacy for Recruiting**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure FirmPlay - Employee Advocacy for Recruiting SSO
-2. In the applications list, select **FirmPlay - Employee Advocacy for Recruiting**.
-
- ![The FirmPlay - Employee Advocacy for Recruiting link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **FirmPlay - Employee Advocacy for Recruiting** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com). The support team configures this setting so that the SAML SSO connection is set up properly on both sides.
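Before emailing the **Certificate (Base64)** file, it can help to sanity-check it and record its fingerprint so the support team can confirm they received the right certificate. A minimal sketch using only the Python standard library; the short stand-in body here replaces the real certificate contents, which stay in the downloaded `.cer` file.

```python
import base64
import hashlib

def cert_fingerprint(pem_text: str) -> str:
    """Strip the PEM armor, base64-decode the body, and return the
    SHA-1 fingerprint in the colon-separated form most SAML tools show."""
    body = "".join(
        ln for ln in (line.strip() for line in pem_text.splitlines())
        if ln and not ln.startswith("-----")
    )
    der = base64.b64decode(body)
    digest = hashlib.sha1(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Stand-in body for illustration; a real file holds the actual certificate.
pem = "-----BEGIN CERTIFICATE-----\nAAEC\n-----END CERTIFICATE-----\n"
print(cert_fingerprint(pem))
```

The same fingerprint is visible in the portal on the SAML signing certificate, so the two can be compared before anything is sent.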
### Create FirmPlay - Employee Advocacy for Recruiting test user
-In this section, you create a user called Britta Simon in FirmPlay - Employee Advocacy for Recruiting. Work with [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com) to add the users in the FirmPlay - Employee Advocacy for Recruiting platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in FirmPlay - Employee Advocacy for Recruiting. Work with [FirmPlay - Employee Advocacy for Recruiting support team](mailto:engineering@firmplay.com) to add the users in the FirmPlay - Employee Advocacy for Recruiting platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the FirmPlay - Employee Advocacy for Recruiting tile in the Access Panel, you should be automatically signed in to the FirmPlay - Employee Advocacy for Recruiting for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the FirmPlay - Employee Advocacy for Recruiting Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to FirmPlay - Employee Advocacy for Recruiting Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the FirmPlay - Employee Advocacy for Recruiting tile in My Apps, you're redirected to the FirmPlay - Employee Advocacy for Recruiting Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure FirmPlay - Employee Advocacy for Recruiting, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Foreseecxsuite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ForeSee CX Suite | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ForeSee CX Suite'
description: Learn how to configure single sign-on between Azure Active Directory and ForeSee CX Suite.
Previously updated : 04/01/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with ForeSee CX Suite
+# Tutorial: Azure AD SSO integration with ForeSee CX Suite
-In this tutorial, you learn how to integrate ForeSee CX Suite with Azure Active Directory (Azure AD).
-Integrating ForeSee CX Suite with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ForeSee CX Suite with Azure Active Directory (Azure AD). When you integrate ForeSee CX Suite with Azure AD, you can:
-* You can control in Azure AD who has access to ForeSee CX Suite.
-* You can enable your users to be automatically signed-in to ForeSee CX Suite (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to ForeSee CX Suite.
+* Enable your users to be automatically signed-in to ForeSee CX Suite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites
To configure Azure AD integration with ForeSee CX Suite, you need the following
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ForeSee CX Suite supports **SP** initiated SSO
+* ForeSee CX Suite supports **SP** initiated SSO.
-* ForeSee CX Suite supports **Just In Time** user provisioning
+* ForeSee CX Suite supports **Just In Time** user provisioning.
-## Adding ForeSee CX Suite from the gallery
+## Add ForeSee CX Suite from the gallery
To configure the integration of ForeSee CX Suite into Azure AD, you need to add ForeSee CX Suite from the gallery to your list of managed SaaS apps.
-**To add ForeSee CX Suite from the gallery, perform the following steps:**
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ForeSee CX Suite** in the search box.
+1. Select **ForeSee CX Suite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
+## Configure and test Azure AD SSO for ForeSee CX Suite
- ![The Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with ForeSee CX Suite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ForeSee CX Suite.
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
+To configure and test Azure AD SSO with ForeSee CX Suite, perform the following steps:
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure ForeSee CX Suite SSO](#configure-foresee-cx-suite-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ForeSee CX Suite test user](#create-foresee-cx-suite-test-user)** - to have a counterpart of B.Simon in ForeSee CX Suite that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add new application, click **New application** button on the top of dialog.
+## Configure Azure AD SSO
- ![The New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, type **ForeSee CX Suite**, select **ForeSee CX Suite** from result panel then click **Add** button to add the application.
+1. In the Azure portal, on the **ForeSee CX Suite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![ForeSee CX Suite in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ForeSee CX Suite based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ForeSee CX Suite needs to be established.
-
-To configure and test Azure AD single sign-on with ForeSee CX Suite, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ForeSee CX Suite Single Sign-On](#configure-foresee-cx-suite-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ForeSee CX Suite test user](#create-foresee-cx-suite-test-user)** - to have a counterpart of Britta Simon in ForeSee CX Suite that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with ForeSee CX Suite, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **ForeSee CX Suite** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin
c. After the metadata file is successfully uploaded, the **Identifier** value gets auto populated in Basic SAML Configuration section.
- ![ForeSee CX Suite Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign-on URL** text box, type a URL:
+ d. In the **Sign-on URL** text box, type a URL:
`https://cxsuite.foresee.com/`
- b. In the **Identifier** textbox, type a URL using the following pattern: https:\//www.okta.com/saml2/service-provider/\<UniqueID>
+ e. In the **Identifier** textbox, type a URL using the following pattern: https:\//www.okta.com/saml2/service-provider/\<UniqueID>
> [!Note] > If the **Identifier** value does not get auto populated, fill in the value manually according to the above pattern. The Identifier value is not real. Update this value with the actual Identifier. Contact [ForeSee CX Suite Client support team](mailto:support@foresee.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
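A quick sanity check against the pattern above can catch a mistyped **Identifier** before saving. The character class for `<UniqueID>` in this sketch is an assumption (the docs don't specify its format), so adjust it to whatever value the ForeSee support team provides.

```python
import re

# Pattern from the docs: https://www.okta.com/saml2/service-provider/<UniqueID>
# The [A-Za-z0-9]+ class for <UniqueID> is an assumption, not documented.
IDENTIFIER_RE = re.compile(
    r"^https://www\.okta\.com/saml2/service-provider/[A-Za-z0-9]+$"
)

def looks_like_identifier(value: str) -> bool:
    """Return True if the value matches the expected Identifier shape."""
    return IDENTIFIER_RE.fullmatch(value) is not None

print(looks_like_identifier("https://www.okta.com/saml2/service-provider/abc123DEF"))
print(looks_like_identifier("https://example.com/not-an-identifier"))
```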
To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure ForeSee CX Suite Single Sign-On
-
-To configure single sign-on on **ForeSee CX Suite** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ForeSee CX Suite support team](mailto:support@foresee.com). They set this setting to have the SAML SSO connection set properly on both sides.
+### Create an Azure AD test user
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ForeSee CX Suite.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ForeSee CX Suite**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ForeSee CX Suite.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **ForeSee CX Suite**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The ForeSee CX Suite link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure ForeSee CX Suite SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **ForeSee CX Suite** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [ForeSee CX Suite support team](mailto:support@foresee.com). The support team configures this setting so that the SAML SSO connection is set up properly on both sides.
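When gathering the **Federation Metadata XML** to send to the support team, the values of interest can be pulled out programmatically. A sketch with the Python standard library, assuming the standard SAML 2.0 metadata namespaces; the tiny inline string stands in for the real downloaded file, and `TENANT-ID` is a placeholder.

```python
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

# Tiny stand-in for the downloaded Federation Metadata XML.
metadata = """<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    entityID="https://sts.windows.net/TENANT-ID/">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo><ds:X509Data><ds:X509Certificate>AAEC</ds:X509Certificate></ds:X509Data></ds:KeyInfo>
    </md:KeyDescriptor>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>"""

root = ET.fromstring(metadata)
entity_id = root.get("entityID")                       # the Azure AD Identifier
cert_b64 = root.find(".//ds:X509Certificate", NS).text # the signing certificate

print(entity_id)
print(cert_b64)
```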
### Create ForeSee CX Suite test user
-In this section, you create a user called Britta Simon in ForeSee CX Suite. Work with [ForeSee CX Suite support team](mailto:support@foresee.com) to add the users or the domain that must be added to an allow list for the ForeSee CX Suite platform. If the domain is added by the team, users will get automatically provisioned to the ForeSee CX Suite platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in ForeSee CX Suite. Work with [ForeSee CX Suite support team](mailto:support@foresee.com) to add the users or the domain that must be added to an allowlist for the ForeSee CX Suite platform. If the domain is added by the team, users will get automatically provisioned to the ForeSee CX Suite platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ForeSee CX Suite tile in the Access Panel, you should be automatically signed in to the ForeSee CX Suite for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the ForeSee CX Suite Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to ForeSee CX Suite Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ForeSee CX Suite tile in My Apps, you're redirected to the ForeSee CX Suite Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ForeSee CX Suite, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in G Suite for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the G Suite API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes.
+
+> [!NOTE]
+> G Suite provisioning currently supports only the use of `primaryEmail` as the matching attribute.
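To make the matching behavior concrete, here is a hypothetical sketch of what matching on `primaryEmail` means: for each Azure AD user, the provisioning service looks for an existing G Suite account with the same primary email, updating it if found and creating one otherwise. Function and field names here are illustrative, not the service's actual implementation.

```python
def plan_provisioning(azure_users, gsuite_users):
    """Illustrative only: decide create vs. update by matching on primaryEmail."""
    existing = {u["primaryEmail"].lower() for u in gsuite_users}
    plan = {"update": [], "create": []}
    for user in azure_users:
        # userPrincipalName is mapped to primaryEmail in the default mapping.
        email = user["userPrincipalName"].lower()
        action = "update" if email in existing else "create"
        plan[action].append(email)
    return plan

plan = plan_provisioning(
    azure_users=[{"userPrincipalName": "B.Simon@contoso.com"},
                 {"userPrincipalName": "new.hire@contoso.com"}],
    gsuite_users=[{"primaryEmail": "b.simon@contoso.com"}],
)
print(plan)
```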
+ |Attribute|Type|
+ |---|---|
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Insigniasamlsso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insigniasamlsso-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Insignia SAML SSO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Insignia SAML SSO | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Insignia SAML SSO.
Previously updated : 03/26/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with Insignia SAML SSO
+# Tutorial: Azure AD SSO integration with Insignia SAML SSO
-In this tutorial, you learn how to integrate Insignia SAML SSO with Azure Active Directory (Azure AD).
-Integrating Insignia SAML SSO with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Insignia SAML SSO with Azure Active Directory (Azure AD). When you integrate Insignia SAML SSO with Azure AD, you can:
-* You can control in Azure AD who has access to Insignia SAML SSO.
-* You can enable your users to be automatically signed-in to Insignia SAML SSO (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to Insignia SAML SSO.
+* Enable your users to be automatically signed-in to Insignia SAML SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites To configure Azure AD integration with Insignia SAML SSO, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Insignia SAML SSO single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Insignia SAML SSO single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Insignia SAML SSO supports **SP** initiated SSO
+* Insignia SAML SSO supports **SP** initiated SSO.
-## Adding Insignia SAML SSO from the gallery
+## Add Insignia SAML SSO from the gallery
To configure the integration of Insignia SAML SSO into Azure AD, you need to add Insignia SAML SSO from the gallery to your list of managed SaaS apps.
-**To add Insignia SAML SSO from the gallery, perform the following steps:**
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Insignia SAML SSO** in the search box.
+1. Select **Insignia SAML SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
+## Configure and test Azure AD SSO for Insignia SAML SSO
- ![The Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with Insignia SAML SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Insignia SAML SSO.
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
+To configure and test Azure AD SSO with Insignia SAML SSO, perform the following steps:
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Insignia SAML SSO](#configure-insignia-saml-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Insignia SAML SSO test user](#create-insignia-saml-sso-test-user)** - to have a counterpart of B.Simon in Insignia SAML SSO that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add new application, click **New application** button on the top of dialog.
+## Configure Azure AD SSO
- ![The New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, type **Insignia SAML SSO**, select **Insignia SAML SSO** from result panel then click **Add** button to add the application.
+1. In the Azure portal, on the **Insignia SAML SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Insignia SAML SSO in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Insignia SAML SSO based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Insignia SAML SSO needs to be established.
-
-To configure and test Azure AD single sign-on with Insignia SAML SSO, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Insignia SAML SSO Single Sign-On](#configure-insignia-saml-sso-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Insignia SAML SSO test user](#create-insignia-saml-sso-test-user)** - to have a counterpart of Britta Simon in Insignia SAML SSO that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Insignia SAML SSO, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Insignia SAML SSO** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Insignia SAML SSO Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ a. In the **Sign on URL** text box, type a URL using one of the following patterns:
- ```http
- https://<customername>.insigniails.com/ils
- https://<customername>.insigniails.com/
- https://<customername>.insigniailsusa.com/
- ```
+ | Sign on URL |
+ |-------------|
+ | `https://<customername>.insigniails.com/ils` |
+ | `https://<customername>.insigniails.com/` |
+ | `https://<customername>.insigniailsusa.com/` |
b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<customername>.insigniailsusa.com/<uniqueid>`
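As a quick sanity check, the placeholder patterns above can be expanded with your tenant's values. In this sketch, the customer name `contoso` and the unique ID `abc123` are hypothetical placeholders; the real `<uniqueid>` comes from Insignia support:

```python
# Hypothetical customer name -- replace with your Insignia tenant's value.
customer = "contoso"

sign_on_url_patterns = [
    "https://{c}.insigniails.com/ils",
    "https://{c}.insigniails.com/",
    "https://{c}.insigniailsusa.com/",
]
identifier_pattern = "https://{c}.insigniailsusa.com/{uid}"

sign_on_urls = [p.format(c=customer) for p in sign_on_url_patterns]
# "abc123" stands in for the <uniqueid> provided by Insignia support.
identifier = identifier_pattern.format(c=customer, uid="abc123")
print(sign_on_urls[0])
print(identifier)
```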
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Insignia SAML SSO Single Sign-On
-
-To configure single sign-on on **Insignia SAML SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
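The portal steps above can also be scripted against Microsoft Graph (`POST https://graph.microsoft.com/v1.0/users`). This sketch only builds the request body; authentication and the actual HTTP call are omitted, `contoso.com` is a placeholder domain, and the password is deliberately left as a placeholder:

```python
import json

# Request body for Microsoft Graph POST /users, mirroring the portal steps.
# contoso.com is a placeholder domain; choose a real password out of band.
new_user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "BSimon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<strong-password-here>",
    },
}
body = json.dumps(new_user, indent=2)
print(body)
```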
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Insignia SAML SSO.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Insignia SAML SSO**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Insignia SAML SSO.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **Insignia SAML SSO**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Insignia SAML SSO link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Insignia SAML SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Insignia SAML SSO** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx). The support team configures these settings so that the SAML SSO connection is set properly on both sides.
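Before sending the **Certificate (Base64)** file, you can sanity-check that its Base64 body decodes cleanly. This is an illustrative, stdlib-only sketch; the inline `fake_pem` stands in for your downloaded certificate file:

```python
import base64
import re

def pem_body_decodes(pem_text: str) -> bool:
    """Return True if the Base64 body of a PEM certificate decodes cleanly."""
    match = re.search(
        r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
        pem_text,
        re.S,
    )
    if not match:
        return False
    body = "".join(match.group(1).split())  # drop line breaks and spaces
    try:
        base64.b64decode(body, validate=True)
        return True
    except Exception:
        return False

# Minimal illustration with a tiny fake body; a real check would read the
# downloaded Certificate (Base64) file from disk instead.
fake_pem = "-----BEGIN CERTIFICATE-----\nTUlJQw==\n-----END CERTIFICATE-----\n"
print(pem_body_decodes(fake_pem))
```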
### Create Insignia SAML SSO test user

In this section, you create a user called Britta Simon in Insignia SAML SSO. Work with the [Insignia SAML SSO support team](http://www.insigniasoftware.com/insignia/Techsupport.aspx) to add the users in the Insignia SAML SSO platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Insignia SAML SSO tile in the Access Panel, you should be automatically signed in to the Insignia SAML SSO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This redirects to the Insignia SAML SSO Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Insignia SAML SSO Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Insignia SAML SSO tile in My Apps, you're redirected to the Insignia SAML SSO Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Insignia SAML SSO you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Iqualify Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iqualify-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with iQualify LMS | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with iQualify LMS'
description: Learn how to configure single sign-on between Azure Active Directory and iQualify LMS.
Previously updated : 03/14/2019 Last updated : 06/21/2022
-# Tutorial: Azure Active Directory integration with iQualify LMS
+# Tutorial: Azure AD SSO integration with iQualify LMS
-In this tutorial, you learn how to integrate iQualify LMS with Azure Active Directory (Azure AD).
-Integrating iQualify LMS with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate iQualify LMS with Azure Active Directory (Azure AD). When you integrate iQualify LMS with Azure AD, you can:
-* You can control in Azure AD who has access to iQualify LMS.
-* You can enable your users to be automatically signed-in to iQualify LMS (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to iQualify LMS.
+* Enable your users to be automatically signed-in to iQualify LMS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with iQualify LMS, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* iQualify LMS single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* iQualify LMS single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* iQualify LMS supports **SP and IDP** initiated SSO
-* iQualify LMS supports **Just In Time** user provisioning
+* iQualify LMS supports **SP and IDP** initiated SSO.
+* iQualify LMS supports **Just In Time** user provisioning.
-## Adding iQualify LMS from the gallery
+## Add iQualify LMS from the gallery
To configure the integration of iQualify LMS into Azure AD, you need to add iQualify LMS from the gallery to your list of managed SaaS apps.
-**To add iQualify LMS from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **iQualify LMS**, select **iQualify LMS** from result panel then click **Add** button to add the application.
-
- ![iQualify LMS in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with iQualify LMS based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in iQualify LMS needs to be established.
-
-To configure and test Azure AD single sign-on with iQualify LMS, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **iQualify LMS** in the search box.
+1. Select **iQualify LMS** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure iQualify LMS Single Sign-On](#configure-iqualify-lms-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create iQualify LMS test user](#create-iqualify-lms-test-user)** - to have a counterpart of Britta Simon in iQualify LMS that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for iQualify LMS
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with iQualify LMS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in iQualify LMS.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with iQualify LMS, perform the following steps:
-To configure Azure AD single sign-on with iQualify LMS, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure iQualify LMS SSO](#configure-iqualify-lms-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create iQualify LMS test user](#create-iqualify-lms-test-user)** - to have a counterpart of B.Simon in iQualify LMS that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **iQualify LMS** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **iQualify LMS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
+ | **Identifier** |
+ ||
+ | Production Environment: `https://<yourorg>.iqualify.com/` |
+ | Test Environment: `https://<yourorg>.iqualify.io` |
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
-
- 1. In the **Identifier** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
- * Production Environment: `https://<yourorg>.iqualify.com/`
- * Test Environment: `https://<yourorg>.iqualify.io`
+ | **Reply URL** |
+ |--|
+ | Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback` |
+ | Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback` |
- 2. In the **Reply URL** text box, type a URL using the following pattern:
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- * Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback`
- * Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback`
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
-
- * Production Environment: `https://<yourorg>.iqualify.com/login`
- * Test Environment: `https://<yourorg>.iqualify.io/login`
+ | **Sign-on URL** |
+ |-|
+ | Production Environment: `https://<yourorg>.iqualify.com/login` |
+ | Test Environment: `https://<yourorg>.iqualify.io/login` |
> [!NOTE]
> These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [iQualify LMS Client support team](https://www.iqualify.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
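For context on what **SP** initiated mode means mechanically: the service provider sends the browser to the identity provider's Login URL with a deflated, Base64-encoded `SAMLRequest` query parameter (the SAML HTTP-Redirect binding). The sketch below is illustrative only; iQualify constructs the real request, and the tenant ID and request XML here are placeholders:

```python
import base64
import urllib.parse
import zlib

# Placeholder Login URL -- use the value copied from the Azure portal.
idp_login_url = "https://login.microsoftonline.com/<tenant-id>/saml2"

# Minimal placeholder AuthnRequest; the SP generates the real one.
authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example" Version="2.0" IssueInstant="2022-06-29T00:00:00Z"/>'
)

# HTTP-Redirect binding: raw-DEFLATE, then Base64, then URL-encode.
raw_deflate = zlib.compress(authn_request.encode())[2:-4]  # strip zlib wrapper
saml_request = base64.b64encode(raw_deflate).decode()
redirect_url = idp_login_url + "?" + urllib.parse.urlencode(
    {"SAMLRequest": saml_request}
)
print(redirect_url)
```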
-6. Your iQualify LMS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open **User Attributes** dialog.
+1. Your iQualify LMS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the **User Attributes** dialog.
- ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
+ ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png "Attributes")
-7. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
+1. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using the **Edit** icon or add the claims by using **Add new claim** to configure the SAML token attributes as shown in the image above, and perform the following steps:
| Name | Source Attribute |
| ---- | ---------------- |
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
+ ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png "Claims")
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
+ ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png "Values")
b. In the **Name** textbox, type the attribute name shown for that row.
> [!Note]
> The **person_id** attribute is **Optional**.
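To make the claim mapping concrete, here is roughly what the attribute statement in the resulting SAML assertion could look like. The claim names other than `person_id` are illustrative stand-ins; use the exact names shown in the table above:

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"

# Illustrative claim values for the B.Simon test user; only person_id is
# named (and optional) in this tutorial -- the rest are assumptions.
claims = {
    "email": "B.Simon@contoso.com",
    "first_name": "B.",
    "last_name": "Simon",
    "person_id": "12345",
}

statement = ET.Element(f"{{{NS}}}AttributeStatement")
for name, value in claims.items():
    attribute = ET.SubElement(statement, f"{{{NS}}}Attribute", Name=name)
    ET.SubElement(attribute, f"{{{NS}}}AttributeValue").text = value

assertion_fragment = ET.tostring(statement, encoding="unicode")
print(assertion_fragment)
```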
-8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up iQualify LMS** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
- ![The Certificate download link](common/certificatebase64.png)
+### Create an Azure AD test user
-9. On the **Set up iQualify LMS** section, copy the appropriate URL(s) as per your requirement.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- a. Login URL
+### Assign the Azure AD test user
- b. Azure AD Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to iQualify LMS.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **iQualify LMS**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure iQualify LMS Single Sign-On
+## Configure iQualify LMS SSO
1. Open a new browser window, and then sign in to your iQualify environment as an administrator.

1. Once you are logged in, click on your avatar at the top right, then click on **Account settings**.
- ![Account settings](./media/iqualify-tutorial/setting1.png)
+ ![Screenshot shows the Account settings.](./media/iqualify-tutorial/settings.png "Account")
1. In the account settings area, click on the ribbon menu on the left and click on **INTEGRATIONS**
- ![INTEGRATIONS](./media/iqualify-tutorial/setting2.png)
+ ![Screenshot shows integration area of the application.](./media/iqualify-tutorial/menu.png "Profile")
1. Under INTEGRATIONS, click on the **SAML** icon.
- ![SAML icon](./media/iqualify-tutorial/setting3.png)
+ ![Screenshot shows the SAML icon under integrations.](./media/iqualify-tutorial/icon.png "Integration")
1. In the **SAML Authentication Settings** dialog box, perform the following steps:
- ![SAML Authentication Settings](./media/iqualify-tutorial/setting4.png)
+ ![Screenshot shows the SAML Authentication Settings](./media/iqualify-tutorial/details.png "Authentication")
a. In the **SAML SINGLE SIGN-ON SERVICE URL** box, paste the **Login URL** value copied from the Azure AD application configuration window.
f. Click **UPDATE**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to iQualify LMS.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **iQualify LMS**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **iQualify LMS**.
-
- ![The iQualify LMS link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create iQualify LMS test user

In this section, a user called Britta Simon is created in iQualify LMS. iQualify LMS supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in iQualify LMS, a new one is created after authentication.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration using My Apps.
-When you click the iQualify LMS tile in the Access Panel, you should get login page of your iQualify LMS application.
+When you click the iQualify LMS tile in My Apps, you should get the login page of your iQualify LMS application.
- ![login page](./media/iqualify-tutorial/login.png)
+ ![Screenshot shows the login page of application.](./media/iqualify-tutorial/login.png "Configure")
Click the **Sign in with Azure AD** button, and you should be automatically signed in to your iQualify LMS application.
-For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional Resources
--- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure iQualify LMS you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Novatus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/novatus-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Novatus | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Novatus'
description: Learn how to configure single sign-on between Azure Active Directory and Novatus.
Previously updated : 03/05/2019 Last updated : 06/29/2022
-# Tutorial: Azure Active Directory integration with Novatus
+# Tutorial: Azure AD SSO integration with Novatus
-In this tutorial, you learn how to integrate Novatus with Azure Active Directory (Azure AD).
-Integrating Novatus with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Novatus with Azure Active Directory (Azure AD). When you integrate Novatus with Azure AD, you can:
-* You can control in Azure AD who has access to Novatus.
-* You can enable your users to be automatically signed-in to Novatus (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Novatus.
+* Enable your users to be automatically signed-in to Novatus with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Novatus, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Novatus single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Novatus single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Novatus supports **SP** initiated SSO
+* Novatus supports **SP** initiated SSO.
-* Novatus supports **Just In Time** user provisioning
+* Novatus supports **Just In Time** user provisioning.
-## Adding Novatus from the gallery
+## Add Novatus from the gallery
To configure the integration of Novatus into Azure AD, you need to add Novatus from the gallery to your list of managed SaaS apps.
-**To add Novatus from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Novatus**, select **Novatus** from result panel then click **Add** button to add the application.
-
- ![Novatus in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Novatus** in the search box.
+1. Select **Novatus** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Novatus based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Novatus needs to be established.
+## Configure and test Azure AD SSO for Novatus
-To configure and test Azure AD single sign-on with Novatus, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Novatus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Novatus.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Novatus Single Sign-On](#configure-novatus-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Novatus test user](#create-novatus-test-user)** - to have a counterpart of Britta Simon in Novatus that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Novatus, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 2. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Novatus SSO](#configure-novatus-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Novatus test user](#create-novatus-test-user)** - to have a counterpart of B.Simon in Novatus that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Novatus, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Novatus** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Novatus** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing the edit Basic SAML Configuration screen.](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Novatus Domain and URLs single sign-on information](common/sp-signonurl.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://sso.novatuscontracts.com/<companyname>`
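The `<companyname>` placeholder in the pattern above is tenant-specific. As an illustration only (the company name `contoso` is hypothetical, not a real tenant), the pattern can be filled in like this:

```python
# Sketch only: substitute a company name into the documented URL pattern.
BASE_URL = "https://sso.novatuscontracts.com"

def novatus_sign_on_url(company_name: str) -> str:
    """Fill the documented pattern https://sso.novatuscontracts.com/<companyname>."""
    if not company_name or "/" in company_name:
        raise ValueError("company name must be a single, non-empty path segment")
    return f"{BASE_URL}/{company_name}"

print(novatus_sign_on_url("contoso"))  # https://sso.novatuscontracts.com/contoso
```

Use the value your Novatus representative gives you; the validation here is only a guard against pasting a full URL into the placeholder.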
To configure Azure AD single sign-on with Novatus, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Novatus Single Sign-On
-
-To configure single sign-on on **Novatus** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Novatus support team](mailto:jvinci@novatusinc.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+### Create an Azure AD test user
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
2. Select **New user** at the top of the screen.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+3. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 2. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 3. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 4. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Novatus.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Novatus**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Novatus.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
2. In the applications list, select **Novatus**.
+3. In the app's overview page, find the **Manage** section and select **Users and groups**.
+4. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+6. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+7. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Novatus link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Novatus SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Novatus** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Novatus support team](mailto:jvinci@novatusinc.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
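Before sending the certificate, it can help to note its thumbprint so the receiving team can confirm they got the right file. This is an optional, illustrative sketch and not part of the official steps; the bytes below are placeholders, not a real certificate:

```python
import base64
import hashlib
import re

def cert_thumbprint(pem_text: str) -> str:
    """SHA-1 thumbprint (hex) of a Base64/PEM certificate body."""
    # Strip the BEGIN/END markers and all whitespace, leaving pure Base64.
    b64 = re.sub(r"-----(BEGIN|END) CERTIFICATE-----|\s", "", pem_text)
    der = base64.b64decode(b64)
    return hashlib.sha1(der).hexdigest().upper()

# Placeholder bytes stand in for a real downloaded Certificate (Base64).
demo_pem = ("-----BEGIN CERTIFICATE-----\n"
            + base64.b64encode(b"placeholder-der-bytes").decode()
            + "\n-----END CERTIFICATE-----")
print(cert_thumbprint(demo_pem))
```

Comparing thumbprints out-of-band is a common way to catch truncated or mis-pasted certificates before debugging a failed SAML handshake.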
### Create Novatus test user
In this section, a user called Britta Simon is created in Novatus. Novatus supports just-in-time user provisioning, which is enabled by default. If a user doesn't already exist in Novatus, a new one is created after authentication.
> [!NOTE]
> If you need to create a user manually, you need to contact the [Novatus support team](mailto:jvinci@novatusinc.com).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Novatus tile in the Access Panel, you should be automatically signed in to the Novatus for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Novatus Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the Novatus Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Novatus tile in My Apps, you'll be redirected to the Novatus Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Novatus you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Ns1 Sso Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ns1-sso-azure-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with NS1 SSO for Azure | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with NS1 SSO for Azure'
description: Learn how to configure single sign-on between Azure Active Directory and NS1 SSO for Azure.
Previously updated : 02/12/2020 Last updated : 06/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with NS1 SSO for Azure
+# Tutorial: Azure AD SSO integration with NS1 SSO for Azure
In this tutorial, you'll learn how to integrate NS1 SSO for Azure with Azure Active Directory (Azure AD). When you integrate NS1 SSO for Azure with Azure AD, you can:
In this tutorial, you'll learn how to integrate NS1 SSO for Azure with Azure Act
* Enable your users to be automatically signed in to NS1 SSO for Azure with their Azure AD accounts. * Manage your accounts in one central location, the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* NS1 SSO for Azure single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.

* NS1 SSO for Azure supports SP and IDP initiated SSO.
-* After you configure NS1 SSO for Azure, you can enforce session control. This protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from conditional access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
## Add NS1 SSO for Azure from the gallery

To configure the integration of NS1 SSO for Azure into Azure AD, you need to add NS1 SSO for Azure from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Go to **Enterprise Applications**, and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **NS1 SSO for Azure** in the search box.
1. Select **NS1 SSO for Azure** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for NS1 SSO for Azure
+## Configure and test Azure AD SSO for NS1 SSO for Azure
Configure and test Azure AD SSO with NS1 SSO for Azure by using a test user called **B.Simon**. For SSO to work, establish a linked relationship between an Azure AD user and the related user in NS1 SSO for Azure.
Here are the general steps to configure and test Azure AD SSO with NS1 SSO for A
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **NS1 SSO for Azure** application integration page, find the **Manage** section. Select **single sign-on**.
+1. In the Azure portal, on the **NS1 SSO for Azure** application integration page, find the **Manage** section. Select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Screenshot of Set up single sign-on with SAML page, with pencil icon highlighted](common/edit-urls.png)
+ ![Screenshot of set up single sign-on with SAML page, with pencil icon highlighted.](common/edit-urls.png)
-1. In the **Basic SAML Configuration** section, if you want to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. In the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type the following URL: `https://api.nsone.net/saml/metadata`
- b. In the **Reply URL** text box, type a URL that uses the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
   `https://api.nsone.net/saml/sso/<ssoid>`

1. Select **Set additional URLs**, and perform the following step if you want to configure the application in **SP** initiated mode:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. The NS1 SSO for Azure application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes & Claims** section on the application integration page. On the **Set up Single Sign-On with SAML** page, select the pencil icon to open the **User Attributes** dialog box.
- ![Screenshot of User Attributes & Claims section, with pencil icon highlighted](./media/ns1-sso-for-azure-tutorial/attribute-edit-option.png)
+ ![Screenshot of User Attributes & Claims section, with pencil icon highlighted.](./media/ns1-sso-for-azure-tutorial/attribute-edit-option.png)
1. Select the attribute name to edit the claim.
- ![Screenshot of User Attributes & Claims section, with attribute name highlighted](./media/ns1-sso-for-azure-tutorial/attribute-claim-edit.png)
+ ![Screenshot of User Attributes & Claims section, with attribute name highlighted.](./media/ns1-sso-for-azure-tutorial/attribute-claim-edit.png)
1. Select **Transformation**.
- ![Screenshot of Manage claim section, with Transformation highlighted](./media/ns1-sso-for-azure-tutorial/prefix-edit.png)
+ ![Screenshot of Manage claim section, with Transformation highlighted.](./media/ns1-sso-for-azure-tutorial/prefix-edit.png)
1. In the **Manage transformation** section, perform the following steps:
- ![Screenshot of Manage transformation section, with various fields highlighted](./media/ns1-sso-for-azure-tutorial/prefix-added.png)
+ ![Screenshot of Manage transformation section, with various fields highlighted.](./media/ns1-sso-for-azure-tutorial/prefix-added.png)
1. Select **ExtractMailPrefix()** as **Transformation**.
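For reference, the **ExtractMailPrefix()** transformation emits the portion of the source attribute before the `@` sign. A minimal sketch of that behavior (not the actual Azure AD implementation):

```python
def extract_mail_prefix(value: str) -> str:
    """Mimic the ExtractMailPrefix() claim transformation: keep the
    part of an email address or UPN before the '@'."""
    return value.split("@", 1)[0]

print(extract_mail_prefix("B.Simon@contoso.com"))  # B.Simon
```

So a user signing in as `B.Simon@contoso.com` is presented to NS1 as `B.Simon`, which is the format the application expects.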
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button. This copies the **App Federation Metadata Url** and saves it on your computer.
- ![Screenshot of the SAML Signing Certificate, with the copy button highlighted](common/copy-metadataurl.png)
+ ![Screenshot of the SAML Signing Certificate, with the copy button highlighted.](common/copy-metadataurl.png)
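The App Federation Metadata Url serves a SAML metadata document from which the service provider can read, among other things, the signing certificate. The following sketch parses an abbreviated, hypothetical metadata document; the tenant ID and certificate value are placeholders, not real data:

```python
import xml.etree.ElementTree as ET

# Abbreviated, hypothetical federation metadata; the real document is
# served at the App Federation Metadata Url copied in the step above.
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
  entityID="https://sts.windows.net/TENANT-ID/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...PLACEHOLDER...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata",
      "ds": "http://www.w3.org/2000/09/xmldsig#"}

root = ET.fromstring(METADATA)
# Locate the signing key descriptor, then the certificate inside it.
key_descriptor = root.find(".//md:KeyDescriptor[@use='signing']", NS)
certificate = key_descriptor.find(".//ds:X509Certificate", NS)
print(certificate.text)  # the Base64 signing certificate body
```

Pointing the SP at the metadata URL (rather than pasting the certificate by hand) means certificate rollovers are picked up automatically.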
### Create an Azure AD test user
In this section, you enable B.Simon to use Azure single sign-on by granting acce
1. In the Azure portal, select **Enterprise Applications** > **All applications**.
1. In the applications list, select **NS1 SSO for Azure**.
1. In the app's overview page, find the **Manage** section, and select **Users and groups**.
- ![Screenshot of the Manage section, with Users and groups highlighted](common/users-groups-blade.png)
1. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
- ![Screenshot of Users and groups page, with Add user highlighted](common/add-assign-user.png)
1. In the **Users and groups** dialog box, select **B.Simon** from the users list. Then choose the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then choose the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog box, select **Assign**.
In this section, you create a user called B.Simon in NS1 SSO for Azure. Work wit
## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
-When you select the NS1 SSO for Azure tile in Access Panel, you should be automatically signed in to the NS1 SSO for Azure for which you set up SSO. For more information, see [Introduction to Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the NS1 SSO for Azure Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to the NS1 SSO for Azure Sign-on URL directly and initiate the login flow from there.
-- [Tutorials for integrating SaaS applications with Azure Active Directory](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the NS1 SSO for Azure for which you set up the SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the NS1 SSO for Azure tile in My Apps, if configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you'll be automatically signed in to the NS1 SSO for Azure for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try NS1 SSO for Azure with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure NS1 SSO for Azure you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Sevone Network Monitoring System Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sevone-network-monitoring-system-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with SevOne Network Monitoring System (NMS)'
+description: Learn how to configure single sign-on between Azure Active Directory and SevOne Network Monitoring System (NMS).
+ Last updated : 06/28/2022
+# Tutorial: Azure AD SSO integration with SevOne Network Monitoring System (NMS)
+
+In this tutorial, you'll learn how to integrate SevOne Network Monitoring System (NMS) with Azure Active Directory (Azure AD). When you integrate SevOne Network Monitoring System (NMS) with Azure AD, you can:
+
+* Control in Azure AD who has access to SevOne Network Monitoring System (NMS).
+* Enable your users to be automatically signed-in to SevOne Network Monitoring System (NMS) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SevOne Network Monitoring System (NMS) single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* SevOne Network Monitoring System (NMS) supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add SevOne Network Monitoring System (NMS) from the gallery
+
+To configure the integration of SevOne Network Monitoring System (NMS) into Azure AD, you need to add SevOne Network Monitoring System (NMS) from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SevOne Network Monitoring System (NMS)** in the search box.
+1. Select **SevOne Network Monitoring System (NMS)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for SevOne Network Monitoring System (NMS)
+
+Configure and test Azure AD SSO with SevOne Network Monitoring System (NMS) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at SevOne Network Monitoring System (NMS).
+
+To configure and test Azure AD SSO with SevOne Network Monitoring System (NMS), perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SevOne Network Monitoring System (NMS) SSO](#configure-sevone-network-monitoring-system-nms-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create SevOne Network Monitoring System (NMS) test user](#create-sevone-network-monitoring-system-nms-test-user)** - to have a counterpart of B.Simon in SevOne Network Monitoring System (NMS) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **SevOne Network Monitoring System (NMS)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://azwcusehnmspas01.corp.microsoft.com/sso/callback`
+
+ d. In the **Relay State** text box, type the value:
+ `sevonenms`
+
+1. The SevOne Network Monitoring System (NMS) application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the SevOne Network Monitoring System (NMS) application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | displayname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up SevOne Network Monitoring System (NMS)** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
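In the SP-initiated flow, the service provider redirects the browser to the Login URL copied above, carrying a `SAMLRequest` plus the **Relay State** value from the Basic SAML Configuration. A hedged sketch of the standard HTTP-Redirect binding; the login URL (with a `TENANT-ID` placeholder) and the AuthnRequest below are illustrative, not real values:

```python
import base64
import urllib.parse
import zlib

# Hypothetical values: copy the real Login URL from the
# "Set up SevOne Network Monitoring System (NMS)" section.
LOGIN_URL = "https://login.microsoftonline.com/TENANT-ID/saml2"
RELAY_STATE = "sevonenms"  # the Relay State value configured above

def sp_redirect(authn_request_xml: str) -> str:
    """Build an HTTP-Redirect binding URL: raw-DEFLATE, Base64, URL-encode."""
    # Strip the 2-byte zlib header and 4-byte checksum to get raw DEFLATE.
    deflated = zlib.compress(authn_request_xml.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    query = urllib.parse.urlencode({"SAMLRequest": saml_request,
                                    "RelayState": RELAY_STATE})
    return f"{LOGIN_URL}?{query}"

url = sp_redirect("<samlp:AuthnRequest ID='_1'/>")  # placeholder request
print(url)
```

The `RelayState` parameter rides along unmodified and is echoed back to the SP after authentication, which is how the application knows where to land the user.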
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SevOne Network Monitoring System (NMS).
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SevOne Network Monitoring System (NMS)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SevOne Network Monitoring System (NMS) SSO
+
+To configure single sign-on on the **SevOne Network Monitoring System (NMS)** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [SevOne Network Monitoring System (NMS) support team](mailto:support@sevone.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create SevOne Network Monitoring System (NMS) test user
+
+In this section, you create a user called Britta Simon at SevOne Network Monitoring System (NMS). Work with [SevOne Network Monitoring System (NMS) support team](mailto:support@sevone.com) to add the users in the SevOne Network Monitoring System (NMS) platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the SevOne Network Monitoring System (NMS) Sign-On URL where you can initiate the login flow.
+
+* Go to the SevOne Network Monitoring System (NMS) Sign-On URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the SevOne Network Monitoring System (NMS) tile in My Apps, you'll be redirected to the SevOne Network Monitoring System (NMS) Sign-On URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure SevOne Network Monitoring System (NMS) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Weekdone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/weekdone-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Weekdone | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Weekdone'
description: Learn how to configure single sign-on between Azure Active Directory and Weekdone.
Previously updated : 03/28/2019 Last updated : 06/28/2022
-# Tutorial: Azure Active Directory integration with Weekdone
+# Tutorial: Azure AD SSO integration with Weekdone
-In this tutorial, you learn how to integrate Weekdone with Azure Active Directory (Azure AD).
-Integrating Weekdone with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Weekdone with Azure Active Directory (Azure AD). When you integrate Weekdone with Azure AD, you can:
-* You can control in Azure AD who has access to Weekdone.
-* You can enable your users to be automatically signed-in to Weekdone (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Weekdone.
+* Enable your users to be automatically signed-in to Weekdone with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Weekdone, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Weekdone single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Weekdone single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Weekdone supports **SP** and **IDP** initiated SSO
+* Weekdone supports **SP** and **IDP** initiated SSO.
-* Weekdone supports **Just In Time** user provisioning
+* Weekdone supports **Just In Time** user provisioning.
-## Adding Weekdone from the gallery
+## Add Weekdone from the gallery
To configure the integration of Weekdone into Azure AD, you need to add Weekdone from the gallery to your list of managed SaaS apps.
-**To add Weekdone from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Weekdone**, select **Weekdone** from result panel then click **Add** button to add the application.
-
- ![Weekdone in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Weekdone based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Weekdone needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Weekdone** in the search box.
+1. Select **Weekdone** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Weekdone, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Weekdone
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Weekdone Single Sign-On](#configure-weekdone-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Weekdone test user](#create-weekdone-test-user)** - to have a counterpart of Britta Simon in Weekdone that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Weekdone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Weekdone.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Weekdone, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Weekdone SSO](#configure-weekdone-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Weekdone test user](#create-weekdone-test-user)** - to have a counterpart of B.Simon in Weekdone that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Weekdone, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Weekdone** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Weekdone** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
-
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenant>/metadata`
To configure Azure AD single sign-on with Weekdone, perform the following steps:
b. In the **Reply URL** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenantname>`
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://weekdone.com/a/<tenantname>`
To configure Azure AD single sign-on with Weekdone, perform the following steps:
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
7. On the **Set up Weekdone** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Weekdone Single Sign-On
-
-To configure single sign-on on **Weekdone** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Weekdone support team](mailto:hello@weekdone.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Weekdone.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Weekdone**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Weekdone.
-2. In the applications list, select **Weekdone**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Weekdone**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Weekdone link in the Applications list](common/all-applications.png)
+## Configure Weekdone SSO
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Weekdone** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Weekdone support team](mailto:hello@weekdone.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Weekdone test user
In this section, a user called Britta Simon is created in Weekdone. Weekdone sup
>[!NOTE] >If you need to create a user manually, you need to contact the [Weekdone Client support team](mailto:hello@weekdone.com).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Weekdone Sign-On URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Weekdone Sign-On URL directly and initiate the login flow from there.
-When you click the Weekdone tile in the Access Panel, you should be automatically signed in to the Weekdone for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Weekdone for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Weekdone tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you're automatically signed in to the Weekdone for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Weekdone you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
When the feature has been registered, refresh the registration of the *Microsoft
az provider register --namespace Microsoft.ContainerService ```
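Registration can take a few minutes. As a sketch of how to confirm the provider state before proceeding (assuming the standard `az provider show` output shape):

```azurecli-interactive
# Confirm the resource provider registration completed; prints "Registered" when done
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
```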
-## Create an AKS cluster with API Server VNet Integration using Managed VNet
+## Create an AKS Private cluster with API Server VNet Integration using Managed VNet
AKS clusters with API Server VNet Integration can be configured in either managed VNet or bring-your-own VNet mode.
az aks create -n <cluster-name> \
Where `--enable-private-cluster` is a mandatory flag for a private cluster, and `--enable-apiserver-vnet-integration` configures API Server VNet integration for Managed VNet mode.
-## Create an AKS cluster with API Server VNet Integration using bring-your-own VNet
+## Create an AKS Private cluster with API Server VNet Integration using bring-your-own VNet
When using bring-your-own VNet, an API server subnet must be created and delegated to `Microsoft.ContainerService/managedClusters`. This grants the AKS service permission to inject the API server pods and internal load balancer into that subnet. The subnet may not be used for any other workloads, but it may be used for multiple AKS clusters located in the same virtual network. An AKS cluster requires 2 to 7 IP addresses depending on cluster scale. The minimum supported API server subnet size is a /28.
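As a sketch of the delegation step described above (the resource group, VNet, and subnet names, and the /28 address prefix, are illustrative placeholders, not values from this article):

```azurecli-interactive
# Create the API server subnet and delegate it to the AKS service
# (group, VNet, subnet names, and address prefix are illustrative placeholders)
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name apiserver-subnet \
    --address-prefixes 10.225.0.0/28 \
    --delegations Microsoft.ContainerService/managedClusters
```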
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Title: Scale an Azure Kubernetes Service (AKS) cluster
description: Learn how to scale the number of nodes in an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/16/2020 Last updated : 06/29/2022 # Scale the node count in an Azure Kubernetes Service (AKS) cluster
-If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked `Ready` by the Kubernetes cluster before pods are scheduled on them.
+If the resource needs of your applications change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked **Ready** by the Kubernetes cluster before pods are scheduled on them.
## Scale the cluster nodes
+> [!NOTE]
+> Removing nodes from a node pool using the kubectl command is not supported. Doing so can create scaling issues with your AKS cluster.
+ ### [Azure CLI](#tab/azure-cli) First, get the *name* of your node pool using the [az aks show][az-aks-show] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
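A minimal sketch of the lookup described above, followed by a scale operation (the pool name `nodepool1` and the target count of 3 are illustrative assumptions):

```azurecli-interactive
# List the node pool names for the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query "agentPoolProfiles[].name" --output tsv

# Scale the chosen pool to the desired node count (values below are illustrative)
az aks scale --resource-group myResourceGroup --name myAKSCluster \
    --nodepool-name nodepool1 --node-count 3
```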
You can also autoscale `User` node pools to 0 nodes, by setting the `--min-count
To scale a user pool to 0, you can use [Update-AzAksNodePool][update-azaksnodepool] as an alternative to the above `Set-AzAksCluster` command, and set 0 as your node count.
-```azurepowershell-interactive
+```azurepowershell-interactive
Update-AzAksNodePool -Name <your node pool name> -ClusterName myAKSCluster -ResourceGroupName myResourceGroup -NodeCount 0 ```
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
If you're using Azure Firewall like on this [example](limit-egress-traffic.md#re
If you are using cluster autoscaler, when you start your cluster back up your current node count may not be between the min and max range values you set. This behavior is expected. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
+## Windows containers have connectivity issues after a cluster upgrade operation
+
+For older clusters with Calico network policies applied before Windows Calico support, Windows Calico will be enabled by default after a cluster upgrade. After Windows Calico is enabled, you may have connectivity issues if your Calico network policies deny ingress/egress. You can mitigate this issue by creating a new Calico policy on the cluster that allows all ingress/egress for Windows using either PodSelector or IPBlock.
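As a minimal sketch of such a mitigation using a pod selector (the policy name, namespace, and pod label below are illustrative assumptions — select whatever labels identify your Windows workloads):

```azurecli-interactive
# Apply a permissive policy allowing all ingress/egress for the selected pods
# (name, namespace, and label are illustrative placeholders)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-windows
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-windows-workload
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}
  egress:
    - {}
EOF
```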
+ ## Azure Storage and AKS Troubleshooting ### Failure when setting uid and `GID` in mountOptions for Azure Disk
As a result, to mitigate this issue you can:
AKS is investigating the capability to mutate active labels on a node pool to improve this mitigation. - <!-- LINKS - internal --> [view-master-logs]: monitor-aks-reference.md#resource-logs [cluster-autoscaler]: cluster-autoscaler.md
aks Uptime Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/uptime-sla.md
Title: Azure Kubernetes Service (AKS) with Uptime SLA
description: Learn about the optional Uptime SLA offering for the Azure Kubernetes Service (AKS) API Server. Previously updated : 01/08/2021 Last updated : 06/29/2022 # Azure Kubernetes Service (AKS) Uptime SLA
-Uptime SLA is a tier to enable a financially backed, higher SLA for an AKS cluster. Clusters with Uptime SLA, also regarded as Paid tier in AKS REST APIs, come with greater amount of control plane resources and automatically scale to meet the load of your cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zones][availability-zones] and 99.9% of availability for clusters that don't use Availability Zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
+Uptime SLA is a tier to enable a financially backed, higher SLA for an AKS cluster. Clusters with Uptime SLA, also referred to as [Paid SKU tier][paid-sku-tier] in AKS REST APIs, come with a greater amount of control plane resources and automatically scale to meet the load of your cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zones][availability-zones], and 99.9% availability for clusters that don't use Availability Zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
-AKS recommends use of Uptime SLA in production workloads to ensure availability of control plane components. Clusters on free tier by contrast come with fewer replicas and limited resources for the control plane and are not suitable for production workloads.
+AKS recommends using Uptime SLA for production workloads to ensure availability of control plane components. By contrast, clusters on the **Free SKU tier** support fewer replicas and limited resources for the control plane and are not suitable for production workloads.
-Customers can still create unlimited number of free clusters with a service level objective (SLO) of 99.5% and opt for the preferred SLO.
+You can still create an unlimited number of free clusters with a service level objective (SLO) of 99.5% and opt for the preferred SLO.
> [!IMPORTANT] > For clusters with egress lockdown, see [limit egress traffic](limit-egress-traffic.md) to open appropriate ports.
Uptime SLA is a paid feature and is enabled per cluster. Uptime SLA pricing is d
## Before you begin
-* Install the [Azure CLI](/cli/azure/install-azure-cli) version 2.8.0 or later
+[Azure CLI](/cli/azure/install-azure-cli) version 2.8.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Creating a new cluster with Uptime SLA
-To create a new cluster with the Uptime SLA, you use the Azure CLI.
+To create a new cluster with the Uptime SLA, you use the Azure CLI. Create a new cluster in an existing resource group or create a new one. To learn more about resource groups and working with them, see [managing resource groups using the Azure CLI][manage-resource-group-cli].
-The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node and enables Uptime SLA. This operation takes several minutes to complete:
```azurecli-interactive
-# Create a resource group
-az group create --name myResourceGroup --location eastus
-```
-
-Use the [`az aks create`][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This operation takes several minutes to complete:
-
-```azurecli-interactive
-# Create an AKS cluster with uptime SLA
az aks create --resource-group myResourceGroup --name myAKSCluster --uptime-sla --node-count 1 ```
-After a few minutes, the command completes and returns JSON-formatted information about the cluster. The following JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
+After a few minutes, the command completes and returns JSON-formatted information about the cluster. The following example output shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
```output },
After a few minutes, the command completes and returns JSON-formatted informatio
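To confirm the tier later without rereading the full JSON, a quick check is possible (a sketch, assuming the `sku.tier` field shown in the output above):

```azurecli-interactive
# Print only the SKU tier; "Paid" indicates Uptime SLA is enabled
az aks show --resource-group myResourceGroup --name myAKSCluster --query "sku.tier" --output tsv
```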
## Modify an existing cluster to use Uptime SLA
-You can optionally update your existing clusters to use Uptime SLA.
-
-If you created an AKS cluster with the previous steps, delete the resource group:
-
-```azurecli-interactive
-# Delete the existing cluster by deleting the resource group
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-Create a new resource group:
-
-```azurecli-interactive
-# Create a resource group
-az group create --name myResourceGroup --location eastus
-```
-
-Create a new cluster, and don't use Uptime SLA:
+You can update your existing clusters to use Uptime SLA.
-```azurecli-interactive
-# Create a new cluster without uptime SLA
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
-```
+> [!NOTE]
+> Updating your cluster to enable the Uptime SLA does not disrupt its normal operation or impact its availability.
-Use the [`az aks update`][az-aks-update] command to update the existing cluster:
+Use the [az aks update][az-aks-update] command to update the existing cluster:
```azurecli-interactive
# Update an existing cluster to use Uptime SLA
az aks update --resource-group myResourceGroup --name myAKSCluster --uptime-sla
```
-The following JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
+This process takes several minutes to complete. When finished, the following example JSON snippet shows the paid tier for the SKU, indicating your cluster is enabled with Uptime SLA:
```output },
The following JSON snippet shows the paid tier for the SKU, indicating your clus
## Opt out of Uptime SLA
-You can update your cluster to change to the free tier and opt out of Uptime SLA.
+At any time you can opt out of using the Uptime SLA by updating your cluster to change it back to the free tier.
-```azurecli-interactive
-# Update an existing cluster to opt out of Uptime SLA
- az aks update --resource-group myResourceGroup --name myAKSCluster --no-uptime-sla
-```
-
-## Clean up
+> [!NOTE]
+> Updating your cluster to stop using the Uptime SLA does not disrupt its normal operation or impact its availability.
-To avoid charges, clean up any resources you created. To delete the cluster, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
+Use the [az aks update][az-aks-update] command to update the existing cluster:
```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
+ az aks update --resource-group myResourceGroup --name myAKSCluster --no-uptime-sla
```
-## Next steps
+This process takes several minutes to complete.
-Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.
+## Next steps
-Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
+- Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.
+- Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
<!-- LINKS - External --> [azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
<!-- LINKS - Internal --> [vm-skus]: ../virtual-machines/sizes.md
+[paid-sku-tier]: /rest/api/aks/managed-clusters/create-or-update#managedclusterskutier
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[manage-resource-group-cli]: /azure-resource-manager/management/manage-resource-groups-cli
[faq]: ./faq.md [availability-zones]: ./availability-zones.md [az-aks-create]: /cli/azure/aks?#az_aks_create
Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
[az-aks-update]: /cli/azure/aks#az_aks_update [az-group-delete]: /cli/azure/group#az_group_delete [private-clusters]: private-clusters.md
+[install-azure-cli]: /cli/azure/install-azure-cli
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Both implementations use Linux *IPTables* to enforce the specified policies. Pol
| Capability | Azure | Calico | ||-|--|
-| Supported platforms | Linux | Linux, Windows Server 2019 (preview) |
-| Supported networking options | Azure CNI | Azure CNI (Windows Server 2019 and Linux) and kubenet (Linux) |
+| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
+| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) |
| Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. |
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAM
Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools.
-If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with that meet the [Windows Server password requirements][windows-server-password]. To use Calico with Windows node pools, you also need to register the `Microsoft.ContainerService/EnableAKSWindowsCalico`.
-
-Register the `EnableAKSWindowsCalico` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"
-```
-
- You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password].
> [!IMPORTANT] > At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default. > > For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2.
-Calico networking policies with Windows nodes is currently in preview.
-- Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it to WINDOWS_USERNAME for use in a later command (remember that the commands in this article are entered into a BASH shell). ```azurecli-interactive
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md
Title: Migrate .NET apps to Azure App Service
-description: Discover .NET migration resources available to Azure App Service.
+description: A collection of .NET migration resources available to Azure App Service.
Previously updated : 03/29/2021 Last updated : 06/28/2022 ms.devlang: csharp
Azure App Service provides easy-to-use tools to quickly discover on-premises .NE
These tools are developed to support different kinds of scenarios, focused on discovery, assessment, and migration. Following is a list of .NET migration tools and use cases.
-## Migrate from multiple servers at-scale (preview)
+## Migrate from multiple servers at-scale
-<!-- Intent: discover how to assess and migrate at scale. -->
+> [!NOTE]
+> [Learn how to migrate .NET apps to App Service using the .NET migration tutorial.](../migrate/tutorial-migrate-webapps.md)
+>
Azure Migrate recently announced at-scale, agentless discovery and assessment of ASP.NET web apps. You can now easily discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources are found below.
+Once you have successfully assessed readiness, you should proceed with migration of ASP.NET web apps to Azure App Services.
+
+There are existing tools that enable migration of a standalone ASP.NET web app, or of multiple ASP.NET web apps hosted on a single IIS server, as explained in [Migrate .NET apps to Azure App Service](../migrate/tutorial-migrate-webapps.md). With the introduction of the at-scale (bulk) migration feature integrated with Azure Migrate, you can now migrate multiple ASP.NET applications hosted on multiple on-premises IIS servers.
+
+Azure Migrate provides at-scale, agentless discovery and assessment of ASP.NET web apps. You can discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine web app migration readiness, migration blockers, remediation guidance, recommended SKU, and hosting costs. At-scale migration resources are found below.
+
+Bulk migration provides the following key capabilities:
+
+- Bulk migration of ASP.NET web apps to Azure App Service multitenant or App Service Environment
+- Migrate ASP.NET web apps assessed as "Ready" and "Ready with Conditions"
+- Migrate up to five App Service plans (and associated web apps) as part of a single end-to-end migration flow
+- Ability to change the suggested SKU for the target App Service plan (for example, change a suggested Pv3 SKU to a Standard Pv2 SKU)
+- Ability to change the suggested web app packing density for the target App Service plan (add or remove web apps associated with an App Service plan)
+- Change the target name for App Service plans and/or web apps
+- Bulk edit migration settings and attributes
+- Download a CSV with details of the target web app and App Service plan names
+- Track migration progress using the ARM template deployment experience
+ ### At-scale migration resources | How-tos |
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
| [Create an Azure App Service assessment](../migrate/how-to-create-azure-app-service-assessment.md) | | [Tutorial to assess web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md) | | [Discover software inventory on on-premises servers with Azure Migrate](../migrate/how-to-discover-applications.md) |
+| [Migrate .NET apps to App Service](../migrate/tutorial-migrate-webapps.md) |
| **Blog** | | [Discover and assess ASP.NET apps at-scale with Azure Migrate](https://azure.microsoft.com/blog/discover-and-assess-aspnet-apps-atscale-with-azure-migrate/) | | **FAQ** |
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
## Migrate from an IIS server
-<!-- Intent: discover how to assess and migrate from a single IIS server -->
- You can migrate ASP.NET web apps from a single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service). ## ASP.NET web app migration
-<!-- Intent: migrate a single web app -->
Using App Service Migration Assistant, you can [migrate your standalone on-premises ASP.NET web app onto Azure App Service](https://www.youtube.com/watch?v=9LBUmkUhmXU). App Service Migration Assistant is designed to simplify your journey to the cloud through a free, simple, and fast solution to migrate applications from on-premises to the cloud. For more information about the migration assistant tool, see the [FAQ](https://github.com/Azure/App-Service-Migration-Assistant/wiki).
app-service App Service Migration Assess Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-assess-net.md
+
+ Title: Assess .NET apps
+description: Assess .NET web apps before migrating to Azure App Service
+++ Last updated : 06/28/2022+
+ms.devlang: csharp
+++
+# At-scale assessment of .NET web apps
+
+Once you've discovered ASP.NET web apps, proceed to the next step of assessing them. Assessment provides migration readiness and sizing recommendations based on properties you define. Below is the list of key assessment capabilities:
+
+- Modify assessment properties per your requirements, such as target Azure region, application isolation requirements, and reserved instance pricing.
+- Provide an App Service SKU recommendation and display monthly cost estimates.
+- Provide per-web-app migration readiness information, with detailed information on blockers and errors.
+
+You can create multiple assessments for the same web apps with different sets of assessment properties.
+
+For more information on web apps assessment, see:
+- [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate)
+- [Create an Azure App Service assessment](../migrate/how-to-create-azure-app-service-assessment.md)
+- [Tutorial to assess web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md)
+- [Azure App Service assessments in Azure Migrate Discovery and assessment tool](../migrate/concepts-azure-webapps-assessment-calculation.md)
+- [Assessment best practices in Azure Migrate Discovery and assessment tool](../migrate/best-practices-assessment.md)
++
+Next steps:
+[At-scale migration of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
app-service App Service Migration Discover Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-discover-net.md
+
+ Title: Discover .NET apps to Azure App Service
+description: Discover .NET migration resources available to Azure App Service.
+++ Last updated : 03/29/2021+
+ms.devlang: csharp
+++
+# At-scale discovery of .NET web apps
+
+For ASP.NET web app discovery, you need to either install a new Azure Migrate appliance or upgrade an existing Azure Migrate appliance.
+
+Once the appliance is configured, Azure Migrate initiates the discovery of web apps deployed on IIS web servers hosted within your on-premises VMware environment. Discovery of ASP.NET web apps provides the following key capabilities:
+
+- Agentless discovery of up to 20,000 web apps with a single Azure Migrate appliance
+- A rich, interactive dashboard with a list of IIS web servers and underlying VM infrastructure details. Web app discovery surfaces information such as:
+ - web app name
+ - web server type and version
+ - URLs
+ - binding port
+ - application pool
+- If web app discovery fails, the discovery dashboard allows easy navigation to review relevant error messages, possible causes of failure, and suggested remediation actions
+
+For more information about web app discovery, see:
+
+- [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate)
+- [Discover and assess ASP.NET apps at-scale with Azure Migrate](https://azure.microsoft.com/blog/discover-and-assess-aspnet-apps-atscale-with-azure-migrate/)
+- [Discover software inventory on on-premises servers with Azure Migrate](../migrate/how-to-discover-applications.md)
+- [Discover web apps and SQL Server instances](../migrate/how-to-discover-sql-existing-project.md)
++
+Next steps:
+[At-scale assessment of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
A few features that were available in earlier versions of App Service Environmen
- Send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25. - Monitor your traffic with Network Watcher or network security group (NSG) flow logs.-- Configure an IP-based Transport Layer Security (TLS) or Secure Sockets Layer (SSL) binding with your apps.
+- Configure individual custom domain [IP SSL bindings](..\configure-ssl-bindings.md#create-binding) with your apps.
- Configure a custom domain suffix. - Perform a backup and restore operation on a storage account behind a firewall.
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
1. Select **Create**. 1. On the **Basics** tab, configure a resource group, name, and region for the Private Endpoint. Select **Next**. 1. On the **Resource** tab, select **Next**.
-1. On the **Virtual Network** tab, configure a virtual network and subnet where the private endpoint network interface should be provisioned to. Configure whether the private endpoint should have a dynamic or static IP address. Last, configure if you want a new private link zone to be created to automatically manage IP addressing. Select **Next**.
+1. On the **Virtual Network** tab, configure a virtual network and subnet where the private endpoint network interface should be provisioned to. Configure whether the private endpoint should have a dynamic or static IP address. Select **Next**.
1. On the **Tags** tab, optionally configure resource tags. Select **Next**. 1. Select **Create**.
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
recommendations: false
# Form Recognizer custom template model
-Custom templateΓÇöformerly custom form-are easy-to-train models that accurately extract labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
+Custom template models (formerly custom form) are easy-to-train models that accurately extract labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable for extracting fields from highly structured documents with defined visual templates.
Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The [Read API](concept-read.md) supports detecting the following languages in yo
> extracted for a given language, see previous sections.
+> [!NOTE]
+> **Detected languages vs extracted languages**
+>
+> This section lists the languages that the Read model can detect in your documents, if present. Note that this list differs from the list of languages we support extracting text from, which is specified in the above sections for each model.
+ | Language | Code | ||| | Afrikaans | `af` |
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 06/13/2022 Last updated : 06/29/2022 <!-- markdownlint-disable MD024 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## June 2022
+### [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June Update
+
+The June release is the latest update to the Form Recognizer Studio. There are considerable UX and accessibility improvements in this update:
+
+* 🆕 **Code sample for JavaScript and C#**. The Studio code tab now includes sample code written in JavaScript and C# in addition to the existing Python code.
+* 🆕 **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload UI.
+* 🆕 **New feature for custom projects**. Custom projects now support creating storage accounts and file directories when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
+ ### Form Recognizer v3.0 preview release
-The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities. There are considerable updates across the feature APIs:
+The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities and presents extensive updates across the feature APIs:
* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer-grained document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction). * [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields are also multi-page by default. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
azure-arc Create Data Controller Indirect Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md
Follow the steps below to create an Azure Arc data controller using the Azure po
Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: > [!NOTE]
-> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name.
+> The example commands below assume that you created a data controller named `arc-dc` and a Kubernetes namespace named `arc`. If you used different values, update the commands accordingly.
```console
-kubectl get datacontroller/arc --namespace arc
+kubectl get datacontroller/arc-dc --namespace arc
``` ```console
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
Once you have run the command, continue on to [Monitoring the creation status](#
Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: > [!NOTE]
-> The example commands below assume that you created a data controller and Kubernetes namespace with the name `arc`. If you used a different namespace/data controller name, you can replace `arc` with your name.
+> The example commands below assume that you created a data controller named `arc-dc` and a Kubernetes namespace named `arc`. If you used different values, update the commands accordingly.
```console
-kubectl get datacontroller/arc --namespace arc
+kubectl get datacontroller/arc-dc --namespace arc
``` ```console
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Title: Create a data controller using Kubernetes tools
-description: Create a data controller using Kubernetes tools
+ Title: Create a Data Controller using Kubernetes tools
+description: Create a Data Controller using Kubernetes tools
Last updated 11/03/2021
-# Create Azure Arc-enabled data controller using Kubernetes tools
+# Create Azure Arc data controller using Kubernetes tools
-A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller.
## Prerequisites Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
-To create the data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create the Azure Arc data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) > [!NOTE]
-> Some of the steps to create the data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
+> Some of the steps to create the Azure Arc data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
### Cleanup from past installations
-If you installed the data controller in the past on the same cluster and deleted the data controller, there may be some cluster level objects that would still need to be deleted.
+If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get mutatingwebhookconfiguration`.
-Run the following commands to delete the data controller cluster level objects:
+Run the following commands to delete the Azure Arc data controller cluster level objects:
```console # Cleanup azure arc data service artifacts
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
## Overview
-Creating the data controller has the following high level steps:
+Creating the Azure Arc data controller has the following high level steps:
-1. Create a namespace in which the data controller will be created.
-1. Create the deployer service account.
+ > [!IMPORTANT]
+ > Some of the steps below require Kubernetes cluster administrator permissions.
+
+1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
+1. Create a namespace in which the data controller will be created.
1. Create the bootstrapper service including the replica set, service account, role, and role binding. 1. Create a secret for the data controller administrator username and password.
+1. Create the webhook deployment job, cluster role and cluster role binding.
1. Create the data controller.
+## Create the custom resource definitions
+
+Run the following command to create the custom resource definitions.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
+
+```console
+kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
+```
+ ## Create a namespace in which the data controller will be created Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. In this example and the remainder of the examples in this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout.
openshift.io/sa.scc.supplemental-groups: 1000700001/10000
openshift.io/sa.scc.uid-range: 1000700001/10000 ```
-If other people who are not cluster administrators will be using this namespace, create a namespace admin role and grant that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
--
-## Create the deployer service account
-
- > [!IMPORTANT]
- > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
-
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
-
-```console
-kubectl apply --namespace arc -f arcdata-deployer.yaml
-```
-
+If other people who are not cluster administrators will be using this namespace, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
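As a sketch of such a role binding, the built-in `admin` ClusterRole can be bound to a user scoped to the namespace only (the user name here is hypothetical; the linked repository has more granular role examples):

```yaml
# Illustrative: bind the built-in "admin" ClusterRole to a user,
# scoped to the arc namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: arc-namespace-admin
  namespace: arc
subjects:
- kind: User
  name: jane@contoso.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a namespaced `RoleBinding`, the user gains full rights inside `arc` without any cluster-wide permissions.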
## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
-Run the following command to create a "bootstrap" job to install the bootstrapper along with related cluster-scope and namespaced objects, such as custom resource definitions (CRDs), the service account and bootstrapper role.
+Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/bootstrap.yaml
+kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
```
-The [uninstall.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/uninstall.yaml) is for uninstalling the bootstrapper and related Kubernetes objects, except the CRDs.
-
-Verify that the bootstrapper pod is running using the following command.
+Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
```console
-kubectl get pod --namespace arc -l app=bootstrapper
+kubectl get pod --namespace arc
```
-If the status is not _Running_, run the command a few times until the status is _Running_.
-
-The bootstrap.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
+The bootstrapper.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment does not have direct access to the Microsoft Container Registry, you can do the following:
- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry.-- Change the image URL for the bootstrapper image in the bootstrap.yaml file.-- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.
+- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) for your private container registry.
+- Add an image pull secret to the bootstrapper container. See example below.
+- Change the image location for the bootstrapper image. See example below.
+
+The example below assumes that you created an image pull secret named `arc-private-registry`.
+
+```yaml
+# Only the relevant part of the bootstrapper.yaml template file is shown here
+ spec:
+ serviceAccountName: sa-bootstrapper
+ nodeSelector:
+ kubernetes.io/os: linux
+ imagePullSecrets:
+ - name: arc-private-registry #Create this image pull secret if you are using a private container registry
+ containers:
+ - name: bootstrapper
+ image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.1.0_2021-11-02 #Change this registry location if you are using a private container registry.
+ imagePullPolicy: Always
+```
## Create secrets for the metrics and logs dashboards
kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.y
Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify during Kubernetes native tools deployment](monitor-certificates.md).
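For orientation, a controller login secret follows the standard Kubernetes secret shape; an illustrative sketch with placeholder credentials (the name and values shown are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: controller-login-secret
type: Opaque
data:
  username: YXJjYWRtaW4=          # base64 of "arcadmin" (placeholder)
  password: UGxhY2Vob2xkZXIxMjM=  # base64 of "Placeholder123" (placeholder)
```

Values under `data` must be base64 encoded, for example `echo -n 'arcadmin' | base64`.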
+## Create the webhook deployment job, cluster role and cluster role binding
+
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
+
+Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
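If you prefer to script the edit, the placeholder substitution can be done with `sed`; a minimal sketch, assuming the target namespace is `arc` and using a small stand-in file in place of the downloaded template:

```console
# Illustrative stand-in for the downloaded template; the real web-hook.yaml
# is larger, but the substitution step is identical.
cat > web-hook.yaml <<'EOF'
kind: ClusterRoleBinding
metadata:
  name: {{namespace}}:crb-arc-webhook
subjects:
- kind: ServiceAccount
  namespace: {{namespace}}
EOF

# Replace every {{namespace}} placeholder with the target namespace.
NAMESPACE=arc
sed -i "s/{{namespace}}/${NAMESPACE}/g" web-hook.yaml
```

The same one-line `sed` works against the real downloaded template file.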
+
+Run the following command to create the cluster role and cluster role bindings.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
+
+```console
+kubectl create -n arc -f <path to the edited template file on your computer>
+```
+ ## Create the data controller Now you are ready to create the data controller itself.
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
Edit the following as needed:
Edit the following as needed:
- **name**: The default name of the data controller is `arc`, but you can change it if you want. - **displayName**: Set this to the same value as the name attribute at the top of the file. - **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.-- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.
+- **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.
- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images. - **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version. - **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
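For orientation, an abbreviated, illustrative excerpt showing where these fields sit in the file (the downloaded template remains authoritative for field placement and the apiVersion; the private registry value is hypothetical):

```yaml
# Abbreviated sketch; use the apiVersion from the downloaded template.
apiVersion: arcdata.microsoft.com/v1
kind: DataController
metadata:
  name: arc
spec:
  docker:
    registry: mcr.microsoft.com        # or your private registry address
    repository: arcdata
    imageTag: v1.1.0_2021-11-02        # defaulted to the latest tag in the template
    imagePullPolicy: Always
  credentials:
    dockerRegistry: arc-private-registry  # image pull secret, if required
```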
If you encounter any troubles with creation, please see the [troubleshooting gui
## Next steps - [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md)-- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
+- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
Run the notebook by clicking **Run All**.
Follow the instructions to [Arc-enabled the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
-Open the Azure portal by using this special URL: [https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash](https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash).
+Open the Azure portal by using this special URL: [https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home).
Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal.
At this time, pre-release testing is supported for certain customers and partner
## Next steps
-[Release notes - Azure Arc-enabled data services](release-notes.md)
+[Release notes - Azure Arc-enabled data services](release-notes.md)
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools
+ Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
Last updated 05/27/2022
-# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
During a data controller upgrade, portions of the data control plane such as Cus
In this article, you'll apply a .yaml file to:
-1. Create the service account for running upgrade.
-1. Upgrade the bootstrapper.
-1. Upgrade the data controller.
+1. Specify a service account.
+1. Set the cluster roles.
+1. Set the cluster role binding.
+1. Set the job.
> [!NOTE] > Some of the data services tiers and modes are generally available and some are in preview.
In this article, you'll apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the data controller, you'll need:
+Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster - An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag: v1.0.0_2021
## Install tools
-To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
+To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or Helm if you're familiar with those tools and Kubernetes YAML/JSON.
Found 2 valid versions. The current datacontroller version is <current-version>
... ```
+## Create or download .yaml file
+
+To upgrade the data controller, you'll apply a .yaml file to the Kubernetes cluster. The example file for the upgrade is available on GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
+
+You can download the file - and other Azure Arc-related demonstration files - by cloning the repository. For example:
+
+```console
+git clone https://github.com/microsoft/azure_arc
+```
+
+For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub docs.
+
+The following steps use files from the repository.
+
+In the .yaml file, you'll replace `{{namespace}}` with your namespace.
+ ## Upgrade data controller This section shows how to upgrade an indirectly connected data controller.
This section shows how to upgrade an indirectly connected data controller.
### Upgrade
-You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
+### Specify the service account
-### Create the service account for running upgrade
+The upgrade requires an elevated service account for the upgrade job.
- > [!IMPORTANT]
- > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
+To specify the service account:
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+1. Describe the service account in a .yaml file. The following example sets a name for `ServiceAccount` as `sa-arc-upgrade-worker`:
-```console
-kubectl apply --namespace arc -f arcdata-deployer.yaml
-```
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
+1. Edit the file as needed.
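For readers without the repository at hand, the referenced range is a plain service-account definition along these lines (a sketch based on the name given above; check the file in the repository for the authoritative contents):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-arc-upgrade-worker
```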
-### Upgrade the bootstrapper
+### Set the cluster roles
-The following command creates a job for upgrading the bootstrapper and related Kubernetes objects.
+A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
-```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml
-```
+1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
+
+1. Edit the file as needed.
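As a sketch of what that range contains, a cluster role named `arc:cr-upgrade-worker` that allows all API groups, resources, and verbs looks like this (the repository file is authoritative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: arc:cr-upgrade-worker
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```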
+
+### Set the cluster role binding
+
+A cluster role binding (`ClusterRoleBinding`) links the service account and the cluster role.
+
+1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
+
+1. Edit the file as needed.
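A binding that links the service account and cluster role named above would look roughly like this (the binding name here is illustrative, not taken from the repository file):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-arc-upgrade-worker   # illustrative name
subjects:
- kind: ServiceAccount
  name: sa-arc-upgrade-worker
  namespace: {{namespace}}
roleRef:
  kind: ClusterRole
  name: arc:cr-upgrade-worker
  apiGroup: rbac.authorization.k8s.io
```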
+
+### Specify the job
+
+A job creates a pod to execute the upgrade.
+
+1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
+
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
+
+1. Edit the file for your environment.
### Upgrade the data controller
-The following command patches the image tag to upgrade the data controller.
+Specify the image tag to upgrade the data controller to.
-```console
-kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml
-```
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::
+### Apply the resources
+
+Run the following kubectl command to apply the resources to your cluster.
+
+```bash
+kubectl apply -n <namespace> -f upgrade-indirect-k8s.yaml
+```
## Monitor the upgrade status
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Last updated 03/03/2021 -- description: "Control agent upgrades for Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, upgrade"
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Last updated 04/05/2021 -- description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Last updated 06/03/2022 -- description: "Use Cluster Connect to securely connect to Azure Arc-enabled Kubernetes clusters"
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example: ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
``` - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example: ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
``` - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace): ```console
- kubectl create serviceaccount admin-user
+ kubectl create serviceaccount demo-user
``` 1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example: ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
```
-1. Get the service account's token using the following commands:
+1. Create a service account token:
```console
- SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ kubectl apply -f - <<EOF
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: demo-user-secret
+ annotations:
+ kubernetes.io/service-account.name: demo-user
+ type: kubernetes.io/service-account-token
+ EOF
``` ```console
- TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
``` ### [Azure PowerShell](#tab/azure-powershell)
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace): ```console
- kubectl create serviceaccount admin-user
+ kubectl create serviceaccount demo-user
``` 1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example: ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
```
-1. Get the service account's token using the following commands:
+1. Create a service account token:
```console
- $SECRET_NAME = (kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ kubectl apply -f demo-user-secret.yaml
+ ```
+
+ Contents of `demo-user-secret.yaml`:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: demo-user-secret
+ annotations:
+ kubernetes.io/service-account.name: demo-user
+ type: kubernetes.io/service-account-token
``` ```console
- $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret $SECRET_NAME -o jsonpath='{$.data.token}'))))
+ $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret demo-user-secret -o jsonpath='{$.data.token}'))))
```
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
Last updated 03/03/2021 -- description: "This article provides an architectural overview of Azure Arc-enabled Kubernetes agents" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
Last updated 04/05/2021 -- description: "This article provides a conceptual overview of Azure RBAC capability on Azure Arc-enabled Kubernetes"
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Last updated 04/05/2021 -- description: "This article provides a conceptual overview of Cluster Connect capability of Azure Arc-enabled Kubernetes"
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Last updated 05/24/2022 -- description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes." keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
azure-arc Conceptual Connectivity Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md
Last updated 11/23/2021 -- description: "This article provides an overview of the connectivity modes supported by Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md
Last updated 05/25/2021 -- description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc-enabled Kubernetes"
azure-arc Conceptual Data Exchange https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-data-exchange.md
Last updated 11/23/2021 -- description: "This article provides information on data exchanged between Azure Arc-enabled Kubernetes cluster and Azure" keywords: "Kubernetes, Arc, Azure, containers"
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md
Last updated 11/24/2021 -- description: "This article provides a conceptual overview of cluster extensions capability of Azure Arc-enabled Kubernetes"
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes"
Last updated 10/19/2021 -- description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters"
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Last updated 05/24/2022 -- description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
azure-arc Kubernetes Resource View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md
Last updated 10/31/2021 -- description: Learn how to interact with Kubernetes resources to manage an Azure Arc-enabled Kubernetes cluster from the Azure portal.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Title: "Overview of Azure Arc-enabled Kubernetes"
-- Last updated 05/03/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes."
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/plan-at-scale-deployment.md
Last updated 04/12/2021 -- description: Onboard large number of clusters to Azure Arc-enabled Kubernetes for configuration management
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Last updated 03/03/2021 -- description: "Describes Arc validation program for Kubernetes distributions" keywords: "Kubernetes, Arc, Azure, K8s, validation"
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 05/11/2022 Last updated : 06/29/2022
The azcmagent tool is used to configure the Azure Connected Machine agent during
* **disconnect** - Disconnect the machine from Azure Arc. * **show** - View agent status and its configuration properties (Resource Group name, Subscription ID, version, etc.), which can help when troubleshooting an issue with the agent. Include the `-j` parameter to output the results in JSON format. * **config** - View and change settings to enable features and control agent behavior.
-* **check** - Validate network connectivity.
* **logs** - Create a .zip file in the current directory containing logs to assist you while troubleshooting. * **version** - Show the Connected Machine agent version. * **-useStderr** - Direct error and verbose output to stderr. Include the `-json` parameter to output the results in JSON format.
azure-fluid-relay Azure Function Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md
fluid.url: https://fluidframework.com/docs/build/tokenproviders/
In the [Fluid Framework](https://fluidframework.com/), TokenProviders are responsible for creating and signing tokens that the `@fluidframework/azure-client` uses to make requests to the Azure Fluid Relay service. The Fluid Framework provides a simple, insecure TokenProvider for development purposes, aptly named **InsecureTokenProvider**. Each Fluid service must implement a custom TokenProvider based on the particular service's authentication and security considerations.
-Each Azure Fluid Relay service tenant you create is assigned a **tenant ID** and its own unique **tenant secret key**. The secret key is a **shared secret**. Your app/service knows it, and the Azure Fluid Relay service knows it. TokenProviders must know the secret key to sign requests, but the secret key cannot be included in client code.
+Each Azure Fluid Relay resource you create is assigned a **tenant ID** and its own unique **tenant secret key**. The secret key is a **shared secret**. Your app/service knows it, and the Azure Fluid Relay service knows it. TokenProviders must know the secret key to sign requests, but the secret key cannot be included in client code.
## Implement an Azure Function to sign tokens
-One option for building a secure token provider is to create HTTPS endpoint and create a TokenProvider implementation that makes authenticated HTTPS requests to that endpoint to retrieve tokens. This enables you to store the *tenant secret key* in a secure location, such as [Azure Key Vault](../../key-vault/general/overview.md).
+One option for building a secure token provider is to create an HTTPS endpoint and a TokenProvider implementation that makes authenticated HTTPS requests to that endpoint to retrieve tokens. This path enables you to store the *tenant secret key* in a secure location, such as [Azure Key Vault](../../key-vault/general/overview.md).
The complete solution has two pieces:
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRe
export default httpTrigger; ```
-The `generateToken` function, found in the `@fluidframework/azure-service-utils` package, generates a token for the given user that is signed using the tenant's secret key. This enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve tokens.
+The `generateToken` function, found in the `@fluidframework/azure-service-utils` package, generates a token for the given user that is signed using the tenant's secret key. This method enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve tokens.
### Deploy the Azure Function
Azure Functions can be deployed in several ways. See the **Deploy** section of t
### Implement the TokenProvider
-TokenProviders can be implemented in many ways, but must implement two separate API calls: `fetchOrdererToken` and `fetchStorageToken`. These APIs are responsible for fetching tokens for the Fluid orderer and storage services respectively. Both functions return `TokenResponse` objects representing the token value. The Fluid Framework runtime calls these two APIs as needed to retrieve tokens.
-
+TokenProviders can be implemented in many ways, but must implement two separate API calls: `fetchOrdererToken` and `fetchStorageToken`. These APIs are responsible for fetching tokens for the Fluid orderer and storage services respectively. Both functions return `TokenResponse` objects representing the token value. The Fluid Framework runtime calls these two APIs as needed to retrieve tokens. Note that while your application code uses only one service endpoint to establish connectivity with the Azure Fluid Relay service, the azure-client, in conjunction with the service, internally translates that endpoint into an orderer and storage endpoint pair. Those two endpoints are used from that point on for that session. That's why you need to implement the two separate token-fetching functions, one for each.
To ensure that the tenant secret key is kept secure, it is stored in a secure backend location and is only accessible from within the Azure Function. To retrieve tokens, you need to make a `GET` or `POST` request to your deployed Azure Function, providing the `tenantID` and `documentId`, and `userID`/`userName`. The Azure Function is responsible for the mapping between the tenant ID and a tenant key secret to appropriately generate and sign the token.
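As a minimal sketch of the client-side piece, the hypothetical provider below implements the two token-fetching calls against such an Azure Function endpoint. The `ITokenResponse` shape is a simplified local stand-in (not imported from `@fluidframework/azure-client`), the class name `SimpleFunctionTokenProvider` is illustrative rather than the shipped `AzureFunctionTokenProvider`, and the HTTP transport is injected so it can be swapped or stubbed:

```typescript
// Simplified local stand-in for the token-response shape (assumption: the real
// provider interface returns an object carrying the signed JWT).
interface ITokenResponse { jwt: string; }
type Fetcher = (url: string) => Promise<string>;

class SimpleFunctionTokenProvider {
  constructor(
    private readonly azFunctionUrl: string,
    private readonly user: { userId: string; userName: string },
    private readonly fetcher: Fetcher,
  ) {}

  // Called by the Fluid runtime when it needs a token for the orderer service.
  public async fetchOrdererToken(tenantId: string, documentId?: string): Promise<ITokenResponse> {
    return { jwt: await this.getToken(tenantId, documentId) };
  }

  // Called by the Fluid runtime when it needs a token for the storage service.
  public async fetchStorageToken(tenantId: string, documentId?: string): Promise<ITokenResponse> {
    return { jwt: await this.getToken(tenantId, documentId) };
  }

  private getToken(tenantId: string, documentId?: string): Promise<string> {
    const params = new URLSearchParams({
      tenantId,
      userId: this.user.userId,
      userName: this.user.userName,
    });
    if (documentId !== undefined) {
      params.set("documentId", documentId);
    }
    // The Azure Function holds the tenant secret, signs the token, and returns it;
    // the secret never reaches the client.
    return this.fetcher(`${this.azFunctionUrl}?${params.toString()}`);
  }
}
```

Because both fetch methods funnel through one helper, the provider stays easy to extend if the orderer and storage requests ever need different parameters.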
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
import { AzureClient, AzureFunctionTokenProvider } from "@fluidframework/azure-c
const config = { tenantId: "myTenantId", tokenProvider: new AzureFunctionTokenProvider("https://myAzureAppUrl"+"/api/GetAzureToken", { userId: "test-user",userName: "Test User" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
import { AzureClient } from "@fluidframework/azure-client";
const config = { tenantId: "myTenantId", tokenProvider: new AzureFunctionTokenProvider("https://myStaticWebAppUrl/api/GetAzureToken", { userId: "test-user",userName: "Test User" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The following table explains the binding configuration properties that you set i
The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding.
++ ::: zone-end ## Next steps
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python" The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The output binding uses the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement, which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+ ::: zone-end ## Next steps
azure-government Compliance Tic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/compliance-tic.md
Previously updated : 12/01/2020
+recommendations: false
Last updated : 06/28/2022 # Trusted Internet Connections guidance
-This article explains how U.S. government agencies can use security features in Azure cloud services to help achieve compliance with the Trusted Internet Connections (TIC) initiative. It applies to both Azure and Azure Government cloud service environments and covers TIC implications for Azure infrastructure as a service (IaaS) and Azure platform as a service (PaaS) cloud service models.
+This article explains how you can use security features in Azure cloud services to help achieve compliance with the Trusted Internet Connections (TIC) initiative. It applies to both Azure and Azure Government cloud service environments, and covers TIC implications for Azure infrastructure as a service (IaaS) and Azure platform as a service (PaaS) cloud service models.
## Trusted Internet Connections overview
-The purpose of the TIC initiative is to enhance network security across the U.S. federal government. This objective was initially realized by consolidating external connections and routing all network traffic through approved devices at TIC access points. In the intervening years, cloud computing became well established, paving the way for modern security architectures and a shift away from the primary focus on perimeter security. Accordingly, the TIC initiative evolved to provide federal agencies with increased flexibility to use modern security capabilities.
+The purpose of the TIC initiative is to enhance network security across the US federal government. This objective was initially realized by consolidating external connections and routing all network traffic through approved devices at TIC access points. In the intervening years, cloud computing became well established, paving the way for modern security architectures and a shift away from the primary focus on perimeter security. Accordingly, the TIC initiative evolved to provide federal agencies with increased flexibility to use modern security capabilities.
### TIC 2.0 guidance
-The TIC initiative was originally outlined in the Office of Management and Budget (OMB) [Memorandum M-08-05](https://georgewbush-whitehouse.archives.gov/omb/memoranda/fy2008/m08-05.pdf) released in November 2007, and referred to in this article as TIC 2.0 guidance. The TIC program was envisioned to improve federal network perimeter security and incident response functions. TIC was originally designed to perform network analysis of all inbound and outbound .gov traffic to identify specific patterns in network data flows and uncover behavioral anomalies, such as botnet activity. Agencies were mandated to consolidate their external network connections and route all traffic through intrusion detection and prevention devices known as EINSTEIN. The devices are hosted at a limited number of network endpoints, which are referred to as *trusted internet connections*.
+The TIC initiative was originally outlined in the Office of Management and Budget (OMB) [Memorandum M-08-05](https://georgewbush-whitehouse.archives.gov/omb/memoranda/fy2008/m08-05.pdf) released in November 2007, and referred to in this article as TIC 2.0 guidance. The TIC program was envisioned to improve federal network perimeter security and incident response functions. TIC was originally designed to perform network analysis of all inbound and outbound .gov traffic. The goal was to identify specific patterns in network data flows and uncover behavioral anomalies, such as botnet activity. Agencies were mandated to consolidate their external network connections and route all traffic through intrusion detection and prevention devices known as EINSTEIN. The devices are hosted at a limited number of network endpoints, which are referred to as *trusted internet connections*.
The objective of TIC is for agencies to know:
The objective of TIC is for agencies to know:
Under TIC 2.0, all agency external connections must route through an OMB-approved TIC. Federal agencies are required to participate in the TIC program as a TIC Access Provider (TICAP), or by contracting services with one of the major Tier 1 internet service providers. These providers are referred to as Managed Trusted Internet Protocol Service (MTIPS) providers. TIC 2.0 includes mandatory critical capabilities that are performed by the agency and MTIPS provider. In TIC 2.0, the EINSTEIN version 2 intrusion detection and EINSTEIN version 3 accelerated (3A) intrusion prevention devices are deployed at each TICAP and MTIPS. The agency establishes a *Memorandum of Understanding* with the Department of Homeland Security (DHS) to deploy EINSTEIN capabilities to federal systems.
-As part of its responsibility to protect the .gov network, DHS requires the raw data feeds of agency net flow data to correlate incidents across the federal enterprise and perform analyses by using specialized tools. DHS routers provide the ability to collect IP network traffic as it enters or exits an interface. Network administrators can analyze the net flow data to determine the source and destination of traffic, the class of service, and other parameters. Net flow data is considered to be "non-content data" similar to the header, source IP, destination IP, and so on. Non-content data allows DHS to learn about the content: who was doing what and for how long.
+As part of its responsibility to protect the .gov network, DHS requires the raw data feeds of agency net flow data to correlate incidents across the federal enterprise and perform analyses by using specialized tools. DHS routers enable collection of IP network traffic as it enters or exits an interface. Network administrators can analyze the net flow data to determine the source and destination of traffic, the class of service, and other parameters. Net flow data is considered to be "non-content data" similar to the header, source IP, destination IP, and so on. Non-content data allows DHS to learn about the content: who was doing what and for how long.
-The TIC 2.0 initiative also includes security policies, guidelines, and frameworks that assume an on-premises infrastructure. As government agencies move to the cloud to achieve cost savings, operational efficiency, and innovation, the implementation requirements of TIC 2.0 can slow down network traffic. The speed and agility with which government users can access their cloud-based data is limited as a result.
+The TIC 2.0 initiative also includes security policies, guidelines, and frameworks that assume an on-premises infrastructure. Government agencies move to the cloud to achieve cost savings, operational efficiency, and innovation. However, the implementation requirements of TIC 2.0 can slow down network traffic. The speed and agility with which government users can access their cloud-based data is limited as a result.
### TIC 3.0 guidance
-In September 2019, OMB released [Memorandum M-19-26](https://www.whitehouse.gov/wp-content/uploads/2019/09/M-19-26.pdf) that rescinded prior TIC-related memorandums and introduced [TIC 3.0 guidance](https://www.cisa.gov/trusted-internet-connections). The previous OMB memorandums required agency traffic to flow through a physical TIC access point, which has proven to be an obstacle to the adoption of cloud-based infrastructure. For example, TIC 2.0 focused exclusively on perimeter security by channeling all incoming and outgoing agency data through a TIC access point. In contrast, TIC 3.0 recognizes the need to account for multiple and diverse security architectures rather than a single perimeter security approach. This flexibility allows agencies to choose how to implement security capabilities in a way that fits best into their overall network architecture, risk management approach, and more.
+In September 2019, OMB released [Memorandum M-19-26](https://www.whitehouse.gov/wp-content/uploads/2019/09/M-19-26.pdf) that rescinded prior TIC-related memorandums and introduced [TIC 3.0 guidance](https://www.cisa.gov/trusted-internet-connections). The previous OMB memorandums required agency traffic to flow through a physical TIC access point, which has proven to be an obstacle to the adoption of cloud-based infrastructure. For example, TIC 2.0 focused exclusively on perimeter security by channeling all incoming and outgoing agency data through a TIC access point. In contrast, TIC 3.0 recognizes the need to account for multiple and diverse security architectures rather than a single perimeter security approach. This flexibility allows agencies to choose how to implement security capabilities in a way that fits best into their overall network architecture, risk management approach, and more.
-To enable this flexibility, the Cybersecurity & Infrastructure Security Agency (CISA) works with federal agencies to conduct pilots in diverse agency environments, which results in the development of TIC 3.0 use cases. For TIC 3.0 implementations, CISA encourages agencies to leverage [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents) in conjunction with the National Institute of Standards and Technology (NIST) [Cybersecurity Framework](https://www.nist.gov/cyberframework) (CSF) and [NIST SP 800-53](https://nvd.nist.gov/800-53/Rev4) *Security and Privacy Controls for Federal Information Systems and Organizations*. These documents can help agencies design a secure network architecture and determine appropriate requirements from cloud service providers.
+To enable this flexibility, the Cybersecurity & Infrastructure Security Agency (CISA) works with federal agencies to conduct pilots in diverse agency environments, which result in the development of TIC 3.0 use cases. For TIC 3.0 implementations, CISA encourages agencies to use [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents) with the National Institute of Standards and Technology (NIST) [Cybersecurity Framework](https://www.nist.gov/cyberframework) (CSF) and [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final) *Security and Privacy Controls for Federal Information Systems and Organizations*. These documents can help agencies design a secure network architecture and determine appropriate requirements from cloud service providers.
-TIC 3.0 complements other federal initiatives focused on cloud adoption such as the Federal Risk and Authorization Management Program (FedRAMP), which is based on the NIST SP 800-53 standard augmented by FedRAMP controls and control enhancements. Agencies can leverage existing Azure and Azure Government FedRAMP High provisional authorizations (P-ATO) issued by the FedRAMP Joint Authorization Board, as well as Azure and Azure Government support for the NIST CSF, as described in [Azure compliance documentation](../../compliance/index.yml). To assist agencies with TIC 3.0 implementation when selecting cloud-based security capabilities, CISA has mapped TIC capabilities to the NIST CSF and NIST SP 800-53. For example, TIC 3.0 security objectives can be mapped to the five functions of the NIST CSF, including Identify, Protect, Detect, Respond, and Recover. The TIC security capabilities are mapped to the NIST CSF in the TIC 3.0 Security Capabilities Catalog available from [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
+TIC 3.0 complements other federal initiatives focused on cloud adoption such as the Federal Risk and Authorization Management Program (FedRAMP), which is based on the NIST SP 800-53 standard augmented by FedRAMP controls and control enhancements. Agencies can use existing Azure and Azure Government [FedRAMP High](/azure/compliance/offerings/offering-fedramp) provisional authorizations to operate (P-ATO) issued by the FedRAMP Joint Authorization Board. They can also use Azure and Azure Government support for the [NIST CSF](/azure/compliance/offerings/offering-nist-csf). To assist agencies with TIC 3.0 implementation when selecting cloud-based security capabilities, CISA has mapped TIC capabilities to the NIST CSF and NIST SP 800-53. For example, TIC 3.0 security objectives can be mapped to the five functions of the NIST CSF, including Identify, Protect, Detect, Respond, and Recover. The TIC security capabilities are mapped to the NIST CSF in the TIC 3.0 Security Capabilities Catalog available from [TIC 3.0 Core Guidance Documents](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
-TIC 3.0 is non-prescriptive cybersecurity guidance developed to provide agencies with flexibility to implement security capabilities that match their specific risk tolerance levels. While the guidance requires agencies to comply with all applicable telemetry requirements such as the National Cybersecurity Protection System (NCPS) and Continuous Diagnosis and Mitigation (CDM), TIC 3.0 currently only requires agencies to self-attest on their adherence to the TIC guidance.
+TIC 3.0 is non-prescriptive cybersecurity guidance developed to provide agencies with flexibility to implement security capabilities that match their specific risk tolerance levels. While the guidance requires agencies to comply with all applicable telemetry requirements such as the National Cybersecurity Protection System (NCPS) and Continuous Diagnosis and Mitigation (CDM), TIC 3.0 currently only requires agencies to self-attest on their adherence to the TIC guidance.
-With TIC 3.0, agencies have the option to maintain the legacy TIC 2.0 implementation that uses TIC access points while adopting TIC 3.0 capabilities. CISA provided guidance on how to implement the traditional TIC model in TIC 3.0, known as the [Traditional TIC Use Case](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
+With TIC 3.0, agencies can maintain the legacy TIC 2.0 implementation that uses TIC access points while adopting TIC 3.0 capabilities. CISA provided guidance on how to implement the traditional TIC model in TIC 3.0, known as the [Traditional TIC Use Case](https://www.cisa.gov/publication/tic-30-core-guidance-documents).
-The rest of this article provides customer guidance that is pertinent to Azure capabilities needed for legacy TIC 2.0 implementations; however, some of this guidance is also useful for TIC 3.0 requirements.
+The rest of this article provides guidance that is pertinent to Azure capabilities needed for legacy TIC 2.0 implementations; however, some of this guidance is also useful for TIC 3.0 requirements.
## Azure networking options

There are four main options to connect to Azure:

-- **Direct internet connection:** Connect to Azure services directly through an open internet connection. The medium and the connection are public. Application and transport-level encryption are relied on to ensure privacy. Bandwidth is limited by a site's connectivity to the internet. Use more than one active provider to ensure resiliency.
-- **Virtual Private Network (VPN):** Connect to your Azure virtual network privately by using a VPN gateway. The medium is public because it traverses a site's standard internet connection, but the connection is encrypted in a tunnel to ensure privacy. Bandwidth is limited depending on the VPN devices and the configuration you choose. Azure point-to-site connections usually are limited to 100 Mbps. Site-to-site connections range from 100 Mbps to 10 Gbps.
-- **Azure ExpressRoute:** ExpressRoute is a direct connection to Microsoft services. ExpressRoute uses a provider at a peering location to connect to Microsoft Enterprise edge routers. ExpressRoute uses different peering types for IaaS and PaaS/SaaS services, private peering and Microsoft peering. Bandwidth ranges from 50 Mbps to 10 Gbps.
-- **Azure ExpressRoute Direct:** ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering location. ExpressRoute Direct removes a third-party connectivity provider from the required hops. Bandwidth ranges from 10 Gbps to 100 Gbps.
+- **Direct internet connection** – Connect to Azure services directly through an open internet connection. The medium and the connection are public. Application and transport-level encryption are relied on to ensure data protection. Bandwidth is limited by a site's connectivity to the internet. Use more than one active provider to ensure resiliency.
+- **Virtual Private Network (VPN)** – Connect to your Azure virtual network privately by using a VPN gateway. The medium is public because it traverses a site's standard internet connection, but the connection is encrypted in a tunnel to ensure data protection. Bandwidth is limited depending on the VPN devices and the configuration you choose. Azure point-to-site connections usually are limited to 100 Mbps. Site-to-site connections range from 100 Mbps to 10 Gbps.
+- **Azure ExpressRoute** – ExpressRoute is a direct connection to Microsoft services. ExpressRoute uses a provider at a peering location to connect to Microsoft Enterprise edge routers. ExpressRoute uses different peering types for IaaS and PaaS/SaaS services, private peering and Microsoft peering. Bandwidth ranges from 50 Mbps to 10 Gbps.
+- **Azure ExpressRoute Direct** – ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering location. ExpressRoute Direct removes a third-party connectivity provider from the required hops. Bandwidth ranges from 10 Gbps to 100 Gbps.
-To enable the connection from the *agency* to Azure or Microsoft 365, without routing traffic through the agency TIC, the agency must use an encrypted tunnel or a dedicated connection to the cloud service provider (CSP). The CSP services can ensure that connectivity to the agency cloud assets isn't offered via the public Internet for direct agency personnel access.
+To enable the connection from the *agency* to Azure or Microsoft 365, without routing traffic through the agency TIC, the agency must use:
-For Azure only, the second option (VPN) and third option (ExpressRoute) can meet these requirements when they're used in conjunction with services that limit access to the Internet.
+- An encrypted tunnel, or
+- A dedicated connection to the cloud service provider (CSP).
+
+The CSP services can ensure that connectivity to the agency cloud assets isn't offered via the public Internet for direct agency personnel access.
+
+For Azure only, the second option (VPN) and third option (ExpressRoute) can meet these requirements when they're used with services that limit access to the Internet.
Microsoft 365 is compliant with TIC guidance by using either [ExpressRoute with Microsoft Peering](../../expressroute/expressroute-circuit-peerings.md) enabled or an Internet connection that encrypts all traffic by using the Transport Layer Security (TLS) 1.2. Agency end users on the agency network can connect via their agency network and TIC infrastructure through the Internet. All remote Internet access to Microsoft 365 is blocked and routes through the agency.
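The TLS 1.2 floor described above can also be enforced on the client side. The following is a minimal, illustrative Python sketch using only the standard library; it assumes nothing Azure-specific and simply shows how a client context can refuse protocol versions older than TLS 1.2:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# mirroring the transport requirement described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake attempted with this context now requires TLS 1.2+.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

A context configured this way can be passed to standard-library HTTP clients (for example, `urllib.request.urlopen(url, context=context)`) so that connections failing the TLS 1.2 requirement are rejected during the handshake.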
Compliance with TIC policy by using Azure IaaS is relatively simple because Azure customers manage their own virtual network routing.
-The main requirement to help assure compliance with the TIC 2.0 reference architecture is to ensure your virtual network is a private extension of the agency network. To be a *private* extension, the policy requires that no traffic leave your network except via the on-premises TIC network connection. This process is known as *forced tunneling*. For TIC 2.0 compliance, the process routes all traffic from any system in the CSP environment through an on-premises gateway on an organization's network out to the Internet through the TIC.
+The main requirement to help assure compliance with the TIC 2.0 reference architecture is to ensure your virtual network is a private extension of the agency network. To be a *private* extension, the policy requires that no traffic is allowed to leave your network except via the on-premises TIC network connection. This process is known as *forced tunneling*. For TIC 2.0 compliance, the process routes all traffic from any system in the CSP environment through an on-premises gateway on an organization's network out to the Internet through the TIC.
Azure IaaS TIC compliance is divided into two major steps:
### Azure IaaS TIC compliance: Configuration
-To configure a TIC-compliant architecture with Azure, you must first prevent direct Internet access to your virtual network and then force Internet traffic through the on-premises network.
+To configure a TIC-compliant architecture with Azure, you must first prevent direct Internet access to your virtual network, and then force Internet traffic through the on-premises network.
#### Prevent direct Internet access
Azure automatically creates system routes and assigns the routes to each subnet
:::image type="content" source="./media/tic-diagram-c.png" alt-text="TIC force tunneling" border="false":::
-All traffic that leaves the virtual network needs to route through the on-premises connection, to ensure that all traffic traverses the agency TIC. You create custom routes by creating user-defined routes, or by exchanging Border Gateway Protocol (BGP) routes between your on-premises network gateway and an Azure VPN gateway. For more information about user-defined routes, see [Virtual network traffic routing: User-defined routes](../../virtual-network/virtual-networks-udr-overview.md#user-defined). For more information about the BGP, see [Virtual network traffic routing: Border Gateway Protocol](../../virtual-network/virtual-networks-udr-overview.md#border-gateway-protocol).
+All traffic that leaves the virtual network needs to route through the on-premises connection, to ensure that all traffic traverses the agency TIC. You create custom routes by creating user-defined routes, or by exchanging Border Gateway Protocol (BGP) routes between your on-premises network gateway and an Azure VPN gateway.
+
+- For more information about user-defined routes, see [Virtual network traffic routing: User-defined routes](../../virtual-network/virtual-networks-udr-overview.md#user-defined).
+- For more information about the BGP, see [Virtual network traffic routing: Border Gateway Protocol](../../virtual-network/virtual-networks-udr-overview.md#border-gateway-protocol).
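The forced-tunneling behavior these routes produce amounts to a longest-prefix match with source precedence: a user-defined `0.0.0.0/0` route overrides the system default route so Internet-bound traffic goes to the virtual network gateway instead. The sketch below is a simplified, hypothetical model of that selection logic, not Azure's actual implementation:

```python
import ipaddress

# Hypothetical route table: the system route sends 0.0.0.0/0 to the
# Internet, but a user-defined route for the same prefix overrides it
# and forces traffic to the on-premises gateway (forced tunneling).
routes = [
    {"prefix": "10.0.0.0/16", "next_hop": "VnetLocal", "source": "system"},
    {"prefix": "0.0.0.0/0", "next_hop": "Internet", "source": "system"},
    {"prefix": "0.0.0.0/0", "next_hop": "VirtualNetworkGateway", "source": "user"},
]

def effective_next_hop(destination: str) -> str:
    """Pick the longest matching prefix; on a tie, a user-defined route
    wins over a system route (a simplified model of route selection)."""
    dest = ipaddress.ip_address(destination)
    candidates = [r for r in routes
                  if dest in ipaddress.ip_network(r["prefix"])]
    # Sort ascending by (prefix length, user-defined beats system),
    # then take the highest-precedence candidate.
    candidates.sort(key=lambda r: (ipaddress.ip_network(r["prefix"]).prefixlen,
                                   r["source"] == "user"))
    return candidates[-1]["next_hop"]

print(effective_next_hop("10.0.1.4"))       # VnetLocal (stays in the VNet)
print(effective_next_hop("93.184.216.34"))  # VirtualNetworkGateway (forced tunnel)
```

With the user-defined default route removed, the same external destination would fall through to the system `Internet` route, which is exactly what forced tunneling is meant to prevent.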
#### Add user-defined routes
Azure offers several ways to audit TIC compliance.
#### View effective routes
-Confirm that your default route is propagated by observing the *effective routes* for a particular virtual machine, a specific NIC, or a user-defined route table in the [Azure portal](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-azure-portal) or in [Azure PowerShell](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-powershell). The **Effective Routes** show the relevant user-defined routes, BGP advertised routes, and system routes that apply to the relevant entity, as shown in the following figure:
+Confirm your default route propagation by observing the *effective routes* for a particular virtual machine, a specific NIC, or a user-defined route table in the [Azure portal](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-azure-portal) or in [Azure PowerShell](../../virtual-network/diagnose-network-routing-problem.md#diagnose-using-powershell). The **Effective Routes** show the relevant user-defined routes, BGP advertised routes, and system routes that apply to the relevant entity, as shown in the following figure:
:::image type="content" source="./media/tic-screen-1.png" alt-text="Effective routes" border="false":::
Azure PaaS services, such as Azure Storage, are accessible through an internet-r
When Azure PaaS services are integrated with a virtual network, the service is privately accessible from that virtual network. You can apply custom routing for 0.0.0.0/0 via user-defined routes or BGP. Custom routing ensures that all Internet-bound traffic routes on-premises to traverse the TIC. Integrate Azure services into virtual networks by using the following patterns:

-- **Deploy a dedicated instance of a service:** An increasing number of PaaS services are deployable as dedicated instances with virtual network-attached endpoints, sometimes called *VNet injection*. You can deploy an App Service Environment in *isolated mode* to allow the network endpoint to be constrained to a virtual network. The App Service Environment can then host many Azure PaaS services, such as Azure Web Apps, Azure API Management, and Azure Functions. For more information, see [Deploy dedicated Azure services into virtual networks](../../virtual-network/virtual-network-for-azure-services.md).
-- **Use virtual network service endpoints:** An increasing number of PaaS services allow the option to move their endpoint to a virtual network private IP instead of a public address. For more information, see [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
-- **Use Azure Private Link:** Provide a shared service with a private endpoint in your virtual network. Traffic between your virtual network and the service travels across the Microsoft backbone network and does not traverse the public Internet. For more information, see [Azure Private Link](../../private-link/private-link-overview.md).
+- **Deploy a dedicated instance of a service** – An increasing number of PaaS services are deployable as dedicated instances with virtual network-attached endpoints, sometimes called *VNet injection*. You can deploy an App Service Environment in *isolated mode* to allow the network endpoint to be constrained to a virtual network. The App Service Environment can then host many Azure PaaS services, such as Web Apps, API Management, and Functions. For more information, see [Deploy dedicated Azure services into virtual networks](../../virtual-network/virtual-network-for-azure-services.md).
+- **Use virtual network service endpoints** – An increasing number of PaaS services allow the option to move their endpoint to a virtual network private IP instead of a public address. For more information, see [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+- **Use Azure Private Link** – Provide a shared service with a private endpoint in your virtual network. Traffic between your virtual network and the service travels across the Microsoft backbone network and doesn't traverse the public Internet. For more information, see [Azure Private Link](../../private-link/private-link-overview.md).
### Virtual network integration details
The following diagram shows the general network flow for access to Azure PaaS se
:::image type="content" source="./media/tic-diagram-e.png" alt-text="PaaS connectivity options for TIC" border="false":::

1. A private connection is made to Azure by using ExpressRoute. ExpressRoute private peering with forced tunneling is used to force all customer virtual network traffic over ExpressRoute and back to on-premises. Microsoft Peering isn't required.
-2. Azure VPN Gateway, when used in conjunction with ExpressRoute and Microsoft Peering, can overlay end-to-end IPSec encryption between the customer virtual network and the on-premises edge.
+2. Azure VPN Gateway, when used with ExpressRoute and Microsoft Peering, can overlay end-to-end IPSec encryption between the customer virtual network and the on-premises edge.
3. Network connectivity to the customer virtual network is controlled by using network security groups that allow customers to permit/deny traffic based on IP, port, and protocol.
4. Traffic to and from the customer private virtual network is monitored through Azure Network Watcher and data is analyzed using Log Analytics and Microsoft Defender for Cloud.
5. The customer virtual network extends to the PaaS service by creating a service endpoint for the customer's service.
-6. The PaaS service endpoint is secured to **default deny all** and to only allow access from specified subnets within the customer virtual network. Securing service resources to a virtual network provides improved security by fully removing public Internet access to resources and allowing traffic only from your virtual network.
+6. The PaaS service endpoint is secured to **default deny all** and to only allow access from specified subnets within the customer virtual network. Securing service resources to a virtual network provides improved security by fully removing public Internet access to resources and allowing traffic only from your virtual network.
7. Other Azure services that need to access resources within the customer virtual network should either be:
   - Deployed directly into the virtual network, or
   - Allowed selectively based on the guidance from the respective Azure service.
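Step 3 above relies on network security group rules evaluated in priority order, with the first match deciding the outcome. The sketch below models that first-match evaluation with a hypothetical rule set (the addresses, ports, and priorities are illustrative, not a recommended configuration):

```python
import ipaddress

# Hypothetical network security group: rules are evaluated in priority
# order (lowest number first) and the first matching rule decides.
rules = [
    {"priority": 100, "access": "Allow", "protocol": "Tcp",
     "source": "10.1.0.0/16", "port": 443},
    {"priority": 4096, "access": "Deny", "protocol": "*",
     "source": "0.0.0.0/0", "port": "*"},  # catch-all deny
]

def evaluate(src_ip: str, protocol: str, port: int) -> str:
    """Return the access decision for a flow, first match wins."""
    src = ipaddress.ip_address(src_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["protocol"] not in ("*", protocol):
            continue
        if rule["port"] not in ("*", port):
            continue
        if src in ipaddress.ip_network(rule["source"]):
            return rule["access"]
    return "Deny"  # nothing matched

print(evaluate("10.1.2.3", "Tcp", 443))     # Allow (trusted subnet)
print(evaluate("203.0.113.9", "Tcp", 443))  # Deny  (catch-all rule)
```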
Virtual network injection enables customers to selectively deploy dedicated inst
#### Option B: Use virtual network service endpoints (service tunnel)
-An increasing number of Azure multitenant services offer *service endpoints*. Service endpoints are an alternate method for integrating to Azure virtual networks. Virtual network service endpoints extend your virtual network IP address space and the identity of your virtual network to the service over a direct connection. Traffic from the virtual network to the Azure service always stays within the Azure backbone network.
+An increasing number of Azure multi-tenant services offer *service endpoints*. Service endpoints are an alternate method for integrating to Azure virtual networks. Virtual network service endpoints extend your virtual network IP address space and the identity of your virtual network to the service over a direct connection. Traffic from the virtual network to the Azure service always stays within the Azure backbone network.
After you enable a service endpoint for a service, use policies exposed by the service to restrict connections for the service to that virtual network. Access checks are enforced in the platform by the Azure service. Access to a locked resource is granted only if the request originates from the allowed virtual network or subnet, or from the two IPs that are used to identify your on-premises traffic if you use ExpressRoute. Use this method to effectively prevent inbound/outbound traffic from directly leaving the PaaS service.
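The access check described above can be thought of as an allow list keyed on the traffic's origin: a request passes only if it comes from a permitted subnet or from the IPs that identify on-premises ExpressRoute traffic. The sketch below is a simplified illustration; the subnet and the two NAT IPs are made-up example values:

```python
import ipaddress

# Hypothetical allow list for a locked-down PaaS resource: traffic is
# accepted only from one virtual network subnet, or from the two public
# IPs that identify on-premises traffic arriving over ExpressRoute.
allowed_subnets = [ipaddress.ip_network("10.0.1.0/24")]
expressroute_nat_ips = {ipaddress.ip_address("198.51.100.10"),
                        ipaddress.ip_address("198.51.100.11")}

def is_allowed(source: str) -> bool:
    """Admit traffic only from the allowed subnet or ExpressRoute IPs."""
    addr = ipaddress.ip_address(source)
    return (addr in expressroute_nat_ips
            or any(addr in net for net in allowed_subnets))

print(is_allowed("10.0.1.25"))      # True  (allowed virtual network subnet)
print(is_allowed("198.51.100.10"))  # True  (ExpressRoute identity IP)
print(is_allowed("203.0.113.5"))    # False (public Internet source)
```

The key property, as in the service-endpoint model, is default deny: any source not explicitly listed is rejected, which prevents traffic from reaching the PaaS service directly over the public Internet.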
#### Option C: Use Azure Private Link
-Customers can use [Azure Private Link](../../private-link/private-link-overview.md) to access Azure PaaS services and Azure-hosted customer/partner services over a [private endpoint](../../private-link/private-endpoint-overview.md) in their virtual network, ensuring that traffic between their virtual network and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. Customers can also create their own [private link service](../../private-link/private-link-service-overview.md) in their own virtual network and deliver it to their customers.
+You can use [Azure Private Link](../../private-link/private-link-overview.md) to access Azure PaaS services and Azure-hosted customer or partner services over a [private endpoint](../../private-link/private-endpoint-overview.md) in your virtual network, ensuring that traffic between your virtual network and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. You can also create your own [private link service](../../private-link/private-link-service-overview.md) in your own virtual network and deliver it to your customers.
-Azure private endpoint is a network interface that connects customers privately and securely to a service powered by Azure Private Link. Private endpoint uses a private IP address from customer's virtual network, effectively bringing the service into customer's virtual network.
+Azure private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network.
## Tools for network situational awareness
Azure provides cloud-native tools to help ensure that you have the situational a
### Azure Policy
-[Azure Policy](../../governance/policy/overview.md) is an Azure service that provides your organization with better ability to audit and enforce compliance initiatives. Customers can plan and test their Azure Policy rules now to assure future TIC compliance.
+[Azure Policy](../../governance/policy/overview.md) is an Azure service that provides your organization with better ability to audit and enforce compliance initiatives. You can plan and test your Azure Policy rules now to assure future TIC compliance.
Azure Policy is targeted at the subscription level. The service provides a centralized interface where you can perform compliance tasks, including:
+
- Manage initiatives
- Configure policy definitions
- Audit compliance
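An audit of this kind boils down to evaluating each resource against a rule and flagging violations. The sketch below illustrates the idea with a hypothetical rule ("no network interface may have a public IP", which would bypass forced tunneling); the resource names and the rule itself are invented for illustration, not a real Azure Policy definition:

```python
# Hypothetical inventory of network interfaces in a subscription.
resources = [
    {"name": "nic-app-01", "type": "networkInterface", "public_ip": None},
    {"name": "nic-app-02", "type": "networkInterface", "public_ip": "20.0.0.5"},
    {"name": "nic-db-01", "type": "networkInterface", "public_ip": None},
]

def audit(resources):
    """Return the names of resources violating the 'no public IP' rule."""
    return [r["name"] for r in resources
            if r["type"] == "networkInterface" and r["public_ip"] is not None]

print(audit(resources))  # ['nic-app-02']
```

In Azure Policy, the same rule would be expressed declaratively in a policy definition and assigned at the subscription or management-group scope, with compliance results surfaced in the portal rather than computed by your own code.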
Networks in regions that are monitored by Network Watcher can conduct next hop t
## Conclusions
-You can easily configure network access to help comply with TIC 2.0 guidance, as well as leverage Azure support for the NIST CSF and NIST SP 800-53 to address TIC 3.0 requirements.
+You can easily configure network access to help comply with TIC 2.0 guidance and use Azure support for the NIST CSF and NIST SP 800-53 to address TIC 3.0 requirements.
+
+## Next steps
+
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Azure Government overview](../documentation-government-welcome.md)
+- [Azure Government security](../documentation-government-plan-security.md)
+- [Azure Government compliance](../documentation-government-plan-compliance.md)
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 4](/azure/compliance/offerings/offering-dod-il4)
+- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md)
+- [Secure Azure Computing Architecture](./secure-azure-computing-architecture.md)
+- [Azure guidance for secure isolation](../azure-secure-isolation-guidance.md)
+- [Azure Policy overview](../../governance/policy/overview.md)
+- [Azure Policy regulatory compliance built-in initiatives](../../governance/policy/samples/index.md#regulatory-compliance)
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
Last updated 05/10/2022-+
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
description: Use data collection endpoints to uniquely configure ingestion setti
Previously updated : 3/16/2022 Last updated : 3/16/2022
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Last updated 02/09/2022 ++
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Last updated 5/19/2022
+
# Azure Monitor agent overview
To configure the agent to use private links for network communications with Azur
## Next steps

- [Install the Azure Monitor agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To collect data from virtual machines using the Azure Monitor agent, you'll:
1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations.
1. Associate the data collection rule to specific virtual machines.
-## How data collection rule associations work
-
-You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
-
-For example, consider an environment with a set of virtual machines running a line of business application and other virtual machines running SQL Server. You might have:
-
-- One default data collection rule that applies to all virtual machines.
-- Separate data collection rules that collect data specifically for the line of business application and for SQL Server.
-
-The following diagram illustrates the associations for the virtual machines to the data collection rules.
-
-![A diagram showing one virtual machine hosting a line of business application and one virtual machine hosting SQL Server. Both virtual machine are associated with data collection rule named central-i t-default. The virtual machine hosting the line of business application is also associated with a data collection rule called lob-app. The virtual machine hosting SQL Server is associated with a data collection rule called s q l.](media/data-collection-rule-azure-monitor-agent/associations.png)
-
+ You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
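The many-to-many relationship described above can be sketched with plain data structures. This is an illustrative model only (the VM and rule names are hypothetical, echoing the default/LOB/SQL example, and this is not an Azure API):

```python
# Illustrative model only (hypothetical names, not an Azure API): each
# association links one virtual machine to one data collection rule.
associations = [
    ("vm-lob-app", "central-it-default"),
    ("vm-lob-app", "lob-app"),
    ("vm-sql", "central-it-default"),
    ("vm-sql", "sql"),
]

def rules_for(vm):
    """Return every data collection rule associated with a virtual machine."""
    return [rule for machine, rule in associations if machine == vm]

print(rules_for("vm-sql"))  # ['central-it-default', 'sql']
```

Here the SQL Server machine receives both the default rule and the SQL-specific rule, which is exactly why per-requirement rules compose cleanly.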
## Create data collection rule and association
-To send data to Log Analytics, create the data collection rule in the **same region** where your Log Analytics workspace resides. You can still associate the rule to machines in other supported regions.
+To send data to Log Analytics, create the data collection rule in the **same region** as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
### [Portal](#tab/portal)
To send data to Log Analytics, create the data collection rule in the **same reg
### [API](#tab/api)
-1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
+1. Create a DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
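As a rough sketch of step 2, the REST call targets an ARM resource URL of the following shape. The subscription, resource group, and rule names are placeholders, and the `api-version` value here is an assumption; confirm both against the REST API reference linked above:

```python
# Sketch only: assemble the ARM URL used by the DCR create (PUT) call.
# Subscription/resource-group/rule names are placeholders, and the
# api-version default is an assumption -- confirm against the REST API docs.
def dcr_url(subscription, resource_group, rule_name, api_version="2021-04-01"):
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Insights"
        f"/dataCollectionRules/{rule_name}"
        f"?api-version={api_version}"
    )

print(dcr_url("<subscription-id>", "my-resource-group", "my-dcr"))
```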
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Last updated 02/07/2022
+ms.reviewer: shseth
# Collecting Event Tracing for Windows (ETW) events for analysis in Azure Monitor Logs
azure-monitor Diagnostics Extension To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-to-application-insights.md
Title: Send Azure Diagnostics data to Application Insights
description: Update the Azure Diagnostics public configuration to send data to Application Insights. Last updated 03/31/2022+
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights (Preview)
+ Title: Azure AD authentication for Application Insights
description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Last updated 08/02/2021
ms.devlang: csharp, java, javascript, python
-# Azure AD authentication for Application Insights (Preview)
+# Azure AD authentication for Application Insights
-Application Insights now supports Azure Active Directory (Azure AD) authentication. By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
+Application Insights now supports [Azure Active Directory (Azure AD) authentication](../../active-directory/authentication/overview-authentication.md#what-is-azure-active-directory-authentication). By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
-Typically, using various authentication systems can be cumbersome and pose risk since it's difficult to manage credentials at a large scale. You can now choose to opt-out of local authentication and ensure only telemetry that is exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your Application Insights resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational (alerting/autoscale etc.) and business decisions.
+Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt-out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts), [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-microsoft-azure), etc.) and business decisions.
-> [!IMPORTANT]
-> Azure AD authentication is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## Prerequisites
-Below are SDKs/scenarios not supported in the Public Preview:
-- [Application Insights Java 2.x SDK](java-2x-agent.md) ΓÇô Azure AD authentication is only available for Application Insights Java Agent >=3.2.0. -- [ApplicationInsights JavaScript Web SDK](javascript.md). -- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.-- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead. -- On by default Codeless monitoring (for languages) for App Service, VM/Virtual machine scale sets, Azure Functions etc.-- [Availability tests](availability-overview.md).-- [Profiler](profiler-overview.md).--
-## Prerequisites to enable Azure AD authentication ingestion
+The following are prerequisites to enable Azure AD authenticated ingestion.
- Familiarity with:
  - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
  - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
  - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
- You have an "Owner" role to the resource group to grant access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+- Understand the [unsupported scenarios](#unsupported-scenarios).
## Configuring and enabling Azure AD based authentication
var config = new TelemetryConfiguration
var credential = new DefaultAzureCredential(); config.SetAzureTokenCredential(credential); + ``` Below is an example of configuring the `TelemetryConfiguration` using .NET Core:
services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" }); ```++ ### [Node.js](#tab/nodejs) > [!NOTE]
appInsights.defaultClient.config.aadTokenCredential = credential;
``` + ### [Java](#tab/java) > [!NOTE]
appInsights.defaultClient.config.aadTokenCredential = credential;
#### System-assigned Managed Identity
-Below is an example on how to configure Java agent to use system-assigned managed identity for authentication with Azure AD.
+Below is an example of how to configure the Java agent to use a system-assigned managed identity for authentication with Azure AD.
```JSON {
Below is an example on how to configure Java agent to use system-assigned manage
#### User-assigned managed identity
-Below is an example on how to configure Java agent to use user-assigned managed identity for authentication with Azure AD.
+Below is an example of how to configure the Java agent to use a user-assigned managed identity for authentication with Azure AD.
```JSON {
Below is an example on how to configure Java agent to use user-assigned managed
#### Client secret
-Below is an example on how to configure Java agent to use service principal for authentication with Azure AD. We recommend users to use this type of authentication only during development. The ultimate goal of adding authentication feature is to eliminate secrets.
+Below is an example of how to configure the Java agent to use a service principal for authentication with Azure AD. We recommend using this type of authentication only during development. The ultimate goal of adding the authentication feature is to eliminate secrets.
```JSON {
Below is an example on how to configure Java agent to use service principal for
:::image type="content" source="media/azure-ad-authentication/client-secret-cs.png" alt-text="Screenshot of Client secret with client secret." lightbox="media/azure-ad-authentication/client-secret-cs.png"::: + ### [Python](#tab/python) > [!NOTE]
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi
Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass it into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
-Below are the following types of authentication that are supported by the Opencensus Azure Monitor exporters. Managed identities are recommended to be used in production environments.
+The following types of authentication are supported by the `Opencensus` Azure Monitor exporters. Managed identities are recommended in production environments.
#### System-assigned managed identity
tracer = Tracer(
) ... ```+ ## Disable local authentication
-After the Azure AD authentication is enabled, you can choose to disable local authentication. This will allow you to ingest telemetry authenticated exclusively by Azure AD and impacts data access (for example, through API Keys).
+After the Azure AD authentication is enabled, you can choose to disable local authentication. This configuration will allow you to ingest telemetry authenticated exclusively by Azure AD and impacts data access (for example, through API Keys).
You can disable local authentication by using the Azure portal, Azure Policy, or programmatically.
You can disable local authentication by using the Azure portal, Azure Policy, or
1. From your Application Insights resource, select **Properties** under the *Configure* heading in the left-hand menu. Then select **Enabled (click to change)** if the local authentication is enabled.
- :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (click to change) local authentication button.":::
+ :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (select to change) local authentication button.":::
1. Select **Disabled** and apply changes.
You can disable local authentication by using the Azure portal, Azure Policy, or
1. Once your resource has disabled local authentication, you'll see the corresponding info in the **Overview** pane.
- :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled(click to change) highlighted.":::
+ :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled (select to change) highlighted.":::
### Azure Policy
Below is an example Azure Resource Manager template that you can use to create a
```
+## Unsupported scenarios
+
+The following SDKs and features are unsupported for use with Azure AD authenticated ingestion.
+
+- [Application Insights Java 2.x SDK](java-2x-agent.md)<br>
+ Azure AD authentication is only available for Application Insights Java Agent >=3.2.0.
+- [ApplicationInsights JavaScript Web SDK](javascript.md).
+- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
+
+- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead.
+- On-by-default codeless monitoring (for languages) for App Service, VM/Virtual Machine Scale Sets, Azure Functions, etc.
+- [Availability tests](availability-overview.md).
+- [Profiler](profiler-overview.md).
+ ## Troubleshooting

This section provides distinct troubleshooting scenarios and steps that users can take to resolve any issue before they raise a support ticket.
This section provides distinct troubleshooting scenarios and steps that users ca
The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected using a tool such as Fiddler. You should filter traffic to the IngestionEndpoint set in the Connection String.
-#### HTTP/1.1 400 Authentication not support
+#### HTTP/1.1 400 Authentication not supported
-This indicates that the Application Insights resource has been configured for Azure AD only, but the SDK hasn't been correctly configured and is sending to the incorrect API.
+This error indicates that the resource has been configured for Azure AD only. The SDK hasn't been correctly configured and is sending to the incorrect API.
> [!NOTE]
> "v2/track" does not support Azure AD. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
Next steps should be to review the SDK configuration.
#### HTTP/1.1 401 Authorization required
-This indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This may indicate an issue with Azure Active Directory.
+This error indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This error may indicate an issue with Azure Active Directory.
Next steps should be to identify exceptions in the SDK logs or network errors from Azure Identity. #### HTTP/1.1 403 Unauthorized
-This indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
+This error indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
Next steps should be to review the Application Insights resource's access control. The SDK must be configured with a credential that has been granted the "Monitoring Metrics Publisher" role.
Next steps should be to review the Application Insights resource's access contro
The Application Insights .NET SDK emits error logs using event source. To learn more about collecting event source logs, visit [Troubleshooting no data - collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView). If the SDK fails to get a token, the exception message is logged as:
-"Failed to get AAD Token. Error message: "
+`Failed to get AAD Token. Error message: `
### [Node.js](#tab/nodejs)
-Internal logs could be turned on using the following setup. Once this is enabled, error logs will be shown in the console, including any error related to Azure AD integration. For example, failure to generate the token when wrong credentials are supplied or errors when ingestion endpoint fails to authenticate using the provided credentials.
+Internal logs can be turned on using the following setup. Once enabled, error logs will be shown in the console, including any errors related to Azure AD integration: for example, failure to generate a token when wrong credentials are supplied, or errors when the ingestion endpoint fails to authenticate using the provided credentials.
```javascript let appInsights = require("applicationinsights");
If using fiddler, you might see the following response header: `HTTP/1.1 401 Una
#### CredentialUnavailableException
-If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid `clientId` in your User Assigned Managed Identity configuration
+If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid `clientId` in your User Assigned Managed Identity configuration.
#### Failed to send telemetry
-If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This might be because of the provided credentials don't grant the access to ingest the telemetry into the component
+If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This warning might occur because the provided credentials don't grant access to ingest telemetry into the component.
If using fiddler, you might see the following response header: `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
Root cause might be one of the following reasons:
#### Invalid TenantId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong `tenantId` in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid `tenantId` in your client secret configuration.
#### Invalid client secret
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid `clientSecret` in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid `clientSecret` in your client secret configuration.
#### Invalid ClientId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong "clientId" in your client secret configuration
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid `clientId` in your client secret configuration.
- This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
+ This scenario can occur if the application hasn't been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
### [Python](#tab/python) #### Error starts with "credential error" (with no status code)
-Something is incorrect about the credential you're using and the client isn't able to obtain a token for authorization. It's usually due to lacking the required data for the state. An example would be passing in a system ManagedIdentityCredential but the resource isn't configured to use system-managed identity.
+Something is incorrect about the credential you're using, and the client isn't able to obtain a token for authorization. It's usually because the credential lacks the required data. An example would be passing in a system `ManagedIdentityCredential` when the resource isn't configured to use a system-assigned managed identity.
#### Error starts with "authentication error" (with no status code)
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
By default, IP addresses are temporarily collected but not stored in Application
When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup by using [GeoLite2 from MaxMind](https://dev.maxmind.com/geoip/geoip2/geolite2/). Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
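The lookup-then-discard behavior described above can be sketched as a small function. This is not Application Insights source code, just an illustration of the sequence (the address and lookup below are placeholders):

```python
# Sketch of the masking behavior described above (not Application Insights
# source code): populate the geolocation fields, then discard the address.
def mask_client_ip(telemetry, geo_lookup):
    geo = geo_lookup(telemetry["client_IP"])  # e.g. a GeoLite2-style lookup
    telemetry["client_City"] = geo.get("city", "")
    telemetry["client_StateOrProvince"] = geo.get("state", "")
    telemetry["client_CountryOrRegion"] = geo.get("country", "")
    telemetry["client_IP"] = "0.0.0.0"  # the real address is discarded
    return telemetry

record = mask_client_ip(
    {"client_IP": "203.0.113.7"},  # placeholder address
    lambda ip: {"city": "Redmond", "state": "Washington", "country": "United States"},
)
print(record["client_IP"])  # 0.0.0.0
```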
+Geolocation data can be removed in the following ways.
+
+* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md)
+* [Use a custom initializer](../app/api-filtering-sampling.md)
> [!NOTE]
> Application Insights uses an older version of the GeoLite2 database. If you experience accuracy issues with IP to geolocation mappings, as a workaround you can disable IP masking and use another geomapping service to convert the `client_IP` field of the underlying telemetry to a more accurate geolocation. We are currently working on an update to improve geolocation accuracy.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Download the [applicationinsights-agent-3.3.0.jar](https://github.com/microsoft/
> If you're upgrading from 3.2.x to 3.3.0: > > - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - Exception records are no longer recorded for failed dependencies, they are only recorded for failed requests.
> > If you're upgrading from 3.1.x: >
Java 3.x includes the following instrumentation libraries.
* JMS consumers * Kafka consumers * Netty/WebFlux
+* Quartz
* Servlets * Spring scheduling
Autocollected dependencies without downstream distributed trace propagation:
### Autocollected logs
+* Log4j (including MDC/Thread Context properties)
+* Logback (including MDC properties)
+* JBoss Logging (including MDC properties)
* java.util.logging
-* Log4j, which includes MDC properties
-* SLF4J/Logback, which includes MDC properties
### Autocollected metrics
Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+ * [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 1.0.0+ * [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+
-* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.13.0+
+* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+
* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+ * [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+ * [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+ * [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
-[//]: # "the above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
+[//]: # "Cosmos 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
+
+[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0" [//]: # "" [//]: # "var table = document.querySelector('#tg-sb-content > div > table')"
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-standalone-troubleshoot).
+See the dedicated [troubleshooting article](java-standalone-troubleshoot.md).
## Release notes
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
# Configuration options - Azure Monitor Application Insights for Java
-> [!WARNING]
-> **If you are upgrading from 3.0 Preview**
->
-> Please review all the configuration options below carefully, as the json structure has completely changed,
-> in addition to the file name itself which went all lowercase.
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Connection string and role name
If you specify a relative path, it will be resolved relative to the directory wh
The file should contain only the connection string, for example:

```
-InstrumentationKey=...
+InstrumentationKey=...;IngestionEndpoint=...;LiveEndpoint=...
```

Not setting the connection string will disable the Java agent.
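A connection string is a semicolon-delimited list of `key=value` pairs, which is why the example above strings several settings together. A minimal sketch of splitting one apart (the endpoint values below are placeholders):

```python
# Sketch: the connection string is a semicolon-delimited list of key=value
# pairs; this helper splits it apart. The values below are placeholders.
def parse_connection_string(connection_string):
    parts = (p.split("=", 1) for p in connection_string.split(";") if p)
    return dict(parts)

cs = ("InstrumentationKey=00000000-0000-0000-0000-000000000000;"
      "IngestionEndpoint=https://example.applicationinsights.azure.com/;"
      "LiveEndpoint=https://example.livediagnostics.monitor.azure.com/")
print(sorted(parse_connection_string(cs)))
```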
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different instrumentation
+keys, see [Instrumentation key overrides (preview)](#instrumentation-key-overrides-preview).
+ ## Cloud role name Cloud role name is used to label the component on the application map.
If cloud role name is not set, the Application Insights resource's name will be
You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME` (which will then take precedence over cloud role name specified in the json configuration).
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different cloud role
+names, see [Cloud role name overrides (preview)](#cloud-role-name-overrides-preview).
+ ## Cloud role instance Cloud role instance defaults to the machine name.
Starting from version 3.2.0, if you want to set a custom dimension programmatica
} ```
-## Instrumentation keys overrides (preview)
+## Instrumentation key overrides (preview)
This feature is in preview, starting from 3.2.3.
Instrumentation key overrides allow you to override the [default instrumentation
} ```
+## Cloud role name overrides (preview)
+
+This feature is in preview, starting from 3.3.0.
+
+Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name), for example:
+* Set one cloud role name for one http path prefix `/myapp1`.
+* Set another cloud role name for another http path prefix `/myapp2/`.
+
+```json
+{
+ "preview": {
+ "roleNameOverrides": [
+ {
+ "httpPathPrefix": "/myapp1",
+ "roleName": "12345678-0000-0000-0000-0FEEDDADBEEF"
+ },
+ {
+ "httpPathPrefix": "/myapp2",
+ "roleName": "87654321-0000-0000-0000-0FEEDDADBEEF"
+ }
+ ]
+ }
+}
+```
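The path-prefix matching that these overrides describe can be sketched as a first-match scan. This is only an illustration of the idea; the agent's actual matching and precedence rules may differ, and the role names below are placeholders:

```python
# Sketch of the first-match path-prefix logic the overrides above describe;
# the agent's actual matching/precedence rules may differ.
overrides = [
    {"httpPathPrefix": "/myapp1", "roleName": "role-for-app1"},
    {"httpPathPrefix": "/myapp2", "roleName": "role-for-app2"},
]

def role_name_for(path, default="default-role"):
    for override in overrides:
        if path.startswith(override["httpPathPrefix"]):
            return override["roleName"]
    return default

print(role_name_for("/myapp2/orders"))  # role-for-app2
```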
+ ## Autocollect InProc dependencies (preview)
-Starting from 3.2.0, if you want to capture controller "InProc" dependencies, please use the following configuration:
+Starting from version 3.2.0, if you want to capture controller "InProc" dependencies, please use the following configuration:
```json {
For more information, check out the [telemetry processor](./java-standalone-tele
## Auto-collected logging
-Log4j, Logback, and java.util.logging are auto-instrumented, and logging performed via these logging frameworks
-is auto-collected.
+Log4j, Logback, JBoss Logging, and java.util.logging are auto-instrumented,
+and logging performed via these logging frameworks is auto-collected.
Logging is only captured if it first meets the level that is configured for the logging framework, and second, also meets the level that is configured for Application Insights.
You can also set the level using the environment variable `APPLICATIONINSIGHTS_I
These are the valid `level` values that you can specify in the `applicationinsights.json` file, and how they correspond to logging levels in different logging frameworks:
-| level | Log4j | Logback | JUL |
-|-|--|||
-| OFF | OFF | OFF | OFF |
-| FATAL | FATAL | ERROR | SEVERE |
-| ERROR (or SEVERE) | ERROR | ERROR | SEVERE |
-| WARN (or WARNING) | WARN | WARN | WARNING |
-| INFO | INFO | INFO | INFO |
-| CONFIG | DEBUG | DEBUG | CONFIG |
-| DEBUG (or FINE) | DEBUG | DEBUG | FINE |
-| FINER | DEBUG | DEBUG | FINER |
-| TRACE (or FINEST) | TRACE | TRACE | FINEST |
-| ALL | ALL | ALL | ALL |
+| level | Log4j | Logback | JBoss | JUL |
+|-------|-------|---------|-------|--------|
+| OFF | OFF | OFF | OFF | OFF |
+| FATAL | FATAL | ERROR | FATAL | SEVERE |
+| ERROR (or SEVERE) | ERROR | ERROR | ERROR | SEVERE |
+| WARN (or WARNING) | WARN | WARN | WARN | WARNING |
+| INFO | INFO | INFO | INFO | INFO |
+| CONFIG | DEBUG | DEBUG | DEBUG | CONFIG |
+| DEBUG (or FINE) | DEBUG | DEBUG | DEBUG | FINE |
+| FINER | DEBUG | DEBUG | DEBUG | FINER |
+| TRACE (or FINEST) | TRACE | TRACE | TRACE | FINEST |
+| ALL | ALL | ALL | ALL | ALL |
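The table above can be expressed directly as a mapping. For example, here is the `level` value to java.util.logging (JUL) correspondence from the table:

```python
# The table above as a mapping from the applicationinsights.json "level"
# value to the corresponding java.util.logging (JUL) level.
AI_LEVEL_TO_JUL = {
    "OFF": "OFF", "FATAL": "SEVERE", "ERROR": "SEVERE", "WARN": "WARNING",
    "INFO": "INFO", "CONFIG": "CONFIG", "DEBUG": "FINE", "FINER": "FINER",
    "TRACE": "FINEST", "ALL": "ALL",
}
print(AI_LEVEL_TO_JUL["FATAL"])  # SEVERE
```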
> [!NOTE] > If an exception object is passed to the logger, then the log message (and exception object details)
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.3.0, you can capture request and response headers on your server (request) telemetry:
+Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
```json {
Starting from version 3.0.3, specific auto-collected telemetry can be suppressed
"mongo": { "enabled": false },
+ "quartz": {
+ "enabled": false
+ },
"rabbitmq": { "enabled": false },
Starting from version 3.2.0, the following preview instrumentations can be enabl
"grizzly": { "enabled": true },
- "quartz": {
- "enabled": true
- },
"springIntegration": { "enabled": true },
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
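A minimal `applicationinsights.json` proxy fragment might look like the following; the host and port values are placeholders, and you should confirm the exact configuration keys against the Java agent configuration reference:

```json
{
  "proxy": {
    "host": "myproxy.example.com",
    "port": 8080
  }
}
```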
+## Recovery from ingestion failures
+
+When sending telemetry to the Application Insights service fails, Application Insights Java 3.x will store the telemetry
+to disk and continue retrying from disk.
+
+The default limit for disk persistence is 50 Mb. If you have high telemetry volume, or need to be able to recover from
+longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
+
+```json
+{
+ "preview": {
+ "diskPersistenceMaxSizeMb": 50
+ }
+}
+```
+ ## Self-diagnostics "Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
or configuring [telemetry processors](./java-standalone-telemetry-processors.md)
## Multiple applications in a single JVM
-This use case is supported in Application Insights Java 3.x using [Instrumentation keys overrides (preview)](./java-standalone-config.md#instrumentation-keys-overrides-preview).
+This use case is supported in Application Insights Java 3.x using [Instrumentation key overrides (preview)](./java-standalone-config.md#instrumentation-key-overrides-preview).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
Last updated 04/03/2022+ # Monitoring Azure Monitor data reference
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
Title: Azure Monitor for existing Operations Manager customers description: Guidance for existing users of Operations Manager to transition monitoring of certain workloads to Azure Monitor as part of a transition to the cloud.- Last updated 04/05/2022+
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Last updated 10/18/2021+
Since you'll typically want to alert on issues for all of your critical Azure ap
## Next steps -- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
+- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Last updated 10/18/2021+
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Last updated 03/31/2022+
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Last updated 10/18/2021+
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
Last updated 10/18/2021-+ # Azure Monitor best practices - Planning your monitoring strategy and configuration
azure-monitor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices.md
Last updated 10/18/2021+ # Azure Monitor best practices
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Starting with agent version *ciprod03022019*, the Container insights integrated agent supports monitoring GPU (graphical processing unit) usage on GPU-aware Kubernetes cluster nodes, and monitoring pods and containers that request and use GPU resources.
+>[!NOTE]
+> As per the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-ga/#nvidia-gpu-metrics-deprecated), Kubernetes is deprecating GPU metrics that are reported by the kubelet for Kubernetes versions 1.20+. This means Container insights will no longer be able to collect the following metrics out of the box:
+> * containerGpuDutyCycle
+> * containerGpumemoryTotalBytes
+> * containerGpumemoryUsedBytes
+>
+> To continue collecting GPU metrics through Container insights, migrate to your GPU vendor-specific metrics exporter by December 31, 2022, and configure [Prometheus scraping](./container-insights-prometheus-integration.md) to scrape metrics from the deployed vendor-specific exporter.
+ ## Supported GPU vendors Container insights supports monitoring GPU clusters from the following GPU vendors:
Container insights automatically starts monitoring GPU usage on nodes, and GPU r
|Metric name |Metric dimension (tags) |Description | ||||
-|containerGpuDutyCycle |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor|Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
+|containerGpuDutyCycle* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor|Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
|containerGpuLimits |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName |Each container can specify limits as one or more GPUs. It is not possible to request or limit a fraction of a GPU. | |containerGpuRequests |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName |Each container can request one or more GPUs. It is not possible to request or limit a fraction of a GPU.|
-|containerGpumemoryTotalBytes |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes available to use for a specific container. |
-|containerGpumemoryUsedBytes |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes used by a specific container. |
+|containerGpumemoryTotalBytes* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes available to use for a specific container. |
+|containerGpumemoryUsedBytes* |container.azm.ms/clusterId, container.azm.ms/clusterName, containerName, gpuId, gpuModel, gpuVendor |Amount of GPU Memory in bytes used by a specific container. |
|nodeGpuAllocatable |container.azm.ms/clusterId, container.azm.ms/clusterName, gpuVendor |Number of GPUs in a node that can be used by Kubernetes. | |nodeGpuCapacity |container.azm.ms/clusterId, container.azm.ms/clusterName, gpuVendor |Total Number of GPUs in a node. |
+\* Based on Kubernetes upstream changes, these metrics are no longer collected out of the box. As a temporary hotfix for AKS, upgrade your GPU node pool to the latest version or \*-2022.06.08 or higher. For Arc-enabled Kubernetes, enable the feature gate DisableAcceleratorUsageMetrics=false in the kubelet configuration of the node and restart the kubelet. Once the upstream changes reach GA, this fix will no longer work; plan to migrate to your GPU vendor-specific metrics exporter by December 31, 2022.
+ ## GPU performance charts Container insights includes pre-configured charts for the metrics listed earlier in the table as a GPU workbook for every cluster. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4.x
->[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
- ## Supported Kubernetes versions The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights is a feature designed to monitor the performance of container
Container insights supports clusters running the Linux and Windows Server 2019 operating systems. The container runtimes it supports are Docker, Moby, and any CRI-compatible runtime such as CRI-O or containerd.
->[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
- Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
To create a custom workbook based on any of these workbooks, select the **View W
- **GPU**: Interactive GPU usage charts for each GPU-aware Kubernetes cluster node.
+>[!NOTE]
+> As per the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-ga/#nvidia-gpu-metrics-deprecated)
+ ## Resource Monitoring workbooks - **Deployments**: Status of your deployments and Horizontal Pod Autoscaler (HPA), including custom HPAs.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
Last updated 06/07/2022+
Ensuring that your development and operations have access to the same telemetry
## Next steps - Learn about the difference components of [Azure Monitor](overview.md).-- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
+- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
description: Monitoring data collected by Azure Monitor is separated into metric
documentationcenter: '' -- na Last updated 04/05/2022 + # Azure Monitor data platform
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Last updated 09/09/2021 + # Azure Monitor activity log
azure-monitor App Insights Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/app-insights-metrics.md
Title: Azure Application Insights log-based metrics | Microsoft Docs description: This article lists Azure Application Insights metrics with supported aggregations and dimensions. The details about log-based metrics include the underlying Kusto query statements. --+ Previously updated : 07/03/2019 Last updated : 07/03/2019
azure-monitor Classic Api Retirement Metrics Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/classic-api-retirement-metrics-autoscale.md
Title: Retire deployment APIs for Azure Monitor metrics and autoscale
description: Metrics and autoscale classic APIs, also called Azure Service Management (ASM) or RDFE deployment model being retired Last updated 11/19/2018+
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
Title: Collect Windows VM metrics in Azure Monitor with template
description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine -+ Last updated 05/04/2020
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Title: Collect Windows scale set metrics in Azure Monitor with template
description: Send guest OS metrics to the Azure Monitor metric store by using a Resource Manager template for a Windows virtual machine scale set -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
Title: Send classic Windows VM metrics to Azure Monitor metrics database
description: Send Guest OS metrics to the Azure Monitor data store for a Windows virtual machine (classic) -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
Title: Send classic Cloud Services metrics to Azure Monitor metrics database
description: Describes the process for sending Guest OS performance metrics for Azure classic Cloud Services to the Azure Monitor metric store. -+ Last updated 09/09/2019
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent
description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor. -+ Last updated 06/16/2022
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
description: Overview of data collection endpoints (DCEs) in Azure Monitor inclu
Last updated 03/16/2022
+ms.reviewer: nikeist
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Title: Data Collection Rules in Azure Monitor
description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them. Last updated 04/26/2022+
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
Last updated 02/22/2022
+ms.reviewer: nikeist
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
Title: Data collection rule transformations
description: Use transformations in a data collection rule in Azure Monitor to filter and modify incoming data. Last updated 02/21/2022
+ms.reviewer: nikeist
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Title: Metrics in Azure Monitor | Microsoft Docs description: Learn about metrics in Azure Monitor, which are lightweight monitoring data capable of supporting near real-time scenarios. documentationcenter: ''-+ --+ na
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
Last updated 05/09/2022+ # Create diagnostic settings at scale using Azure Policy
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Last updated 03/07/2022+ # Diagnostic settings in Azure Monitor
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
Title: Azure Monitor metric chart example
description: Learn about visualizing your Azure Monitor data. -+ Last updated 01/29/2019
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Last updated 08/31/2021+ # Azure Monitor Metrics aggregation and display explained
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of Metrics Explorer
description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources. - Last updated 06/09/2022
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
Last updated 06/01/2021+ # Custom metrics in Azure Monitor (preview)
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-dynamic-scope.md
Title: View multiple resources in the Azure metrics explorer
description: Learn how to visualize multiple resources by using the Azure metrics explorer. -+ Last updated 12/14/2020
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Getting started with Azure metrics explorer
description: Learn how to create your first metric chart with Azure metrics explorer. - Last updated 02/21/2022 + # Getting started with Azure Metrics Explorer
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Title: Send metrics to the Azure Monitor metric database using REST API
description: Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API -+ Last updated 09/24/2018
azure-monitor Metrics Supported Export Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported-export-diagnostic-settings.md
description: Discussion of NULL vs. zero values when exporting metrics and a poi
Last updated 07/22/2020+ # Azure Monitor platform metrics exportable via Diagnostic Settings
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Last updated 06/01/2022 + # Supported metrics with Azure Monitor
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
Title: Troubleshooting Azure Monitor metric charts
description: Troubleshoot the issues with creating, customizing, or interpreting metric charts -+ Last updated 06/09/2022- # Troubleshooting metrics charts
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
Last updated 09/15/2021+
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Title: Overview of Azure platform logs | Microsoft Docs
description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource. - Last updated 12/19/2019-+ # Overview of Azure platform logs
azure-monitor Portal Disk Metrics Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/portal-disk-metrics-deprecation.md
Last updated 03/12/2020+ # Disk metrics deprecation in the Azure portal
azure-monitor Resource Logs Blob Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-blob-format.md
Title: Prepare for format change to Azure Monitor resource logs
description: Azure resource logs moved to use append blobs on November 1, 2018. -+ Last updated 07/06/2018
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Last updated 06/01/2022+
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
Title: Azure resource logs supported services and schemas
description: Understand the supported services and event schemas for Azure resource logs. Last updated 05/10/2021+ # Common and service-specific schemas for Azure resource logs
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Title: Azure resource logs
description: Learn how to stream Azure resource logs to a Log Analytics workspace in Azure Monitor. - Last updated 05/09/2022 ++ # Azure resource logs
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
Last updated 09/11/2020+
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
description: How to authenticate requests and use the Azure Monitor REST API to
Last updated 05/09/2022 + # Azure Monitoring REST API walkthrough
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Last updated 07/15/2020+ # Stream Azure monitoring data to an event hub or external partner
azure-monitor Tutorial Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-metrics.md
Last updated 11/08/2021+ # Tutorial: Analyze metrics for an Azure resource
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
Last updated 11/08/2021+ # Tutorial: Collect and analyze resource logs from an Azure resource
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
Last updated 09/10/2019+
azure-monitor Ad Replication Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-replication-status.md
Last updated 01/24/2018+
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-key-vault-deprecated.md
Last updated 03/27/2019 +
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
Last updated 06/21/2018 +
azure-monitor Azure Web Apps Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-web-apps-analytics.md
Last updated 07/02/2018+
azure-monitor Capacity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/capacity-performance.md
Last updated 07/13/2017+
azure-monitor Cosmosdb Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/cosmosdb-insights-overview.md
Title: Monitor Azure Cosmos DB with Azure Monitor Cosmos DB insights| Microsoft
description: This article describes the Cosmos DB insights feature of Azure Monitor that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their Cosmos DB accounts. Last updated 05/11/2020+
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
Last updated 03/20/2018+
azure-monitor Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-insights-overview.md
Last updated 11/25/2020+
azure-monitor Network Performance Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-expressroute.md
Last updated 11/27/2018+
azure-monitor Network Performance Monitor Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-performance-monitor.md
Last updated 02/20/2018+
azure-monitor Network Performance Monitor Service Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor-service-connectivity.md
Last updated 02/20/2018+
azure-monitor Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-performance-monitor.md
Last updated 02/20/2018+
azure-monitor Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/redis-cache-insights-overview.md
Title: Azure Monitor for Azure Cache for Redis | Microsoft Docs
description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems. Last updated 09/10/2020+
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
Last updated 06/25/2018+
azure-monitor Solution Agenthealth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-agenthealth.md
Last updated 02/06/2020+
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-office-365.md
Last updated 03/30/2020+
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md
Last updated 06/16/2022 + # Monitoring solutions in Azure Monitor
azure-monitor Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-assessment.md
Last updated 05/05/2020+
azure-monitor Surface Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/surface-hubs.md
Last updated 01/16/2018+
azure-monitor Troubleshoot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/troubleshoot-workbooks.md
description: Provides troubleshooting guidance for Azure Monitor workbook-based
Last updated 06/17/2020+ # Troubleshooting workbook-based insights
azure-monitor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/vmware.md
Last updated 05/04/2018+
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
Last updated 03/26/2021-+ # Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Title: Overview of Log Analytics in Azure Monitor description: This overview describes Log Analytics, which is a tool in the Azure portal used to edit and run log queries for analyzing data in Azure Monitor logs. Previously updated : 10/04/2020 Last updated : 06/28/2022
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
Last updated 04/07/2022-+ <!-- VERSION 2.2-->
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
Last updated 04/05/2022+ # What is monitored by Azure Monitor?
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Last updated 04/27/2022+
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
Last updated 10/27/2021+ # Azure Monitor partner integrations
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
+ # Azure Policy built-in definitions for Azure Monitor
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
Last updated 04/05/2022 + # Resource Manager template samples for Azure Monitor
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
Last updated 11/27/2017 ++ # Roles, permissions, and security in Azure Monitor
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
+ # Azure Policy Regulatory Compliance controls for Azure Monitor
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
Last updated 06/07/2022+
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation"
description: "What's new in Azure Monitor documentation" Last updated 04/04/2022+ # What's new in Azure Monitor documentation
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
The template spec is a resource type named `Microsoft.Resources/templateSpecs`.
```azurecli az deployment group create \
- --name templateSpecRG \
+ --resource-group templateSpecRG \
--template-file "c:\Templates\azuredeploy.json" ```
To deploy a template spec, use the same deployment commands as you would use to
```azurecli az deployment group create \
- --name storageRG \
+ --resource-group storageRG \
--template-file "c:\Templates\storage.json" ```
Rather than creating a new template spec for the revised template, add a new ver
```azurecli az deployment group create \
- --name templateSpecRG \
+ --resource-group templateSpecRG \
--template-file "c:\Templates\azuredeploy.json" ```
Rather than creating a new template spec for the revised template, add a new ver
```azurecli az deployment group create \
- --name storageRG \
+ --resource-group storageRG \
--template-file "c:\Templates\storage.json" ```
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
Previously updated : 04/11/2022 Last updated : 06/28/2022
The code in this guide uses remote images referenced by URL. You may want to try
#### [REST](#tab/rest)
-When analyzing a local image, you put the binary image data in the HTTP request body. For a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+
+To analyze a local image, you'd put the binary image data in the HTTP request body.
#### [C#](#tab/csharp)
In your main class, save a reference to the URL of the image you want to analyze
[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclient) methods, such as **AnalyzeImageInStreamAsync**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ImageAnalysisQuickstart.cs) for scenarios involving local images.
++
#### [Java](#tab/java)

In your main class, save a reference to the URL of the image you want to analyze.

[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVision](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision) methods, such as **AnalyzeImage**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java) for scenarios involving local images.
+
#### [JavaScript](#tab/javascript)

In your main function, save a reference to the URL of the image you want to analyze.

[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/javascript/api/@azure/cognitiveservices-computervision/computervisionclient) methods, such as **describeImageInStream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/ComputerVision/ImageAnalysisQuickstart.js) for scenarios involving local images.
+
#### [Python](#tab/python)

Save a reference to the URL of the image you want to analyze.

[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClientOperationsMixin](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin) methods, such as **analyze_image_in_stream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/ComputerVision/ImageAnalysisQuickstart.py) for scenarios involving local images.
+
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
If you want to start consuming the output generated by the container, see the following options:
* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
-## Running Spatial Analysis with a recorded video file
-
-You can use Spatial Analysis with both recorded or live video. To use Spatial Analysis for recorded video, try recording a video file and save it as a mp4 file. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
- 1. Change **Secure transfer required** to **Disabled**
- 2. Change **Allow Blob public access** to **Enabled**
-
-Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
-
-Select on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
-
-Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
-
-The Spatial Analysis module will start consuming video file and will continuously auto replay as well.
--
-```json
-"zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "personcountgraph",
- "VIDEO_IS_LIVE": false,
- "VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
- }
- },
-
-```
-
## Troubleshooting

If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
In this article, you learned concepts and workflow for downloading, installing, and running the Spatial Analysis container. In summary:
* Spatial Analysis is a Linux container for Docker.
* Container images are downloaded from the Microsoft Container Registry.
* Container images run as IoT Modules in Azure IoT Edge.
-* How to configure the container and deploy it on a host machine.
+* Configure the container and deploy it on a host machine.
## Next steps
cognitive-services Spatial Analysis Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-local.md
+
+ Title: Run Spatial Analysis on a local video file
+
+description: Use this guide to learn how to run Spatial Analysis on a recorded local video.
+
+ Last updated : 06/28/2022
+
+# Run Spatial Analysis on a local video file
+
+You can use Spatial Analysis with either recorded or live video. Use this guide to learn how to run Spatial Analysis on a recorded local video.
+
+## Prerequisites
+
+* Set up a Spatial Analysis container by following the steps in [Set up the host machine and run the container](spatial-analysis-container.md).
+
+## Analyze a video file
+
+To use Spatial Analysis for recorded video, record a video file and save it as a .mp4 file. Then take the following steps:
+
+1. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
+ 1. Change **Secure transfer required** to **Disabled**
+ 1. Change **Allow Blob public access** to **Enabled**
+
+1. Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
+
+1. Select **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
+
+1. Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
+
+The Spatial Analysis module will start consuming the video file and will continuously replay it.
++
+```json
+"zonecrossing": {
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "personcountgraph",
+ "VIDEO_IS_LIVE": false,
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
+ }
+ },
+
+```
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Vision Studio is a UI tool that lets you explore, build, and integrate features from the Computer Vision service into your applications.
Language Studio provides you with a platform to try several service features, and see what they return in a visual manner. It also provides you with an easy-to-use experience to create custom projects and models to work on your data. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
-### Face transparency documentation
+### Responsible AI for Face
+
+#### Face transparency documentation
* The [transparency documentation](https://aka.ms/faceraidocs) provides guidance to help customers improve the accuracy and fairness of their systems. It covers incorporating meaningful human review to detect and resolve cases of misidentification or other failures, providing support to people who believe their results were incorrect, and identifying and addressing fluctuations in accuracy due to variations in operational conditions.
-### Retirement of sensitive attributes
+#### Retirement of sensitive attributes
* We have retired facial analysis capabilities that purport to infer emotional states and identity attributes, such as gender, age, smile, facial hair, hair, and makeup.
* Facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) will remain generally available and do not require an application.
-### Fairlearn package and Microsoft's Fairness Dashboard
+#### Fairlearn package and Microsoft's Fairness Dashboard
* [The open-source Fairlearn package and Microsoft's Fairness Dashboard](https://github.com/microsoft/responsible-ai-toolbox/tree/main/notebooks/cognitive-services-examples/face-verification) aim to help customers measure the fairness of Microsoft's facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.
-### Limited Access policy
+#### Limited Access policy
* As a part of aligning Face to the updated Responsible AI Standard, a new [Limited Access policy](https://aka.ms/AAh91ff) has been implemented for the Face API and Computer Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. See details on Limited Access for Face [here](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context) and for Computer Vision [here](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context).
+### Computer Vision 3.2-preview deprecation
+
+The preview versions of the 3.2 API are scheduled to be retired in December 2022. Customers are encouraged to use the generally available (GA) version of the API instead. Note the following changes when migrating from the 3.2-preview versions:
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
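For illustration, the optional _model-version_ parameter is passed as a query string value on the GA 3.2 path (Python, stdlib only; the endpoint host below is a placeholder):

```python
from urllib.parse import urlencode

# Placeholder resource endpoint; omitting model-version uses the latest model.
endpoint = "https://example.cognitiveservices.azure.com"
params = urlencode({"visualFeatures": "Tags", "model-version": "latest"})
url = f"{endpoint}/vision/v3.2/analyze?{params}"
print(url)
```

Successful responses then echo the model that was used in a `model-version` field.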
+
## May 2022

### OCR (Read) API model is generally available (GA)
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
To create a project, use the `spx csr project create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that creates a project:
-```azurecli-interactive
+```azurecli
spx csr project create --name "My Project" --description "My Project Description" --language "en-US"
```
The top-level `self` property in the response body is the project's URI. Use this URI to get details about the project.
For Speech CLI help with projects, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr project
```
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
To create an endpoint and deploy a model, use the `spx csr endpoint create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command to create an endpoint and deploy a model:
-```azurecli-interactive
+```azurecli
spx csr endpoint create --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
```
The top-level `self` property in the response body is the endpoint's URI. Use this URI to get details about the endpoint.
For Speech CLI help with endpoints, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr endpoint
```
To redeploy the custom endpoint with a new model, use the `spx csr endpoint update` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
-```azurecli-interactive
+```azurecli
spx csr endpoint update --endpoint YourEndpointId --model YourModelId
```
You should receive a response body in the following format:
For Speech CLI help with endpoints, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr endpoint
```
To get logs for an endpoint, use the `spx csr endpoint list` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that gets logs for an endpoint:
-```azurecli-interactive
+```azurecli
spx csr endpoint list --endpoint YourEndpointId
```
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that creates a test:
-```azurecli-interactive
+```azurecli
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description"
```
The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the evaluation.
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation
```
To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that gets test results:
-```azurecli-interactive
+```azurecli
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
```
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation
```
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that creates a test:
-```azurecli-interactive
+```azurecli
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description"
```
The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the evaluation.
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation
```
To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that gets test results:
-```azurecli-interactive
+```azurecli
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
```
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr evaluation
```
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
To get the training and transcription expiration dates for a base model, use the `spx csr model status` command.
Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
-```azurecli-interactive
+```azurecli
spx csr model status --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f
```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model
```
To get the transcription expiration date for your custom model, use the `spx csr model status` command.
Here's an example Speech CLI command to get the transcription expiration date for your custom model:
-```azurecli-interactive
+```azurecli
spx csr model status --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId
```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model
```
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
To create a model with datasets for training, use the `spx csr model create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that creates a model with datasets for training:
-```azurecli-interactive
+```azurecli
spx csr model create --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
```

> [!NOTE]
The top-level `self` property in the response body is the model's URI. Use this URI to get details about the model.
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model
```
To connect a model to a project, use the `spx csr model update` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that connects a model to a project:
-```azurecli-interactive
+```azurecli
spx csr model update --model YourModelId --project YourProjectId
```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr model
```
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions:
Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
-```azurecli-interactive
+```azurecli
spx csr dataset create --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
```
The top-level `self` property in the response body is the dataset's URI. Use this URI to get details about the dataset.
For Speech CLI help with datasets, run the following command:
-```azurecli-interactive
+```azurecli
spx help csr dataset
```
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
# Authenticate requests to Azure Cognitive Services
-Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or access token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
+Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or authentication token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
* Authenticate with a [single-service](#authenticate-with-a-single-service-subscription-key) or [multi-service](#authenticate-with-a-multi-service-subscription-key) subscription key
* Authenticate with a [token](#authenticate-with-an-access-token)
Some Azure Cognitive Services accept, and in some cases require, an access token.
> [!WARNING]
> The services that support access tokens may change over time. Check the API reference for a service before using this authentication method.
-Both single service and multi-service subscription keys can be exchanged for access tokens in JSON Web Token (JWT) format. Access tokens are valid for 10 minutes.
+Both single service and multi-service subscription keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes. They're stored in JSON Web Token (JWT) format and can be queried programmatically using the [JWT libraries](https://jwt.io/libraries).
Access tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
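As a sketch of the mechanics (Python, stdlib only; the token below is fabricated locally purely for illustration — a real token comes from the key-to-token exchange described above), a JWT's claims segment can be inspected to check its expiry:

```python
import base64
import json
import time

def _seg(obj: dict) -> str:
    """Base64url-encode a JSON object without padding (JWT segment style)."""
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode())
    return raw.rstrip(b"=").decode()

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT.

    A JWT is three base64url segments joined by dots; this reads only the
    middle segment, e.g. to check the `exp` expiry claim.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token with an expiry 10 minutes out and a dummy signature.
token = ".".join([_seg({"alg": "none", "typ": "JWT"}),
                  _seg({"exp": int(time.time()) + 600}),
                  "signature"])

headers = {"Authorization": f"Bearer {token}"}  # how the token is sent
print(jwt_payload(token))
```

Production code should use a proper JWT library rather than this manual decoding, and must never skip signature verification when the token's contents are trusted for a decision.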
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 05/27/2022 Last updated : 06/28/2022
Currently, the following features are available to be used asynchronously:
When you send asynchronous requests, you will incur charges based on number of text records you include in your request, for each feature use. For example, if you send a text record for sentiment analysis and NER, it will be counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-## Send asynchronous API requests using the REST API
+## Submit an asynchronous job using the REST API
-To create an asynchronous API request, review the [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Analyze) for the JSON body you'll send in your request.
+To submit an asynchronous job, review the [reference documentation](/rest/api/language/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
1. Add your documents to the `analysisInput` object.
-1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisTasks` object.
+1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object.
1. You can optionally:
- 1. Choose a specific version of the model used on your data with the `model-version` value.
+ 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
1. Include additional Language Service features in the `tasks` object, to be performed on your data at the same time.
-Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the `/analyze` endpoint:
+Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the job creation endpoint. For example:
```http
-https://your-endpoint/text/analytics/v3.1/analyze
+POST https://your-endpoint.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01
```

A successful call will return a 202 response code. The `operation-location` in the response header will be the URL you will use to retrieve the API results. The value will look similar to the following URL:

```http
-https://your-endpoint.cognitiveservices.azure.com/text/analytics/v3.2-preview.1/analyze/jobs/12345678-1234-1234-1234-12345678
+GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01
```
-To [retrieve the results](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/AnalyzeStatus) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
+To [get the status and retrieve the results](/rest/api/language/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
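As an illustrative sketch of the submission body (Python, stdlib only; the endpoint and key are placeholders, and the exact `kind` value and field names should be checked against the reference documentation linked above), a sentiment analysis job could be assembled like this:

```python
import json

# Placeholder endpoint; the path follows the job creation URL shown above.
endpoint = "https://your-endpoint.cognitiveservices.azure.com"
submit_url = f"{endpoint}/language/analyze-text/jobs?api-version=2022-05-01"

body = {
    # Documents to analyze go under analysisInput.
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "The rooms were beautiful."}
        ]
    },
    # One entry per operation to perform; "kind" is an assumed value here.
    "tasks": [
        {"kind": "SentimentAnalysis",
         "taskName": "sentiment-1",
         "parameters": {}}
    ],
}

headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY",
    "Content-Type": "application/json",
}
print(json.dumps(body, indent=2))
```

POSTing this body to `submit_url` with the headers above starts the job; the `operation-location` response header then gives the polling URL.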
## Send asynchronous API requests using the client library
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/27/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/27/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
  * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
  * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
+* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and [reference documentation](/rest/api/language/) for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+ * [Entity linking](./entity-linking/quickstart.md?pivots=rest-api)
+ * [Language detection](./language-detection/quickstart.md?pivots=rest-api)
+ * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
+ * [Named entity recognition](./named-entity-recognition/quickstart.md?pivots=rest-api)
+ * [PII detection](./personally-identifiable-information/quickstart.md?pivots=rest-api)
+ * [Sentiment analysis and opinion mining](./sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+ * [Text analytics for health](./text-analytics-for-health/quickstart.md?pivots=rest-api)
## May 2022
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no one, not even Microsoft, is "above" the ledger.
-As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
+As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://ccf.dev) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
Azure confidential ledger offers unique data integrity advantages, including immutability, tamper-proofing, and append-only operations. These features, which ensure that all records are kept intact, are ideal when critical metadata records must not be modified, such as for regulatory compliance and archival purposes.
The confidential ledger is exposed through REST APIs which can be integrated int
## Ledger security
-This section defines the security protections for the ledger. The ledger APIs use client certificate-based authentication. Currently, the ledger supports certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and also role-based access (for example, owner, reader, and contributor).
+The ledger APIs support certificate-based authentication with owner roles, as well as Azure Active Directory (AAD) based authentication and role-based access (for example, owner, reader, and contributor).
-The data to the ledger is sent through TLS 1.2 connection and the TLS 1.2 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
+Data is sent to the ledger through a TLS 1.3 connection that terminates inside the hardware-backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
### Ledger storage
The Functional APIs allow direct interaction with your instantiated confidential
## Constraints -- Once a confidential ledger is created, you cannot change the ledger type.-- Azure confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes.
+- Once a confidential ledger is created, you cannot change the ledger type (private or public).
- Azure confidential ledger deletion leads to a "hard delete", so your data will not be recoverable after deletion.
- Azure confidential ledger names must be globally unique. Ledgers with the same name, irrespective of their type, are not allowed.
The Functional APIs allow direct interaction with your instantiated confidential
| Term | Definition |
|--|--|
| ACL | Azure confidential ledger |
-| Ledger | An immutable append record of transactions (also known as a Blockchain) |
-| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the ledger. |
-| Global commit | A confirmation that transaction was globally committed and is part of the ledger. |
+| Ledger | An immutable append-only record of transactions (also known as a Blockchain) |
+| Commit | A confirmation that a transaction has been appended to the ledger. |
| Receipt | Proof that the transaction was processed by the ledger. |

## Next steps
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
A container app has access to different types of storage. A single app can take
| Storage type | Description | Usage examples | |--|--|--|
-| [Container file system](#container-file-system) | Temporary storage scoped to the environment | Writing a local app cache. |
+| [Container file system](#container-file-system) | Temporary storage scoped to the local container | Writing a local app cache. |
| [Temporary storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. | | [Azure Files](#azure-files) | Permanent storage | Writing files to a file share to make data accessible by other systems. |
The following ARM template snippets demonstrate how to add an Azure Files share
See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example.
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
Last updated 05/07/2021
When you use an Azure container registry as part of a development workflow, the registry can quickly fill up with images or other artifacts that aren't needed after a short period. You might want to delete all tags that are older than a certain duration or match a specified name filter. To delete multiple artifacts quickly, this article introduces the `acr purge` command you can run as an on-demand or [scheduled](container-registry-tasks-scheduled.md) ACR Task.
-The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.4`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub. `acr purge` is currently in preview.
+The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.5`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub. `acr purge` is currently in preview.
You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the ACR task examples in this article. If you'd like to use it locally, version 2.0.76 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
The `acr purge` container command deletes images by tag in a repository that mat
At a minimum, specify the following when you run `acr purge`:
-* `--filter` - A repository and a *regular expression* to filter tags in the repository. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, and `--filter "hello-world:^1.*"` matches tags beginning with `1`. Pass multiple `--filter` parameters to purge multiple repositories.
+* `--filter` - A repository name *regular expression* and a tag name *regular expression* to filter images in the registry. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, `--filter "hello-world:^1.*"` matches tags beginning with `1` in the `hello-world` repository, and `--filter ".*/cache:.*"` matches all tags in the repositories ending in `/cache`. You can also pass multiple `--filter` parameters.
* `--ago` - A Go-style [duration string](https://go.dev/pkg/time/) to indicate a duration beyond which images are deleted. The duration consists of a sequence of one or more decimal numbers, each with a unit suffix. Valid time units include "d" for days, "h" for hours, and "m" for minutes. For example, `--ago 2d3h6m` selects all filtered images last modified more than 2 days, 3 hours, and 6 minutes ago, and `--ago 1.5h` selects images last modified more than 1.5 hours ago.

`acr purge` supports several optional parameters. The following two are used in examples in this article:
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md
Each of the following aliases points to a stable image in Microsoft Container Re
| Alias | Image |
| -- | -- |
-| `acr` | `mcr.microsoft.com/acr/acr-cli:0.4` |
-| `az` | `mcr.microsoft.com/acr/azure-cli:f75cfff` |
-| `bash` | `mcr.microsoft.com/acr/bash:f75cfff` |
-| `curl` | `mcr.microsoft.com/acr/curl:f75cfff` |
+| `acr` | `mcr.microsoft.com/acr/acr-cli:0.5` |
+| `az` | `mcr.microsoft.com/acr/azure-cli:7ee1d7f` |
+| `bash` | `mcr.microsoft.com/acr/bash:7ee1d7f` |
+| `curl` | `mcr.microsoft.com/acr/curl:7ee1d7f` |
The following example task uses several aliases to [purge](container-registry-auto-purge.md) image tags older than 7 days in the repo `samples/hello-world` in the run registry:
container-registry Tutorial Deploy Connected Registry Nested Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md
Overall, the lower layer deployment file is similar to the top layer deployment
"modules": {
  "connected-registry": {
    "settings": {
- "image": "$upstream:8000/acr/connected-registry:0.5.0",
+ "image": "$upstream:8000/acr/connected-registry:0.7.0",
      "createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/azureuser/connected-registry:/var/acr/data\"]}}"
    },
    "type": "docker",
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
Burst capacity applies only to Azure Cosmos DB accounts using provisioned throug
## How burst capacity works

> [!NOTE]
-> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity.
+> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is not guaranteed. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity. Before enabling burst capacity, it is also recommended to evaluate whether your partition layout can be [merged](merge.md) to permanently give more RU/s per physical partition without relying on burst capacity.
Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests that are consumed beyond the provisioned 100 RU/s would have been rate limited (429).
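The arithmetic in this example can be written out directly (a sketch of the model described above, not SDK code):

```go
package main

import "fmt"

func main() {
	// Model from the example above: an idle partition accumulates burst
	// capacity at its provisioned rate, capped at 5 minutes (300 seconds).
	provisioned := 100.0   // provisioned RU/s on the physical partition
	idleSeconds := 300.0   // maximum accumulation window
	maxBurstRate := 3000.0 // maximum consumption rate in this example

	accumulated := provisioned * idleSeconds   // 30000 RU of burst capacity
	burstSeconds := accumulated / maxBurstRate // sustainable burst duration

	fmt.Printf("accumulated %.0f RU; burst lasts %.0f seconds\n", accumulated, burstSeconds)
}
```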
After the 10 seconds is over, the burst capacity has been used up. If the worklo
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+
+Before submitting your request:
+- Ensure that you have at least one Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.
+
+
## Limitations
To get started using burst capacity, enroll in the preview by submitting a reque
To enroll in the preview, your Cosmos account must meet all the following criteria: - Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts. - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, Table API, or API for MongoDB.
+ - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, or API for MongoDB.
- Your Cosmos account isn't using any unsupported connectors
  - Azure Data Factory
  - Azure Stream Analytics
  - Logic Apps
  - Azure Functions
  - Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+  - Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET V3 SDK v3.27.0 or higher
### SDK requirements (SQL and Table API only)

#### SQL API
For Table API accounts, burst capacity is supported only when using the latest v
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET V3 SDK v3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
The following links show how to update containers analytical TTL by using PowerS
* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection)
* [Azure Cosmos DB SQL API](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer)
-## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a container
+## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a SQL API container
-Analytical store can be disabled in SQL API containers using `Update-AzCosmosDBSqlContainer` PowerShell command, by updating `-AnalyticalStorageTtl` (analytical Time-To-Live) to `0`. Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell.
+
+> [!NOTE]
+> Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+
+> [!NOTE]
+> Disabling analytical store is currently not available for MongoDB API collections.
++
+### Azure CLI
+
+Set the `--analytical-storage-ttl` parameter to 0 using the `az cosmosdb sql container update` Azure CLI command.
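A minimal sketch of the command (account, resource group, database, and container names are placeholders):

```azurecli-interactive
az cosmosdb sql container update \
    --account-name <account-name> \
    --resource-group <resource-group> \
    --database-name <database-name> \
    --name <container-name> \
    --analytical-storage-ttl 0
```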
+
+### PowerShell
+
+Set the `-AnalyticalStorageTtl` parameter to 0 using the `Update-AzCosmosDBSqlContainer` PowerShell command.
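A minimal sketch of the cmdlet (account, resource group, database, and container names are placeholders):

```azurepowershell-interactive
Update-AzCosmosDBSqlContainer `
    -ResourceGroupName <resource-group> `
    -AccountName <account-name> `
    -DatabaseName <database-name> `
    -Name <container-name> `
    -AnalyticalStorageTtl 0
```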
-Currently you can't be disabled in MongoDB API collections.
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Last updated 05/09/2022
# Merge partitions in Azure Cosmos DB (preview)

[!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
+Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container in place. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
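To see why fewer partitions helps, the following sketch works through hypothetical numbers (not taken from the article): provisioned RU/s is divided evenly across physical partitions, so merging raises each partition's share.

```go
package main

import "fmt"

func main() {
	// Hypothetical container: 6000 RU/s spread across physical partitions.
	totalRUs := 6000.0
	partitionsBefore := 6.0
	partitionsAfter := 3.0 // after merging fragmented partitions

	fmt.Printf("RU/s per partition before merge: %.0f\n", totalRUs/partitionsBefore)
	fmt.Printf("RU/s per partition after merge:  %.0f\n", totalRUs/partitionsAfter)
}
```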
## Getting started
-To get started using merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using partition merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+
+Before submitting your request:
+- Ensure that you have at least one Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Partition Merge**. Run the **Check eligibility for partition merge preview** diagnostic.
+
+
### Merging physical partitions
To enroll in the preview, your Cosmos account must meet all the following criter
 * Logic Apps
 * Azure Functions
 * Azure Search
+ * Azure Cosmos DB Spark connector
+ * Azure Cosmos DB data migration tool
+ * Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET V3 SDK v3.27.0 or higher
### Account resources and configuration

* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
Support for other SDKs is planned for the future.
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET V3 SDK v3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Use the following steps to migrate your account from periodic backup to continuo
Connect-AzAccount ```
- 1. Migrate your account from periodic to continuous backup mode with ``continuous30days`` tier or ``continuous7days`` days. If a tier value isn't provided, it's assumed to be ``continous30days``:
+ 1. Migrate your account from periodic to continuous backup mode with the ``continuous30days`` or ``continuous7days`` tier. If a tier value isn't provided, it's assumed to be ``continuous30days``:
```azurepowershell-interactive
Update-AzCosmosDBAccount `
Use the following steps to migrate your account from periodic backup to continuo
az login ```
-1. Migrate the account to ``continuous30days`` or ``continuous7days`` tier. If tier value isn't provided, it's assumed to be ``continous30days``:
+1. Migrate the account to the ``continuous30days`` or ``continuous7days`` tier. If a tier value isn't provided, it's assumed to be ``continuous30days``:
```azurecli-interactive
az cosmosdb update -n <myaccount> -g <myresourcegroup> --backup-policy-type continuous
az deployment group create -g <ResourceGroup> --template-file <ProvisionTemplate
## Change Continuous Mode tiers
-You can switch between ``Continous30Days`` and ``Continous7Days`` in Azure PowerShell, Azure CLI or the Azure portal.
+You can switch between ``Continuous30Days`` and ``Continuous7Days`` in Azure PowerShell, Azure CLI, or the Azure portal.
The following Azure CLI command illustrates switching an existing account to ``Continuous7Days``:
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
New-AzCosmosDBAccount `
### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
-The following cmdlet is an example of continuous backup account configured with the ``Continous30days`` tier:
+The following cmdlet is an example of a continuous backup account configured with the ``Continuous30days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
New-AzCosmosDBAccount `
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of an account with continuous backup policy configured with the ``Continous30days`` tier:
+The following cmdlet is an example of an account with continuous backup policy configured with the ``Continuous30days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
if err != nil {
// Create database client
databaseClient, err := client.NewDatabase("<databaseName>")
if err != nil {
- log.fatal("Failed to create database client:", err)
+ log.Fatal("Failed to create database client:", err)
}
// Create container client
containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
if err != nil {
- log.fatal("Failed to create a container client:", err)
+ log.Fatal("Failed to create a container client:", err)
}
```
-**Create a Cosmos database**
+**Create a Cosmos DB database**
```go
-databaseProperties := azcosmos.DatabaseProperties{ID: "<databaseName>"}
-
-databaseResp, err := client.CreateDatabase(context.TODO(), databaseProperties, nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createDatabase (client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ // sets the name of the database
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // creating the database
+ ctx := context.TODO()
+	databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+	return nil
}
```

**Create a container**

```go
-database, err := client.NewDatabase("<databaseName>") //returns struct that represents a database.
-if err != nil {
- log.Fatal(err)
-}
-
-properties := azcosmos.ContainerProperties{
- ID: "ToDoItems",
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{"/category"},
- },
-}
-
-resp, err := database.CreateContainer(context.TODO(), properties, nil)
-if err != nil {
- log.Fatal(err)
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createContainer (client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName) // returns a struct that represents a database
+ if err != nil {
+ log.Fatal("Failed to create a database client:", err)
+ }
+
+ // Setting container properties
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // Setting container options
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+ if err != nil {
+ log.Fatal(err)
+
+ }
+ log.Printf("Container [%v] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+
+ return nil
}
```

**Create an item**

```go
-container, err := client.NewContainer("<databaseName>", "<containerName>")
-if err != nil {
- log.Fatal(err)
-}
-
-pk := azcosmos.NewPartitionKeyString("personal") //specifies the value of the partition key
-
-item := map[string]interface{}{
- "id": "1",
- "category": "personal",
- "name": "groceries",
- "description": "Pick up apples and strawberries",
- "isComplete": false,
-}
-
-marshalled, err := json.Marshal(item)
-if err != nil {
- log.Fatal(err)
-}
-
-itemResponse, err := container.CreateItem(context.TODO(), pk, marshalled, nil)
-if err != nil {
- log.Fatal(err)
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"log"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+/*
+ item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+*/
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+	// Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting item options upon creating ie. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ if err != nil {
+ return err
+ }
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
} ``` **Read an item** ```go
-getResponse, err := container.ReadItem(context.TODO(), pk, "1", nil)
-if err != nil {
- log.Fatal(err)
-}
-
-var getResponseBody map[string]interface{}
-err = json.Unmarshal(getResponse.Value, &getResponseBody)
-if err != nil {
- log.Fatal(err)
-}
-
-fmt.Println("Read item with Id 1:")
-
-for key, value := range getResponseBody {
- fmt.Printf("%s: %v\n", key, value)
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"log"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+		return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+	// Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
}
```

**Delete an item**

```go
-delResponse, err := container.DeleteItem(context.TODO(), pk, "1", nil)
-if err != nil {
- log.Fatal(err)
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+		return fmt.Errorf("failed to create a container client: %s", err)
+ }
+	// Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
}
```
Get your Azure Cosmos account credentials by following these steps:
After you've copied the **URI** and **PRIMARY KEY** of your account, save them to a new environment variable on the local machine running the application.
-Use the values copied from the Azure port to set the following environment variables:
+Use the values copied from the Azure portal to set the following environment variables:
# [Bash](#tab/bash)

```bash
-export AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
-export AZURE_COSMOS_PRIMARY_KEY=<Your_COSMOS_PRIMARY_KEY>
+export AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+export AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
```

# [PowerShell](#tab/powershell)

```powershell
-$env:AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
-$env:AZURE_COSMOS_PRIMARY_KEY=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
```
Create a new Go module by running the following command:
```bash
go mod init azcosmos
```
-Create a new file named `main.go` and copy the desired code from the sample sections above.
+```go
+
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func main() {
+ endpoint := os.Getenv("AZURE_COSMOS_ENDPOINT")
+ if endpoint == "" {
+ log.Fatal("AZURE_COSMOS_ENDPOINT could not be found")
+ }
+
+ key := os.Getenv("AZURE_COSMOS_KEY")
+ if key == "" {
+ log.Fatal("AZURE_COSMOS_KEY could not be found")
+ }
+
+ var databaseName = "adventureworks"
+ var containerName = "customer"
+ var partitionKey = "/customerId"
+
+ item := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+
+ cred, err := azcosmos.NewKeyCredential(key)
+ if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+ }
+
+ // Create a CosmosDB client
+ client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+ if err != nil {
+ log.Fatal("Failed to create cosmos db client: ", err)
+ }
+
+ err = createDatabase(client, databaseName)
+ if err != nil {
+ log.Printf("createDatabase failed: %s\n", err)
+ }
+
+ err = createContainer(client, databaseName, containerName, partitionKey)
+ if err != nil {
+ log.Printf("createContainer failed: %s\n", err)
+ }
+
+ err = createItem(client, databaseName, containerName, item.CustomerId, item)
+ if err != nil {
+ log.Printf("createItem failed: %s\n", err)
+ }
+
+ err = readItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("readItem failed: %s\n", err)
+ }
+
+ err = deleteItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("deleteItem failed: %s\n", err)
+ }
+}
+
+func createDatabase(client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // This is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+ ctx := context.TODO()
+ databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Database [%s] already exists\n", databaseName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+ }
+ return nil
+}
+
+func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName)
+ if err != nil {
+ return err
+ }
+
+ // creating a container
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ // setting options upon container creation
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Container [%s] already exists\n", containerName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Container [%s] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+ }
+ return nil
+}
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+
+/* item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ CreationDate: "2014-02-25T00:00:00",
+ }
+*/
+ // create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+    // specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting the item options upon creating ie. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ switch {
+ case errorIs409(err):
+        log.Printf("Item with partition key value %s already exists\n", pk)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+ }
+
+ return nil
+}
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
+}
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+        return fmt.Errorf("failed to create a container client: %s", err)
+ }
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
+}
+
+```
+Create a new file named `main.go` and copy the code from the sample section above.
Run the following command to execute the app:
```bash
go run main.go
```

## Clean up resources

[!INCLUDE [cosmosdb-delete-resource-group](../includes/cosmos-db-delete-resource-group.md)]
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/distribute-throughput-across-partitions.md
In general, use of this feature is recommended when both of the following are true:
- You're consistently seeing greater than a 1-5% overall rate of 429 responses
- You have a consistent, predictable hot partition
-If you aren't seeing 429 responses and your end to end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements.
+If you aren't seeing 429 responses and your end-to-end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload with consistent traffic and occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements. If you have a small amount of RU/s per partition, you can also use [partition merge (preview)](../merge.md) to reduce the number of partitions and ensure more RU/s per partition for the same total provisioned throughput.
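By default, Azure Cosmos DB spreads provisioned throughput evenly, so each physical partition receives the total RU/s divided by the partition count. A stdlib-only sketch of that arithmetic (the numbers are illustrative, not from the article):

```go
package main

import "fmt"

// ruPerPartition returns the default even split of provisioned
// throughput across physical partitions.
func ruPerPartition(totalRU, partitions int) int {
	return totalRU / partitions
}

func main() {
	// e.g. 6,000 RU/s provisioned across 3 physical partitions
	fmt.Println(ruPerPartition(6000, 3)) // prints 2000
}
```

Throughput redistribution across partitions lets you deviate from this even split when one partition is consistently hotter than the others.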
## Getting started
-To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
-- Before submitting your request, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-- The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+Before submitting your request:
+- Ensure that you have at least one Azure Cosmos DB account in the subscription. This can be an existing account or a new one you've created to try out the preview feature. If the subscription has no accounts when the Azure Cosmos DB team receives your request, the request will be declined, because there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic.
+
## Example scenario
To enroll in the preview, your Cosmos account must meet all the following criteria:
 - Logic Apps
 - Azure Functions
 - Azure Search
-
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET v3 SDK, version 3.27.0 or higher
+
### SDK requirements (SQL API only)

Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
Support for other SDKs is planned for the future.
If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory
-* Azure Stream Analytics
-* Logic Apps
-* Azure Functions
-* Azure Search
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any third-party library or tool that depends on an Azure Cosmos DB SDK other than the .NET v3 SDK, version 3.27.0 or higher
-Support for these connectors is planned for the future.
+<sup>1</sup>Support for these connectors is planned for the future.
## Next steps
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 06/14/2022 Last updated : 06/29/2022
You can request billing ownership of products for the subscription types listed below.
- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>
- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)<sup>1</sup>
- [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
- - Transfers are only supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer.
- - Transfers aren't supported for indirect EA customers. An indirect EA is one where a customer signs an agreement with a Microsoft partner.
+ - Subscription and reservation transfer are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer.
+ - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner.
- [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)
- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup>
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
Previously updated : 10/13/2021 Last updated : 06/29/2022
This article provides suggestions to troubleshoot common problems with the Azure
| If the error message contains the string "SqlException", SQL Database throws an error indicating some specific operation failed. | If the SQL error is not clear, try to alter the database to the latest compatibility level '150'. It can throw the latest version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat). <br/> For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. |
| If the error message contains the string "PdwManagedToNativeInteropException", it's usually caused by a mismatch between the source and sink column sizes. | Check the size of both the source and sink columns. For further help, contact Azure SQL support. |
| If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md). |
+ | If the error message contains "Execution Timeout Expired", it's usually caused by query timeout. | Configure **Query timeout** in the source and **Write batch timeout** in the sink to increase timeout. |
## Error code: SqlUnauthorizedAccess
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 06/09/2022 Last updated : 06/29/2022
All the linked service types are supported for parameterization.
- FTP
- Generic HTTP
- Generic REST
+- Google AdWords
- MySQL
- OData
- Oracle
databox-online Azure Stack Edge Gpu Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-shares.md
Previously updated : 05/03/2022 Last updated : 06/29/2022

# Use Azure portal to manage shares on your Azure Stack Edge Pro
Do the following steps in the Azure portal to create a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
- ![Select add share](media/azure-stack-edge-gpu-manage-shares/add-share-1.png)
+ ![Screenshot of selecting the Add share option on the command bar.](media/azure-stack-edge-gpu-manage-shares/add-share-1.png)
2. In **Add Share**, specify the share settings. Provide a unique name for your share.
Do the following steps in the Azure portal to create a share.
6. This step depends on whether you're creating an SMB or an NFS share.

    - **If creating an SMB share** - In the **All privilege local user** field, choose from **Create new** or **Use existing**. If creating a new local user, provide the **username**, **password**, and then confirm password. This assigns the permissions to the local user. After you have assigned the permissions here, you can then use File Explorer to modify these permissions.
- ![Add SMB share](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
+ ![Screenshot of the Add SMB share page.](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
    If you check allow only read operations for this share data, you can specify read-only users.

    - **If creating an NFS share** - You need to supply the **IP addresses of the allowed clients** that can access the share.
- ![Add NFS share](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
+ ![Screenshot of the Add NFS share page.](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
7. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the share is automatically mounted after it's created. When this option is selected, the Edge module can also use the compute with the local mount point.
Do the following steps in the Azure portal to create a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
- ![Select add share 2](media/azure-stack-edge-gpu-manage-shares/add-local-share-1.png)
+ ![Screenshot of the Select add share 2 option on the command bar.](media/azure-stack-edge-gpu-manage-shares/add-local-share-1.png)
2. In **Add Share**, specify the share settings. Provide a unique name for your share.
Do the following steps in the Azure portal to create a share.
7. Select **Create**.
- ![Create local share](media/azure-stack-edge-gpu-manage-shares/add-local-share-2.png)
+ ![Screenshot of the Create local share with the Configure as Edge local share option.](media/azure-stack-edge-gpu-manage-shares/add-local-share-2.png)
You see a notification that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
- ![View updates Shares blade](media/azure-stack-edge-gpu-manage-shares/add-local-share-3.png)
+ ![Screenshot of the View updates Shares blade.](media/azure-stack-edge-gpu-manage-shares/add-local-share-3.png)
Select the share to view the local mountpoint for the Edge compute modules for this share.
- ![View local share details](media/azure-stack-edge-gpu-manage-shares/add-local-share-4.png)
+ ![Screenshot of the View local share details.](media/azure-stack-edge-gpu-manage-shares/add-local-share-4.png)
## Mount a share
If you created a share before you configured compute on your Azure Stack Edge Pr
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
- ![Select share](media/azure-stack-edge-gpu-manage-shares/mount-share-1.png)
+ ![Screenshot of the Select share to mount.](media/azure-stack-edge-gpu-manage-shares/mount-share-1.png)
2. Select **Mount**.
- ![Select mount](media/azure-stack-edge-gpu-manage-shares/mount-share-2.png)
+ ![Screenshot of the Select mount option in the command bar.](media/azure-stack-edge-gpu-manage-shares/mount-share-2.png)
3. When prompted for confirmation, select **Yes**. This will mount the share.
- ![Confirm mount](media/azure-stack-edge-gpu-manage-shares/mount-share-3.png)
+ ![Screenshot of the Confirm mount dialog.](media/azure-stack-edge-gpu-manage-shares/mount-share-3.png)
4. After the share is mounted, go to the list of shares. You'll see that the **Used for compute** column shows the share status as **Enabled**.
- ![Share mounted](media/azure-stack-edge-gpu-manage-shares/mount-share-4.png)
+ ![Screenshot of the Share mounted confirmation.](media/azure-stack-edge-gpu-manage-shares/mount-share-4.png)
5. Select the share again to view the local mountpoint for the share. Edge compute module uses this local mountpoint for the share.
- ![Local mountpoint for the share](media/azure-stack-edge-gpu-manage-shares/mount-share-5.png)
+ ![Screenshot of the local mount point for the share.](media/azure-stack-edge-gpu-manage-shares/mount-share-5.png)
## Unmount a share
Do the following steps in the Azure portal to unmount a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share that you want to unmount. You want to make sure that the share you unmount isn't used by any modules. If the share is used by a module, then you'll see issues with the corresponding module.
- ![Select share 2](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
+ ![Screenshot of select share to unmount.](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
2. Select **Unmount**.
- ![Select unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-2.png)
+ ![Screenshot of selecting the unmount option from the command bar.](media/azure-stack-edge-gpu-manage-shares/unmount-share-2.png)
3. When prompted for confirmation, select **Yes**. This will unmount the share.
- ![Confirm unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-3.png)
+ ![Screenshot of confirming the unmount operation.](media/azure-stack-edge-gpu-manage-shares/unmount-share-3.png)
4. After the share is unmounted, go to the list of shares. You'll see that **Used for compute** column shows the share status as **Disabled**.
- ![Share unmounted](media/azure-stack-edge-gpu-manage-shares/unmount-share-4.png)
+ ![Screenshot of the share unmounted confirmation.](media/azure-stack-edge-gpu-manage-shares/unmount-share-4.png)
## Delete a share
Use the following steps in the Azure portal to delete a share.
1. From the list of shares, select and click the share that you want to delete.
- ![Screenshot of select share 3](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
+ ![Screenshot of select share to delete.](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
2. Select **Delete**.
- ![Screenshot of select delete](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
+ ![Screenshot of the delete option confirmation.](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
3. When prompted for confirmation, select **Yes**.
- ![Confirm delete](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
+ ![Screenshot of the deleted share confirmation.](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
The list of shares updates to reflect the deletion.
Do the following steps in the Azure portal to refresh a share.
1. In the Azure portal, go to **Shares**. Select and click the share that you want to refresh.
- ![Select share 4](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
+ ![Screenshot of the share to refresh.](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
2. Select **Refresh**.
- ![Screenshot of select refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
+ ![Screenshot of select refresh data.](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
3. When prompted for confirmation, select **Yes**. A job starts to refresh the contents of the on-premises share.
- ![Confirm refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
+ ![Screenshot of confirmation to refresh data for the share.](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
4. While the refresh is in progress, the refresh option is grayed out in the context menu. Select the job notification to view the refresh job status.

5. The time to refresh depends on the number of files in the Azure container and the files on the device. Once the refresh has successfully completed, the share timestamp is updated. Even if the refresh has partial failures, the operation is considered successful and the timestamp is updated. The refresh error logs are also updated.
-![Updated timestamp](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
+ ![Screenshot of the updated timestamp for the refresh operation.](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
-If there's a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
+ If there's a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
## Sync pinned files
To automatically sync up pinned files, do the following steps in the Azure porta
2. Go to **Containers** and select **+ Container** to create a container. Name this container as *newcontainer*. Set the **Public access level** to Container.
- ![Automated sync for pinned files 1](media/azure-stack-edge-gpu-manage-shares/image-1.png)
+ ![Screenshot of the automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-1.png)
3. Select the container name and set the following metadata:

    - Name = "Pinned"
    - Value = "True"
- ![Automated sync for pinned files 2](media/azure-stack-edge-gpu-manage-shares/image-2.png)
+ ![Screenshot of metadata options for automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-2.png)
4. Create a new share on your device. Map it to the pinned container by choosing the existing container option. Mark the share as read only. Create a new user and specify the user name and a corresponding password for this share.
- ![Automated sync for pinned files 3](media/azure-stack-edge-gpu-manage-shares/image-3.png)
+ ![Screenshot of new share mapping using an existing container for automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-3.png)
5. From the Azure portal, browse to the container that you created. Upload the file that you want to be pinned into the new container, which has the metadata set to pinned.

6. Select **Refresh data** in the Azure portal for the device to download the pinning policy for that particular Azure Storage container.
- ![Automated sync for pinned files 4](media/azure-stack-edge-gpu-manage-shares/image-4.png)
+ ![Screenshot of the Refresh data option in automated sync for pinned files.](media/azure-stack-edge-gpu-manage-shares/image-4.png)
7. Access the new share that was created on the device. The file that was uploaded to the storage account is now downloaded to the local share.
Do the following steps in the Azure portal to sync your storage access key.
1. Go to **Overview** in your resource. From the list of shares, select a share associated with the storage account that you need to sync.
- ![Select share with relevant storage account](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
+ ![Screenshot of selecting a share with relevant storage account.](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
2. Select **Sync storage key**. Select **Yes** when prompted for confirmation.
- ![Select Sync storage key](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
+ ![Screenshot of selecting a Sync storage key.](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
3. Exit out of the dialog once the sync is complete.
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
Previously updated : 03/04/2022 Last updated : 05/03/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
If you have an existing Azure Stack Edge resource to manage your physical device
### Create an order
-You can use the Azure Edge Hardware Center to explore and order a variety of hardware from the Azure hybrid portfolio including Azure Stack Edge Pro 2 devices.
+You can use the Azure Edge Hardware Center to explore and order various hardware from the Azure hybrid portfolio including Azure Stack Edge Pro 2 devices.
When you place an order through the Azure Edge Hardware Center, you can order multiple devices, to be shipped to more than one address, and you can reuse ship to addresses from other orders.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 06/22/2022 Last updated : 06/29/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Access from an unusual location**<br>(CosmosDB_GeoAnomaly) | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low | | **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium | | **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
+| **Suspicious extraction of Azure Cosmos DB account keys**<br>(AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this source isn't legitimate, this may be a high-impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | High |
| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium | | **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low | -- ## <a name="alerts-azurenetlayer"></a>Alerts for Azure network layer [Further details and notes](other-threat-protections.md#network-layer)
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
You can use this information to quickly remediate security issues and improve th
Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). > [!TIP]
-> For a comprehensive list of all Defender for Storage alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
+> For a comprehensive list of all Defender for Azure Cosmos DB alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
## Alert types
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 06/15/2022 Last updated : 06/28/2022 # Enable Microsoft Defender for Containers
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).
-You can learn more about from the product manager by watching [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md).
-
-You can also watch [Protect Containers in GCP with Defender for Containers](episode-ten.md) to learn how to protect your containers.
+You can learn more by watching these videos from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md)
+- [Protect Containers in GCP with Defender for Containers](episode-ten.md)
::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke" > [!NOTE]
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 06/15/2022 Last updated : 06/28/2022 # Overview of Microsoft Defender for Containers
Microsoft Defender for Containers is the cloud-native solution for securing your
[How does Defender for Containers work in each Kubernetes platform?](defender-for-containers-architecture.md)
-You can learn more from the product manager about Microsoft Defender for Containers by watching [Microsoft Defender for Containers](episode-three.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Containers](episode-three.md)
## Microsoft Defender for Containers plan availability
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 06/26/2022 Last updated : 06/29/2022 # Overview of Microsoft Defender for Servers
To protect machines in hybrid and multicloud environments, Defender for Cloud us
> [!TIP] > For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
-You can learn more from the product manager about Defender for Servers, by watching [Microsoft Defender for Servers](episode-five.md). You can also watch [Enhanced workload protection features in Defender for Servers](episode-twelve.md), or learn how to [deploy in Defender for Servers in AWS and GCP](episode-fourteen.md).
+You can learn more by watching these videos from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Servers](episode-five.md)
+- [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
+- [Deploy Defender for Servers in AWS and GCP](episode-fourteen.md)
## What are the Microsoft Defender for server plans?
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/16/2022 Last updated : 06/29/2022 # Overview of Microsoft Defender for Storage
Analyzed telemetry of Azure Blob Storage includes operation types such as **Get
Defender for Storage doesn't access the Storage account data and has no impact on its performance.
-You can learn more about from the product manager by watching [Defender for Storage in the field](episode-thirteen.md)
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Defender for Storage in the field](episode-thirteen.md)
## Availability
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines Previously updated : 06/15/2022 Last updated : 06/29/2022 # Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
For a quick overview of threat and vulnerability management, watch this video:
> [!TIP] > As well as alerting you to vulnerabilities, threat and vulnerability management provides additional functionality for Defender for Cloud's asset inventory tool. Learn more in [Software inventory](asset-inventory.md#access-a-software-inventory).
-You can also learn more from the product manager about security posture by watching [Microsoft Defender for Servers](episode-five.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Servers](episode-five.md)
## Availability
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Previously updated : 06/12/2022 Last updated : 06/29/2022
Defender for Cloud is offered in two modes:
- [If a Log Analytics agent reports to multiple workspaces, is the 500 MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them) - [Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine) - [What data types are included in the 500 MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+- [How can I monitor my daily usage?](#how-can-i-monitor-my-daily-usage)
### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud? Azure Subscriptions may have multiple administrators with permissions to change the pricing settings. To find out which user made a change, use the Azure Activity Log.
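The Activity Log lookup described above can also be scripted. As a minimal sketch (the event shape and sample values below are assumptions for illustration, loosely following the Activity Log's `operationName`/`caller`/`eventTimestamp` fields; the `pricing_changes` helper is hypothetical), you could filter exported events for writes to the `Microsoft.Security/pricings` resource type:

```python
# Illustrative sketch: find who changed Defender plan pricing settings,
# assuming event records shaped like Azure Activity Log output.
def pricing_changes(events):
    """Return (caller, timestamp) pairs for Microsoft.Security/pricings operations."""
    return [
        (e["caller"], e["eventTimestamp"])
        for e in events
        if "Microsoft.Security/pricings" in e.get("operationName", "")
    ]

sample = [
    {"operationName": "Microsoft.Security/pricings/write",
     "caller": "admin@contoso.com",
     "eventTimestamp": "2022-06-29T10:15:00Z"},
    {"operationName": "Microsoft.Compute/virtualMachines/start",
     "caller": "ops@contoso.com",
     "eventTimestamp": "2022-06-29T11:00:00Z"},
]

# Only the pricing write survives the filter.
print(pricing_changes(sample))
```

In practice you'd feed this the JSON returned by an Activity Log export rather than the invented `sample` list.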
Defender for Cloud's billing is closely tied to the billing for Log Analytics. [
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-## How can I monitor my daily usage
+### How can I monitor my daily usage?
You can view your data usage in two ways: in the Azure portal, or by running a script.
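As a rough sketch of the script approach (the daily ingestion totals and node count below are invented for illustration, and `daily_overage_mb` is a hypothetical helper; real numbers would come from your workspace's usage data), you could compare each day's billable ingestion against the combined 500 MB-per-machine free allowance mentioned earlier:

```python
# Illustrative sketch only: compare daily ingested data against the
# 500 MB-per-machine free daily allowance (sample numbers are made up).
FREE_MB_PER_NODE = 500

def daily_overage_mb(ingested_mb, node_count):
    """MB ingested beyond the workspace's combined free daily allowance."""
    return max(0.0, ingested_mb - FREE_MB_PER_NODE * node_count)

# Invented daily totals for a workspace with 3 reporting machines.
daily_ingest_mb = {"2022-06-27": 1200.0, "2022-06-28": 900.0, "2022-06-29": 1600.0}
for day, ingested in sorted(daily_ingest_mb.items()):
    over = daily_overage_mb(ingested, node_count=3)
    print(f"{day}: {ingested:.0f} MB ingested, {over:.0f} MB over the free allowance")
```

With three machines the free allowance is 1,500 MB/day, so only the 1,600 MB day shows an overage in this sample.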
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Title: Microsoft Defender for Servers
-description: Learn all about Microsoft Defender for Servers from the product manager.
+description: Learn all about Microsoft Defender for Servers.
Previously updated : 06/01/2022 Last updated : 06/28/2022 # Microsoft Defender for Servers
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud Previously updated : 06/15/2022 Last updated : 06/29/2022 # Prioritize security actions by data sensitivity
Microsoft Defender for Cloud customers using Microsoft Purview can benefit from
This page explains the integration of Microsoft Purview's data sensitivity classification labels within Defender for Cloud.
-You can learn more from the product manager about Microsoft Defender for Cloud's [integration with Azure Purview](episode-two.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Integration with Azure Purview](episode-two.md)
## Availability |Aspect|Details|
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 06/19/2022 Last updated : 06/29/2022 zone_pivot_groups: connect-aws-accounts
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
:::image type="content" source="./media/quickstart-onboard-aws/aws-account-in-overview.png" alt-text="Four AWS projects listed on Defender for Cloud's overview dashboard" lightbox="./media/quickstart-onboard-aws/aws-account-in-overview.png":::
-You can learn more from the product manager about Microsoft Defender for Cloud's new AWS connector by watching [New AWS connector](episode-one.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [New AWS connector](episode-one.md)
::: zone pivot="env-settings"
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Improving your security posture with recommendations in Microsoft Defender for Cloud description: This document walks you through how to identify security recommendations that will help you improve your security posture. Previously updated : 06/15/2022 Last updated : 06/29/2022 # Find recommendations that can improve your security posture
To get to the list of recommendations:
You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
-You can learn more from the product manager about security posture by watching [Security posture management improvements](episode-four.md).
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Security posture management improvements](episode-four.md)
## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a>
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 06/08/2022 Last updated : 06/29/2022
The **tabs** below show the features that are available, by environment, for Mic
### [**Azure (AKS)**](#tab/azure-aks)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
|--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Compliance | Docker CIS | VM, VMSS | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (workload) | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Runtime protection| Threat detection (workload) | AKS | Preview | - | Defender profile | Defender for Containers | Commercial clouds |
| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ### [**AWS (EKS)**](#tab/aws-eks)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | - | - | - | - | - | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | EKS | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Preview | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - | | Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
The **tabs** below show the features that are available, by environment, for Mic
### [**GCP (GKE)**](#tab/gcp-gke)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | GCP VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | - | - | - | - | - | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | GKE | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ### [**On-prem/IaaS (Arc)**](#tab/iaas-arc)
-| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
-| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
-| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
+| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers | | Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features that are available, by environment, for Mic
| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> | <sup><a name="footnote1"></a>1</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your clusters, you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-azure-digital-twins-explorer.md
# Mandatory fields. Title: Azure Digital Twins Explorer
+ Title: Azure Digital Twins Explorer (preview)
-description: Learn about the capabilities and purpose of Azure Digital Twins Explorer and when it can be a useful tool for visualizing digital models, twins, and graphs.
+description: Learn about the capabilities and purpose of Azure Digital Twins Explorer (preview) and when it can be a useful tool for visualizing digital models, twins, and graphs.
Last updated 02/28/2022
# Azure Digital Twins Explorer (preview)
-This article contains information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+This article contains information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer (preview)](how-to-use-azure-digital-twins-explorer.md).
*Azure Digital Twins Explorer* is a developer tool for visualizing and interacting with the data in your Azure Digital Twins instance, including your [models](concepts-models.md) and [twin graph](concepts-twins-graph.md).
->[!NOTE]
->This tool is currently in public preview.
- Here's a view of the explorer window, showing models and twins that have been populated for a sample graph: :::image type="content" source="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png" alt-text="Screenshot of Azure Digital Twins Explorer showing sample models and twins." lightbox="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png":::
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
This pattern reads from the room twin directly, rather than the IoT device, whic
>[!NOTE]
>There is currently a known issue in Cloud Shell affecting these command groups: `az dt route`, `az dt model`, `az dt twin`.
>
- >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Troubleshoot known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
+ >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Azure Digital Twins known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
```azurecli-interactive
az dt route create --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
# Mandatory fields. Title: Use Azure Digital Twins Explorer
+ Title: Use Azure Digital Twins Explorer (preview)
-description: Learn how to use all the features of Azure Digital Twins Explorer
+description: Learn how to use all the features of Azure Digital Twins Explorer (preview)
Last updated 02/24/2022
# Use Azure Digital Twins Explorer (preview)
-[Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) is a tool for visualizing and working with Azure Digital Twins. This article describes the features of Azure Digital Twins Explorer, and how to use them to manage the data in your Azure Digital Twins instance. You can interact with the Azure Digital Twins Explorer using clicks or [keyboard shortcuts](#accessibility-and-advanced-settings).
-
->[!NOTE]
->This tool is currently in public preview.
+[Azure Digital Twins Explorer (preview)](concepts-azure-digital-twins-explorer.md) is a tool for visualizing and working with Azure Digital Twins. This article describes the features of Azure Digital Twins Explorer, and how to use them to manage the data in your Azure Digital Twins instance. You can interact with the Azure Digital Twins Explorer using clicks or [keyboard shortcuts](#accessibility-and-advanced-settings).
## How to access
digital-twins Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-known-issues.md
Last updated 02/28/2022
-# Troubleshoot Azure Digital Twins known issues
+# Azure Digital Twins known issues
This article provides information about known issues associated with Azure Digital Twins.
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get right-sized Azure recommendation for your on-premises SQL Server database(s) description: Learn how to use the Azure SQL migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. --++
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Title: What is Azure Database Migration Service? description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms. --++
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
Title: Azure Database Migration Service tools matrix description: Learn about the services and tools available to migrate databases and to support various phases of the migration process. --++
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Title: Migrate SSIS packages to SQL Managed Instance
description: Learn how to migrate SQL Server Integration Services (SSIS) packages and projects to an Azure SQL Managed Instance using the Azure Database Migration Service or the Data Migration Assistant. --++
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
Title: Redeploy SSIS packages to SQL single database
description: Learn how to migrate or redeploy SQL Server Integration Services packages and projects to Azure SQL Database single database using the Azure Database Migration Service and Data Migration Assistant. --++
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Title: Monitor migration activity - Azure Database Migration Service description: Learn to use the Azure Database Migration Service to monitor migration activity. --++
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline"
description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. --++
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online"
description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. --++
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Title: "PowerShell: Migrate SQL Server to SQL Database"
description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service. --++
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Title: "Known issues: Online migrations from PostgreSQL to Azure Database for Po
description: Learn about known issues and migration limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL using the Azure Database Migration Service. --++
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Title: Known issues and limitations with online migrations to Azure SQL Managed Instance description: Learn about known issues/migration limitations associated with online migrations to Azure SQL Managed Instance. --++
dms Known Issues Dms Hybrid Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-dms-hybrid-mode.md
Title: Known issues/migration limitations with using Hybrid mode description: Learn about known issues/migration limitations with using Azure Database Migration Service in hybrid mode. --++
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
Title: "Known issues: Migrate from MongoDB to Azure Cosmos DB"
description: Learn about known issues and migration limitations with migrations from MongoDB to Azure Cosmos DB using the Azure Database Migration Service. --++
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Title: "Issues connecting source databases"
description: Learn about how to troubleshoot known issues/errors associated with connecting Azure Database Migration Service to source databases. --++
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Title: "Common issues - Azure Database Migration Service" description: Learn about how to troubleshoot common known issues/errors associated with using Azure Database Migration Service. --++
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Title: Migrate databases at scale using Azure PowerShell / CLI description: Learn how to use Azure PowerShell or CLI to migrate databases at scale using the capabilities of Azure SQL migration extension in Azure Data Studio with Azure Database Migration Service. --++
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate using Azure Data Studio description: Learn how to use the Azure SQL migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service. --++
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Title: Prerequisites for Azure Database Migration Service description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations. --++
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
Title: "Quickstart: Create a hybrid mode instance with Azure portal"
description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. --++
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
Title: "Quickstart: Create an instance using the Azure portal"
description: Use the Azure portal to create an instance of Azure Database Migration Service. --++
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations"
description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations. --++
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
Title: Network topologies for SQL Managed Instance migrations description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service.--++
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
Title: Database migration scenario status description: Learn about the status of the migration scenarios supported by Azure Database Migration Service.--++ Last updated 06/13/2022
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate Azure DB for PostgreSQL to Azure DB for PostgreSQL onl
description: Learn to perform an online migration from one Azure DB for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB API for MongoDB"
description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB online by using Azure Database Migration Service. --++
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB"
description: Migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB offline, by using Azure Database Migration Service. --++
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate PostgreSQL to Azure DB for PostgreSQL online via the A
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
Title: "Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online via
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the CLI. --++
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
Title: "Tutorial: Migrate RDS PostgreSQL online to Azure Database for PostgreSQL
description: Learn to perform an online migration from RDS PostgreSQL to Azure Database for PostgreSQL by using the Azure Database Migration Service. --++
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using
description: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with Azure Database Migration Service (Preview) --++
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online using
description: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with Azure Database Migration Service --++
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"
description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service. --++
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"
description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service. --++
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Title: "Tutorial: Migrate SQL Server to SQL Managed Instance"
description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service. --++
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 06/02/2022 Last updated : 06/29/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is a new service that enables you to query Azure DNS
Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
+You don't need to change any DNS client settings on your virtual machines (VMs) to use the Azure DNS Private Resolver.
+The DNS query process when using an Azure DNS Private Resolver is summarized below:
+
+1. A client in a virtual network issues a DNS query.
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Azure DNS Private Zones provide name resolution within a virtual network and bet
In this scenario, you have a virtual network in Azure that has many resources in it, including virtual machines. Your requirement is to resolve any resources in the virtual network using a specific domain name (DNS zone). You also need the name resolution to be private and not accessible from the internet. Lastly, you need Azure to automatically register VMs into the DNS zone.
-This scenario is shown below. We have a virtual network named "A" containing two VMs (VNETA-VM1 and VNETA-VM2). Each VM has a private IP associated. Once you've create a private zone, for example `contoso.com` and link virtual network "A" as a registration virtual network. Azure DNS will automatically create two A records in the zone referencing the two VMs. DNS queries from VNETA-VM1 can now resolve `VNETA-VM2.contoso.com` and will receive a DNS response that contains the private IP address of VNETA-VM2.
+This scenario is shown below. We have a virtual network named "A" containing two VMs (VNETA-VM1 and VNETA-VM2). Each VM has an associated private IP address. Once you've created a private zone, for example `contoso.com`, and linked virtual network "A" as a registration virtual network, Azure DNS will automatically create two A records in the zone referencing the two VMs. DNS queries from VNETA-VM1 can now resolve `VNETA-VM2.contoso.com` and will receive a DNS response that contains the private IP address of VNETA-VM2.
You can also do a reverse DNS query (PTR) for the private IP of VNETA-VM1 (10.0.0.1) from VNETA-VM2. The DNS response will contain the name VNETA-VM1, as expected.

![Single Virtual network resolution](./media/private-dns-scenarios/single-vnet-resolution.png)
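As an illustration only (not part of the docs commit above), the name that a reverse (PTR) query like the one described here actually asks for can be derived from the VM's private IP with a minimal Python sketch, assuming the standard `in-addr.arpa` convention:

```python
import ipaddress

# A reverse (PTR) lookup queries a name built from the IP address with its
# octets reversed under the in-addr.arpa zone. For VNETA-VM1's private IP
# 10.0.0.1, the query name is:
ptr_name = ipaddress.ip_address("10.0.0.1").reverse_pointer
print(ptr_name)  # 1.0.0.10.in-addr.arpa
```

Azure DNS answers this query with the registered VM name (here, VNETA-VM1).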
event-grid Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security.md
By default, topic and domain are accessible from the internet as long as the req
For step-by-step instructions to configure IP firewall for topics and domains, see [Configure IP firewall](configure-firewall.md).

## Private endpoints

You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to your topics and domains securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. A private endpoint is a special network interface for an Azure service in your VNet. When you create a private endpoint for your topic or domain, it provides secure connectivity between clients on your VNet and your Event Grid resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Event Grid service uses a secure private link.
The following table describes the various states of the private endpoint connect
For publishing to be successful, the private endpoint connection state should be **approved**. If a connection is rejected, it can't be approved using the Azure portal. The only possibility is to delete the connection and create a new one instead.
-## Pricing and quotas
-**Private endpoints** is available in both basic and premium tiers of Event Grid. Event Grid allows up to 64 private endpoint connections to be created per topic or domain.
-**IP Firewall** feature is available in both basic and premium tiers of Event Grid. We allow up to 16 IP Firewall rules to be created per topic or domain.
+## Quotas and limits
+There's a limit on the number of IP firewall rules and private endpoint connections per topic or domain. See [Event Grid quotas and limits](quotas-limits.md).
## Next steps

You can configure IP firewall for your Event Grid resource to restrict access over the public internet to only a select set of IP addresses or IP address ranges. For step-by-step instructions, see [Configure IP firewall](configure-firewall.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported | GlobalConnect, Megaport, Telenor, Telia Carrier |
| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo |
+| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |
| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Megaport, NextDC |
| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Tata Communications |
The following table shows connectivity locations and the service providers for e
| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC |
| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | n/a | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> **We are currently unable to support new ExpressRoute circuits in Tokyo. Please create new circuits in Tokyo2 or Osaka.* |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | |
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 |
| **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin |
| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai |
| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
+
+ Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Bicep'
+description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Bicep.
+++ Last updated : 06/28/2022+++++
+# Quickstart: Secure your virtual hub using Azure Firewall Manager - Bicep
+
+In this quickstart, you use Bicep to secure your virtual hub using Azure Firewall Manager. The deployed firewall has an application rule that allows connections to `www.microsoft.com`. Two Windows Server 2019 virtual machines are deployed to test the firewall. One jump server is used to connect to the workload server. From the workload server, you can only connect to `www.microsoft.com`.
++
+For more information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a secured virtual hub using Azure Firewall Manager, along with the necessary resources to support the scenario.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fwm-docs-qs/).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/virtualWans**](/azure/templates/microsoft.network/virtualWans)
+- [**Microsoft.Network/virtualHubs**](/azure/templates/microsoft.network/virtualHubs)
+- [**Microsoft.Network/firewallPolicies**](/azure/templates/microsoft.network/firewallPolicies)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-user\>** with the administrator login username for the servers. You'll be prompted to enter **adminPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+Now, test the firewall rules to confirm that they work as expected.
+
+1. From the Azure portal, review the network settings for the **Workload-Srv** virtual machine and note the private IP address.
+2. Connect via remote desktop to the **Jump-Srv** virtual machine, and sign in. From there, open a remote desktop connection to the **Workload-Srv** private IP address.
+3. Open Internet Explorer and browse to `www.microsoft.com`.
+4. Select **OK** > **Close** on the Internet Explorer security alerts.
+
+ You should see the Microsoft home page.
+
+5. Browse to `www.google.com`.
+
+ You should be blocked by the firewall.
+
+Now that you've verified that the firewall rules are working, you know that you can browse to the one allowed FQDN, but not to any others.
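As an optional aside (not part of the tutorial steps), the same check could be scripted from **Workload-Srv** instead of using a browser. `check_fqdn` below is a hypothetical helper that treats a completed HTTP exchange as allowed and a dropped or timed-out connection as blocked:

```shell
# Hypothetical helper: probe an FQDN through the firewall and report the result.
# A completed HTTP exchange means the firewall allowed the FQDN; curl failing
# (for example, timing out) suggests the firewall blocked it.
check_fqdn() {
  if curl --silent --max-time 10 --output /dev/null "https://$1"; then
    echo "$1: allowed"
  else
    echo "$1: blocked"
  fi
}

# Example usage from Workload-Srv:
# check_fqdn www.microsoft.com   # permitted by the application rule
# check_fqdn www.google.com      # no matching rule, so the firewall blocks it
```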
+
+## Clean up resources
+
+When you no longer need the resources that you created with the firewall, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. This removes the firewall and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about security partner providers](trusted-security-partners.md)
frontdoor Rule Set Server Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rule-set-server-variables.md
When you use [Rule set actions](front-door-rules-engine-actions.md), you can use
| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`.<br/> To access this server variable in a match condition, use [Request URL](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-url).|
| `ssl_protocol` | The protocol of an established TLS connection.<br/> To access this server variable in a match condition, use [SSL protocol](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#ssl-protocol).|
| `server_port` | The port of the server that accepted a request.<br/> To access this server variable in a match condition, use [Server port](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#server-port).|
-| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `uri_path` value will be `/article.aspx`.<br/> To access this server variable in a match condition, use [Request path](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-path).|
+| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `url_path` value will be `/article.aspx`.<br/> To access this server variable in a match condition, use [Request path](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-path).|
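To make the relationship between these variables concrete, the example request from the table can be decomposed with plain shell string operations. This is purely illustrative (it is not how Front Door parses requests), using only the sample URI from the table:

```shell
# Decompose the table's example request into the server-variable values.
request='http://contoso.com:8080/article.aspx?id=123&title=fabrikam'

no_scheme=${request#*://}        # contoso.com:8080/article.aspx?id=123&title=fabrikam
request_uri=/${no_scheme#*/}     # full URI with arguments: /article.aspx?id=123&title=fabrikam
url_path=${request_uri%%\?*}     # resource path without arguments: /article.aspx
hostport=${no_scheme%%/*}        # contoso.com:8080
server_port=${hostport##*:}      # 8080

echo "request_uri=$request_uri url_path=$url_path server_port=$server_port"
```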
## Server variable format
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md
Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI" description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook. Previously updated : 08/17/2021 Last updated : 06/29/2022 ++ # Tutorial: Route policy state change events to Event Grid with Azure CLI
uses the `Microsoft.PolicyInsights.PolicyStates` topic type for Azure Policy sta
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell
-az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<SubscriptionID>" --resource-group "<resource_group_name>"
+az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<subscriptionID>" --resource-group "<resource_group_name>"
+```
+
+If your Event Grid system topic will be applied at the management group scope, the Azure CLI `--source` parameter syntax is slightly different. Here's an example:
+
+```azurecli-interactive
+az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/tenants/<tenantID>/providers/Microsoft.Management/managementGroups/<management_group_name>" --resource-group "<resource_group_name>"
``` ## Create a message endpoint
groups** definition. This policy definition identifies resource groups that are
configured during policy assignment. Run the following command to create a policy assignment scoped to the resource group you created to
-hold the event grid topic:
+hold the Event Grid topic:
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell
-az policy assignment create --name 'requiredtags-events' --display-name 'Require tag on RG' --scope '<ResourceGroupScope>' --policy '<policy definition ID>' --params '{ "tagName": { "value": "EventTest" } }'
+az policy assignment create --name 'requiredtags-events' --display-name 'Require tag on RG' --scope '<ResourceGroupScope>' --policy '<policy definition ID>' --params '{ \"tagName\": { \"value\": \"EventTest\" } }'
``` The preceding command uses the following information:
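As an aside, if the inline escaped JSON for `--params` becomes awkward, the same values can be kept in a parameter file instead (a hypothetical `assignment-params.json`; the Azure CLI documents `--params` as accepting either a JSON string or a path to a parameter file):

```json
{
  "tagName": {
    "value": "EventTest"
  }
}
```

You would then pass `--params assignment-params.json` in place of the inline JSON string.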
hdinsight Apache Hadoop Mahout Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md
description: Learn how to use the Apache Mahout machine learning library to gene
Previously updated : 05/14/2020 Last updated : 06/29/2022 # Generate recommendations using Apache Mahout in Azure HDInsight
hdinsight Apache Hadoop On Premises Migration Best Practices Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-infrastructure.md
description: Learn infrastructure best practices for migrating on-premises Hadoo
Previously updated : 12/06/2019 Last updated : 06/29/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - infrastructure best practices
For more information, see the article [Connect HDInsight to your on-premises net
## Next steps
-Read the next article in this series: [Storage best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-storage.md).
+Read the next article in this series: [Storage best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-storage.md).
hdinsight Troubleshoot Invalidnetworkconfigurationerrorcode Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
Title: InvalidNetworkConfigurationErrorCode error - Azure HDInsight
description: Various reasons for failed cluster creations with InvalidNetworkConfigurationErrorCode in Azure HDInsight Previously updated : 01/12/2021 Last updated : 06/29/2022 # Cluster creation fails with InvalidNetworkConfigurationErrorCode in Azure HDInsight
hdinsight Hdinsight Hadoop Stack Trace Error Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-stack-trace-error-messages.md
description: Index of Hadoop stack trace error messages in Azure HDInsight. Find
Previously updated : 01/03/2020 Last updated : 06/29/2022 # Index of Apache Hadoop in HDInsight troubleshooting articles
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations
description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations. Previously updated : 04/20/2020 Last updated : 06/29/2022 # Migrate to granular role-based access for cluster configurations
hdinsight Gateway Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/gateway-best-practices.md
Title: Gateway deep dive and best practices for Apache Hive in Azure HDInsight
description: Learn how to navigate the best practices for running Hive queries over the Azure HDInsight gateway Previously updated : 04/01/2020 Last updated : 06/29/2022 # Gateway deep dive and best practices for Apache Hive in Azure HDInsight
expect delays when retrieving the same results via external tools.
* [Apache Beeline on HDInsight](../hadoop/apache-hadoop-use-hive-beeline.md) * [HDInsight Gateway Timeout Troubleshooting Steps](./troubleshoot-gateway-timeout.md) * [Virtual Networks for HDInsight](../hdinsight-plan-virtual-network-deployment.md)
-* [HDInsight with Express Route](../connect-on-premises-network.md)
+* [HDInsight with Express Route](../connect-on-premises-network.md)
hdinsight Troubleshoot Gateway Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-gateway-timeout.md
Title: Exception when running queries from Apache Ambari Hive View in Azure HDIn
description: Troubleshooting steps when running Apache Hive queries through Apache Ambari Hive View in Azure HDInsight. Previously updated : 12/23/2019 Last updated : 06/29/2022 # Exception when running queries from Apache Ambari Hive View in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Apache Spark Troubleshoot Illegalargumentexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-illegalargumentexception.md
Title: IllegalArgumentException error for Apache Spark - Azure HDInsight
description: IllegalArgumentException for Apache Spark activity in Azure HDInsight for Azure Data Factory Previously updated : 07/29/2019 Last updated : 06/29/2022 # Scenario: IllegalArgumentException for Apache Spark activity in Azure HDInsight
Make sure the application jar is stored on the default/primary storage for the H
## Next steps
hdinsight Apache Storm Example Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-example-topology.md
description: A list of example Storm topologies created and tested with Apache S
Previously updated : 12/27/2019 Last updated : 06/29/2022 # Example Apache Storm topologies and components for Apache Storm on HDInsight
healthcare-apis Get Started With Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-started-with-health-data-services.md
After the Azure Health Data Services resource group is deployed, you can enter t
To be guided through these steps, see [Deploy Azure Health Data Services workspace using Azure portal](healthcare-apis-quickstart.md).
-> [!Note]
+> [!NOTE]
> You can provision multiple data services within a workspace, and by design, they work seamlessly with one another. With the workspace, you can organize all your Azure Health Data Services instances and manage certain configuration settings that are shared among all the underlying datasets and services where it's applicable. [![Screenshot of the Azure Health Data Services workspace.](media/health-data-services-workspace.png)](media/health-data-services-workspace.png#lightbox)
For more information, see [Get started with the DICOM service](./../healthcare-a
MedTech service transforms device data into FHIR-based observation resources and then persists the transformed messages into Azure Health Data Services FHIR service. This allows for a unified approach to health data access, standardization, and trend capture enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
-To ensure that your MedTech service works properly, it must have granted access permissions to the Azure Event Hub and FHIR service. The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this Event Hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](./../healthcare-apis/authentication-authorization.md)
+To ensure that your MedTech service works properly, it must be granted access permissions to the Azure Event Hubs and FHIR service. The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](./../healthcare-apis/authentication-authorization.md).
You can also do the following:
- Create a new FHIR service or use an existing one in the same or different workspace
-- Create a new Event Hub or use an existing one
-- Assign roles to allow the MedTech service to access [Event Hub](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access) and [FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service)
-- Send data to the Event Hub, which is associated with the MedTech service
+- Create a new event hub or use an existing one
+- Assign roles to allow the MedTech service to access [Event Hubs](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access) and [FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service)
+- Send data to the event hub, which is associated with the MedTech service
For more information, see [Get started with the MedTech service](./../healthcare-apis/iot/get-started-with-iot.md).
This article described the basic steps to get started using Azure Health Data Se
>[Frequently asked questions about Azure Health Data Services](healthcare-apis-faqs.md) FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.-
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: MedTech service in the Azure portal - Azure Health Data Services
-description: In this article, you'll learn how to deploy MedTech service in the Azure portal.
+ Title: Deploy the MedTech service in the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service in the Azure portal.
Previously updated : 04/07/2022 Last updated : 06/29/2022
-# Deploy MedTech service in the Azure portal
+# Deploy the MedTech service in the Azure portal
In this quickstart, you'll learn how to deploy MedTech service in the Azure portal. The MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service.
It's important that you have the following prerequisites completed before you be
>* Two MedTech services accessing the same device message event hub. >* A MedTech service and a storage writer application accessing the same device message event hub.
-## Deploy MedTech service
+If you already have an active Azure account, you can use this [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json) button to deploy a MedTech service that will include the following resources and permissions:
+
+ * An Azure Event Hubs Namespace and device message event hub (the event hub is named: **devicedata**).
+ * An Azure event hub sender role (the sender role is named: **devicedatasender**).
+ * An Azure Health Data Services workspace.
+ * An Azure Health Data Services FHIR service.
+ * An Azure Health Data Services MedTech service including the necessary system managed identity permissions to the device message event hub and FHIR service.
+
+When the Azure portal launches, the following fields must be filled out:
+ * **Subscription** - Choose the Azure subscription you would like to use for the deployment.
+ * **Resource Group** - Choose an existing Resource Group or create a new Resource Group.
+ * **Region** - The Azure region of the Resource Group used for the deployment. This field will auto-fill based on the Resource Group region.
+ * **Basename** - Used as the base for the names of the Azure services to be deployed.
+ * **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (could be the same or different region than your Resource Group).
+
+Leave the **Device Mapping** and **Destination Mapping** fields with their default values.
+
+Select the **Review + create** button once the fields are filled out.
++
+After the validation has passed, select the **Create** button to begin the deployment.
++
+After a successful deployment, you'll need to complete a few remaining configuration steps to get a fully functional MedTech service:
+ * Provide a working device mapping file. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
+ * Provide a working destination mapping file. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
+ * Use the Shared access policies (SAS) key (**devicedatasender**) for connecting your device or application to the MedTech service device message event hub (**devicedata**). For more information, see [Connection string for a specific event hub in a namespace](../../event-hubs/event-hubs-get-connection-string.md#connection-string-for-a-specific-event-hub-in-a-namespace).
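As a sketch of that last step, the connection string for the **devicedatasender** policy can be retrieved with the Azure CLI. The event hub and policy names follow the template defaults above; the resource group and namespace arguments, and the wrapper function itself, are placeholders for illustration:

```shell
# Hedged sketch: fetch the primary connection string for the devicedatasender
# SAS policy on the devicedata event hub. Resource group and namespace names
# are placeholders supplied by the caller.
get_device_connection_string() {
  local resource_group=$1 namespace=$2
  az eventhubs eventhub authorization-rule keys list \
    --resource-group "$resource_group" \
    --namespace-name "$namespace" \
    --eventhub-name "devicedata" \
    --name "devicedatasender" \
    --query primaryConnectionString --output tsv
}

# Example usage:
# get_device_connection_string exampleRG exampleNamespace
```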
+
+## Deploy the MedTech service
1. Sign in the [Azure portal](https://portal.azure.com), and then enter your Health Data Services workspace resource name in the **Search** bar field.
It's important that you have the following prerequisites completed before you be
![Screenshot of add MedTech services.](media/add-iot-connector.png#lightbox)
-## Configure MedTech service to ingest data
+## Configure the MedTech service to ingest data
Under the **Basics** tab, complete the required fields under **Instance details**.
Under the **Basics** tab, complete the required fields under **Instance details*
5. Select **Next: Device mapping**.
-## Configure Device mapping properties
+## Configure the Device mapping properties
> [!TIP] > The IoMT Connector Data Mapper is an open source tool to visualize the mapping configuration for normalizing a device's input data, and then transform it to FHIR resources. Developers can use this tool to edit and test Devices and FHIR destination mappings, and to export the data to upload to an MedTech service in the Azure portal. This tool also helps developers understand their device's Device and FHIR destination mapping configurations.
Under the **Basics** tab, complete the required fields under **Instance details*
2. Select **Next: Destination >** to configure the destination properties associated with your MedTech service.
-## Configure FHIR destination mapping properties
+## Configure the FHIR destination mapping properties
Under the **Destination** tab, enter the destination properties associated with the MedTech service.
Under the **Tags** tab, enter the tag properties associated with the MedTech ser
Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the event hub and FHIR service.
-## Granting MedTech service access
+## Granting the MedTech service access
To ensure that your MedTech service works properly, it must be granted access permissions to the event hub and FHIR service.
For more information about authorizing access to Event Hubs resources, see [Author
![Screenshot of FHIR service added role assignment message.](media/fhir-service-added-role-assignment.png#lightbox)
- For more information about assigning roles to the FHIR service, see [Configure Azure RBAC](.././configure-azure-rbac.md).
+ For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md).
## Next steps
-In this article, you've learned how to deploy a MedTech service in the Azure portal. For an overview of MedTech service, see
+In this article, you've learned how to deploy a MedTech service in the Azure portal. To learn more about the device and FHIR destination mapping files for the MedTech service, see
>[!div class="nextstepaction"]
->[MedTech service overview](iot-connector-overview.md)
+>[How to use Device mappings](how-to-use-device-mappings.md)
+>
+>[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
You can create a workspace from the [Azure portal](../healthcare-apis-quickstart
> [!NOTE] > There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription.
-## Create the FHIR service and an Event Hub
+## Create the FHIR service and an event hub
-The MedTech service works with the Azure Event Hub and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [Event Hub](../../event-hubs/event-hubs-create.md) or use an existing one.
+The MedTech service works with Azure Event Hubs and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [event hub](../../event-hubs/event-hubs-create.md) or use an existing one.
## Create a MedTech service in the workspace
You can create a MedTech service from the [Azure portal](deploy-iot-connector-in
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [DICOM service](../dicom/deploy-dicom-services-in-azure.md) in the workspace.
-## Assign roles to allow MedTech service to access Event Hub
+## Assign roles to allow MedTech service to access Event Hubs
-By design, the MedTech service retrieves data from the specified Event Hub using the system-managed identity. For more information on how to assign the role to the MedTech service from [Event Hub](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access).
+By design, the MedTech service retrieves data from the specified event hub using the system-managed identity. For more information on how to assign this role to the MedTech service, see [Granting the MedTech service access](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access).
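A hedged Azure CLI sketch of that role assignment follows. The resource IDs are placeholders, and the helper function name is invented for illustration; the role name "Azure Event Hubs Data Receiver" and the `az resource show` / `az role assignment create` commands are real:

```shell
# Hedged sketch: grant the MedTech service's system-assigned managed identity
# the "Azure Event Hubs Data Receiver" role on the device message event hub.
grant_eventhubs_receiver() {
  local medtech_resource_id=$1 eventhub_scope=$2
  # Look up the principal ID of the service's system-assigned identity.
  local principal_id
  principal_id=$(az resource show --ids "$medtech_resource_id" \
    --query identity.principalId --output tsv)
  # Assign the built-in receiver role, scoped to the event hub.
  az role assignment create \
    --assignee "$principal_id" \
    --role "Azure Event Hubs Data Receiver" \
    --scope "$eventhub_scope"
}

# Example invocation (all IDs are placeholders):
# grant_eventhubs_receiver \
#   "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<ws>/iotconnectors/<medtech>" \
#   "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<ns>/eventhubs/devicedata"
```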
## Assign roles to allow MedTech service to access FHIR service
The MedTech service persists the data to the FHIR store using the system-managed
## Sending data to the MedTech service
-You can send data to the Event Hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
+You can send data to the event hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
## MedTech service mappings, data flow, ML, Power BI, and Teams notifications
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
ssh raspberrypi3 -l root ```
- 1. Create or open the `du-config.jso` file for editing by using:
+ 1. Create or open the `du-config.json` file for editing by using:
```bash nano /adu/du-config.json
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/) - Review the [Key Vault security overview](../general/security-features.md)
-.md)
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.3904) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
+ * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
There are multiple ways to create a workspace:
## <a name="sub-resources"></a> Sub resources
-These sub resources are the main resources that are made in the AML workspace.
+These sub resources are the main resources that are created in the AzureML workspace.
-* VMs: provide computing power for your AML workspace and are an integral part in deploying and training models.
+* VMs: provide computing power for your AzureML workspace and are an integral part in deploying and training models.
* Load Balancer: a network load balancer is created for each compute instance and compute cluster to manage traffic even while the compute instance/cluster is stopped. * Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks. * Bandwidth: encapsulates all outbound data transfers across regions.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
If you're an owner of a workspace, you can add and remove roles for the workspac
- [REST API](../role-based-access-control/role-assignments-rest.md) - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
+## Use Azure AD security groups to manage workspace access
+
+You can use Azure AD security groups to manage user access to the workspace. This approach has the following benefits:
+ * Team or project leaders can manage user access to the workspace as security group owners, without needing the Owner role on the workspace resource directly.
+ * You can organize, manage, and revoke users' permissions on the workspace and other resources as a group, without having to manage permissions on a user-by-user basis.
+ * Using Azure AD groups helps you to avoid reaching the [subscription limit](https://docs.microsoft.com/azure/role-based-access-control/troubleshooting#azure-role-assignments-limit) on role assignments.
+
+To use Azure AD security groups:
+ 1. [Create a security group](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal).
+ 2. [Add a group owner](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners). This user has permissions to add or remove group members. Note that the group owner isn't required to be a group member, or to have a direct RBAC role on the workspace.
+ 3. Assign the group an RBAC role on the workspace, such as AzureML Data Scientist, Reader or Contributor.
+ 4. [Add group members](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-members-azure-portal). The members consequently gain access to the workspace.
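The four steps above can also be scripted with the Azure CLI. This is a hedged sketch: the group name `ml-team` and the wrapper function are invented for illustration, the object IDs are placeholders, and the group's object ID field is assumed to be `id` (as in recent Azure CLI versions):

```shell
# Hedged sketch of the four steps: create a security group, add an owner,
# assign the group a workspace role, then add members.
setup_workspace_group() {
  local workspace_scope=$1 owner_object_id=$2 member_object_id=$3
  # 1. Create the security group.
  az ad group create --display-name "ml-team" --mail-nickname "ml-team"
  # 2. Add a group owner, who can then manage membership.
  az ad group owner add --group "ml-team" --owner-object-id "$owner_object_id"
  # 3. Give the group an RBAC role on the workspace.
  az role assignment create \
    --assignee-object-id "$(az ad group show --group ml-team --query id -o tsv)" \
    --assignee-principal-type Group \
    --role "AzureML Data Scientist" \
    --scope "$workspace_scope"
  # 4. Add members; they gain workspace access through the group's role.
  az ad group member add --group "ml-team" --member-id "$member_object_id"
}
```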
++ ## Create custom role If the built-in roles are insufficient, you can create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
try:
ml_client = MLClient.from_config(credential) except Exception as ex: print(ex)
- # Enter details of your AML workspace
+ # Enter details of your AzureML workspace
subscription_id = "<SUBSCRIPTION_ID>" resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
ml_client = MLClient(credential, subscription_id, resource_group, workspace) ```
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Check the Azure CLI extensions you've installed:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_list":::
-Ensure no conflicting extension using the `ml` namespace is installed, including the `azure-cli-ml` extension:
+Remove any existing installation of the `ml` extension and also the CLI v1 `azure-cli-ml` extension:
:::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_remove":::
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
To use automated ML, skip to [Add the Azure ML SDK with AutoML](#add-the-azure-m
![Azure Machine Learning SDK for Databricks](./media/how-to-configure-environment/amlsdk-withoutautoml.jpg)

## Add the Azure ML SDK with AutoML to Databricks
-If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AML SDK.
+If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AzureML SDK.
```
%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
## Limitations -- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AML workspace.
+- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AzureML workspace.
- If you have an Azure Policy that restricts the creation of Public IP addresses, then AKS cluster creation will fail. AKS requires a Public IP for [egress traffic](../aks/limit-egress-traffic.md). The egress traffic article also provides guidance to lock down egress traffic from the cluster through the Public IP, except for a few fully qualified domain names. There are two ways to enable a Public IP:
    - The cluster can use the Public IP created by default with the BLB or SLB, or
    - The cluster can be created without a Public IP and then a Public IP is configured with a firewall with a user defined route. For more information, see [Customize cluster egress with a user-defined-route](../aks/egress-outboundtype.md).
- The AML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
+ The AzureML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
- To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster. -- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AML control plane IP ranges for the AKS cluster. The AML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
+- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
Authorized IP ranges only work with the Standard Load Balancer.
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
The `train.py` file contains a normal python function, which performs the traini
#### Define component using python function
-After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AML pipelines.
+After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AzureML pipelines.
:::code language="python" source="~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train_component.py":::
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
Last updated 05/24/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
> * [v1](./v1/how-to-create-register-datasets.md)
-> * [v2 (current version)](how-to-create-register-datasets.md)
+> * [v2 (current version)](how-to-create-register-data-assets.md)
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [CLI v2](../../includes/machine-learning-CLI-v2.md)]
ml_client.data.create_or_update(my_data)
## Next steps -- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-ingest-adf.md
The following Python code demonstrates how to create a datastore that connects t
```python
ws = Workspace.from_config()
-adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AML
+adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AzureML
subscription_id=os.getenv("ADL_SUBSCRIPTION", "<ADLS account subscription ID>") # subscription id of ADLS account
resource_group=os.getenv("ADL_RESOURCE_GROUP", "<ADLS account resource group>") # resource group of ADLS account
from azureml.core import Workspace, Datastore, Dataset
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
-# retrieve data via AML datastore
+# retrieve data via AzureML datastore
datastore = Datastore.get(ws, adlsgen2_datastore)
datastore_path = [(datastore, '/data/prepared-data.csv')]
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
To enable debugging, make the following changes to the Python script(s) used by
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AML compute target '
+ help=f'Defines how much time the AzureML compute target '
                    f'will await a connection from a debugger client (VSCODE).')
parser.add_argument('--remote_debug_client_ip', type=str,
                    help=f'Defines IP Address of VS Code client')
parser.add_argument("--output_train", type=str, help="output_train directory")
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AML compute target '
+ help=f'Defines how much time the AzureML compute target '
                    f'will await a connection from a debugger client (VSCODE).')
parser.add_argument('--remote_debug_client_ip', type=str,
                    help=f'Defines IP Address of VS Code client')
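Pieced together, the remote-debug arguments above form a small self-contained parser; the list passed to `parse_args` below simulates command-line input and the IP address is purely illustrative:

```python
import argparse

# Self-contained sketch of the remote-debug arguments shown above.
parser = argparse.ArgumentParser()
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
                    help='How long the compute target waits for a debugger client (VS Code).')
parser.add_argument('--remote_debug_client_ip', type=str,
                    help='IP address of the VS Code client.')

# Simulated command line; on the compute target these come from the run config.
args = parser.parse_args(['--remote_debug', '--remote_debug_client_ip', '10.0.0.4'])
print(args.remote_debug, args.remote_debug_connection_timeout, args.remote_debug_client_ip)
# → True 300 10.0.0.4
```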
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
- # enter details of your AML workspace
+ # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
```

```python
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
from azureml.core.webservice import AksWebservice, Webservice
# If deploying to a cluster configured for dev/test, ensure that it was created with enough
# cores and memory to handle this deployment configuration. Note that memory is also used by
-# things such as dependencies and AML components.
+# things such as dependencies and AzureML components.
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
                                                autoscale_min_replicas=1,
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
run = experiment.submit(config=src)
Once you have a trained model, you can save/serialize it to a `.pkl` file with `pickle.dump()` and `pickle.load()`. You can also use `joblib.dump()` and `joblib.load()`.
-The following example is how you download and load a model in-memory that was trained in AML compute with `ScriptRunConfig`. This code can run in the same notebook you used the Azure ML SDK `ScriptRunConfig`.
+The following example shows how to download and load into memory a model that was trained on AzureML compute with `ScriptRunConfig`. This code can run in the same notebook where you used the Azure ML SDK `ScriptRunConfig`.
```python import joblib
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
- # enter details of your AML workspace
+ # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
```

```python
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-web-service.md
When you request a certificate, you must provide the FQDN of the address that yo
## <a id="enable"></a> Enable TLS and deploy
-**For AKS deployment**, you can enable TLS termination when you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in AML workspace. At AKS model deployment time, you can disable TLS termination with deployment configuration object, otherwise all AKS model deployment by default will have TLS termination enabled at AKS cluster create or attach time.
+**For AKS deployment**, you can enable TLS termination when you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in the AzureML workspace. At AKS model deployment time, you can disable TLS termination with the deployment configuration object; otherwise, all AKS model deployments will have TLS termination enabled by default at AKS cluster create or attach time.
For ACI deployment, you can enable TLS termination at model deployment time with deployment configuration object.
For ACI deployment, you can enable TLS termination at model deployment time with
> [!NOTE] > The information in this section also applies when you deploy a secure web service for the designer. If you aren't familiar with using the Python SDK, see [What is the Azure Machine Learning SDK for Python?](/python/api/overview/azure/ml/intro).
-When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in AML workspace, you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, and you can use **enable_ssl** method to enable TLS.
+When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md) in AzureML workspace, you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, and you can use **enable_ssl** method to enable TLS.
You can enable TLS either with a Microsoft certificate or a custom certificate purchased from a CA.
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Title: Track, monitor, and analyze runs in studio
+ Title: Track, monitor, and analyze jobs in studio
-description: Learn how to start, monitor, and track your machine learning experiment runs with the Azure Machine Learning studio.
+description: Learn how to start, monitor, and track your machine learning experiment jobs with the Azure Machine Learning studio.
Previously updated : 04/28/2022 Last updated : 06/24/2022
-# Start, monitor, and track run history in studio
+# Start, monitor, and track job history in studio
-You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your runs for training and experimentation. Your ML run history is an important part of an explainable and repeatable ML development process.
+You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your jobs for training and experimentation. Your ML job history is an important part of an explainable and repeatable ML development process.
This article shows how to do the following tasks:
-* Add run display name.
+* Add job display name.
* Create a custom view.
-* Add a run description.
-* Tag and find runs.
-* Run search over your run history.
-* Cancel or fail runs.
-* Monitor the run status by email notification.
+* Add a job description.
+* Tag and find jobs.
+* Run search over your job history.
+* Cancel or fail jobs.
+* Monitor the job status by email notification.
> [!TIP]
-> * If you're looking for information on using the Azure Machine Learning SDK v1 or CLI v1, see [How to track, monitor, and analyze runs (v1)](./v1/how-to-track-monitor-analyze-runs.md).
-> * If you're looking for information on monitoring training runs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
+> * If you're looking for information on using the Azure Machine Learning SDK v1 or CLI v1, see [How to track, monitor, and analyze jobs (v1)](./v1/how-to-track-monitor-analyze-runs.md).
+> * If you're looking for information on monitoring training jobs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
> * If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md). > > If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
You'll need the following items:
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-## Run Display Name
+## Job display name
-The run display name is an optional and customizable name that you can provide for your run. To edit the run display name:
+The job display name is an optional and customizable name that you can provide for your job. To edit the job display name:
-1. Navigate to the runs list.
+1. Navigate to the **Jobs** list.
-2. Select the run to edit the display name in the run details page.
+1. Select the job to edit.
-3. Select the **Edit** button to edit the run display name.
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/select-job.png" alt-text="Screenshot of Jobs list.":::
+1. Select the **Edit** button to edit the job display name.
+
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/display-name.gif" alt-text="Screenshot of how to edit the display name.":::
## Custom View
-To view your runs in the studio:
+To view your jobs in the studio:
-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
-1. Select either **All experiments** to view all the runs in an experiment or select **All runs** to view all the runs submitted in the Workspace.
+1. Select either **All experiments** to view all the jobs in an experiment or select **All jobs** to view all the jobs submitted in the Workspace.
-In the **All runs'** page, you can filter the runs list by tags, experiments, compute target and more to better organize and scope your work.
+On the **All jobs** page, you can filter the jobs list by tags, experiments, compute target and more to better organize and scope your work.
-1. Make customizations to the page by selecting runs to compare, adding charts or applying filters. These changes can be saved as a **Custom View** so you can easily return to your work. Users with workspace permissions can edit, or view the custom view. Also, share the custom view with team members for enhanced collaboration by selecting **Share view**.
+1. Make customizations to the page by selecting jobs to compare, adding charts or applying filters. These changes can be saved as a **Custom View** so you can easily return to your work. Users with workspace permissions can edit or view the custom view. You can also share the custom view with team members for enhanced collaboration by selecting **Share view**.
-1. To view the run logs, select a specific run and in the **Outputs + logs** tab, you can find diagnostic and error logs for your run.
+1. To view the job logs, select a specific job and in the **Outputs + logs** tab, you can find diagnostic and error logs for your job.
-
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/custom-views-2.gif" alt-text="Screenshot of how to create a custom view.":::
-## Run description
+## Job description
-A run description can be added to a run to provide more context and information to the run. You can also search on these descriptions from the runs list and add the run description as a column in the runs list.
+You can add a description to a job to provide more context and information. You can also search on these descriptions from the jobs list and add the job description as a column in the jobs list.
-Navigate to the **Run Details** page for your run and select the edit or pencil icon to add, edit, or delete descriptions for your run. To persist the changes to the runs list, save the changes to your existing Custom View or a new Custom View. Markdown format is supported for run descriptions, which allows images to be embedded and deep linking as shown below.
+Navigate to the **Job Details** page for your job and select the edit or pencil icon to add, edit, or delete descriptions for your job. To persist the changes to the jobs list, save the changes to your existing Custom View or a new Custom View. Markdown format is supported for job descriptions, which allows images to be embedded and deep linking as shown below.
-## Tag and find runs
+## Tag and find jobs
-In Azure Machine Learning, you can use properties and tags to help organize and query your runs for important information.
+In Azure Machine Learning, you can use properties and tags to help organize and query your jobs for important information.
* Edit tags
- You can add, edit, or delete run tags from the studio. Navigate to the **Run Details** page for your run and select the edit, or pencil icon to add, edit, or delete tags for your runs. You can also search and filter on these tags from the runs list page.
+ You can add, edit, or delete job tags from the studio. Navigate to the **Job Details** page for your job and select the edit, or pencil icon to add, edit, or delete tags for your jobs. You can also search and filter on these tags from the jobs list page.
- :::image type="content" source="media/how-to-track-monitor-analyze-runs/run-tags.gif" alt-text="Screenshot: Add, edit, or delete run tags":::
+ :::image type="content" source="media/how-to-track-monitor-analyze-runs/run-tags.gif" alt-text="Screenshot of how to add, edit, or delete job tags.":::
* Query properties and tags
- You can query runs within an experiment to return a list of runs that match specific properties and tags.
+ You can query jobs within an experiment to return a list of jobs that match specific properties and tags.
- To search for specific runs, navigate to the **All runs** list. From there you have two options:
+ To search for specific jobs, navigate to the **All jobs** list. From there you have two options:
- 1. Use the **Add filter** button and select filter on tags to filter your runs by tag that was assigned to the run(s). <br><br>
+ 1. Use the **Add filter** button and select filter on tags to filter your jobs by tag that was assigned to the job(s). <br><br>
OR
- 1. Use the search bar to quickly find runs by searching on the run metadata like the run status, descriptions, experiment names, and submitter name.
+ 1. Use the search bar to quickly find jobs by searching on the job metadata like the job status, descriptions, experiment names, and submitter name.
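A hypothetical local sketch of the tag-based filtering described above, with a made-up list of job records standing in for what the studio would return:

```python
# Made-up job records; in the studio these would come from the jobs list.
jobs = [
    {"name": "job-1", "tags": {"team": "fraud", "stage": "dev"}},
    {"name": "job-2", "tags": {"team": "fraud", "stage": "prod"}},
    {"name": "job-3", "tags": {"stage": "dev"}},
]

def filter_by_tag(jobs, key, value):
    """Return names of jobs whose tag `key` equals `value`."""
    return [j["name"] for j in jobs if j.get("tags", {}).get(key) == value]

print(filter_by_tag(jobs, "stage", "dev"))  # → ['job-1', 'job-3']
```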
-## Cancel or fail runs
+## Cancel or fail jobs
-If you notice a mistake or if your run is taking too long to finish, you can cancel the run.
+If you notice a mistake or if your job is taking too long to finish, you can cancel the job.
-To cancel a run in the studio, using the following steps:
+To cancel a job in the studio, use the following steps:
-1. Go to the running pipeline in either the **Experiments** or **Pipelines** section.
+1. Go to the running pipeline in either the **Jobs** or **Pipelines** section.
-1. Select the pipeline run number you want to cancel.
+1. Select the pipeline job number you want to cancel.
1. In the toolbar, select **Cancel**.
-## Monitor the run status by email notification
+## Monitor the job status by email notification
1. In the [Azure portal](https://portal.azure.com/), in the left navigation bar, select the **Monitor** tab.
The following notebooks demonstrate the concepts in this article:
* To learn more about the logging APIs, see the [logging API notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb).
-* For more information about managing runs with the Azure Machine Learning SDK, see the [manage runs notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb).
+* For more information about managing jobs with the Azure Machine Learning SDK, see the [manage jobs notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb).
## Next steps
-* To learn how to log metrics for your experiments, see [Log metrics during training runs](how-to-log-view-metrics.md).
+* To learn how to log metrics for your experiments, see [Log metrics during training jobs](how-to-log-view-metrics.md).
* To learn how to monitor resources and logs from Azure Machine Learning, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment
This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the FileDataset for the input training data, creating the compute target, and defining the training environment.
dataset = dataset.register(workspace=ws,
Create a compute target for your training job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
cluster_name = "gpu-cluster"
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment
This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
shutil.copy('pytorch_train.py', project_folder)
Create a compute target for your PyTorch job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
# Choose a name for your CPU cluster
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
To connect to the workspace, you need identifier parameters - a subscription, re
from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential
-#Enter details of your AML workspace
+#Enter details of your AzureML workspace
subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
-workspace = '<AML_WORKSPACE_NAME>'
+workspace = '<AZUREML_WORKSPACE_NAME>'
#connect to the workspace
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
Run this code on either of these environments:
You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+
## Set up the experiment
This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
dataset.to_path()
Create a compute target for your TensorFlow job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+
```Python
cluster_name = "gpu-cluster"
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Virtual machine priority| Low priority virtual machines are cheaper but don't guarantee the compute nodes.
Virtual machine type| Select CPU or GPU for virtual machine type.
Virtual machine size| Select the virtual machine size for your compute.
- Min / Max nodes| To profile data, you must specify 1 or more nodes. Enter the maximum number of nodes for your compute. The default is 6 nodes for an AML Compute.
+ Min / Max nodes| To profile data, you must specify 1 or more nodes. Enter the maximum number of nodes for your compute. The default is 6 nodes for an AzureML Compute.
Advanced settings | These settings allow you to configure a user account and existing virtual network for your experiment.

Select **Create**. Creation of a new compute can take a few minutes.
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
compute_target = ws.compute_targets[compute_name]
The intermediate data between the data preparation and the automated ML step can be stored in the workspace's default datastore, so we don't need to do more than call `get_default_datastore()` on the `Workspace` object.
-After that, the code checks if the AML compute target `'cpu-cluster'` already exists. If not, we specify that we want a small CPU-based compute target. If you plan to use automated ML's deep learning features (for instance, text featurization with DNN support) you should choose a compute with strong GPU support, as described in [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
+After that, the code checks if the AzureML compute target `'cpu-cluster'` already exists. If not, we specify that we want a small CPU-based compute target. If you plan to use automated ML's deep learning features (for instance, text featurization with DNN support) you should choose a compute with strong GPU support, as described in [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
The code blocks until the target is provisioned and then prints some details of the just-created compute target. Finally, the named compute target is retrieved from the workspace and assigned to `compute_target`.
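A local sketch of that get-or-create check, with a plain dict standing in for `ws.compute_targets` and a string standing in for the provisioning call (all names are placeholders):

```python
# Stand-in for ws.compute_targets: maps compute name -> compute target.
compute_targets = {"gpu-cluster": "existing-gpu-compute"}

def get_or_create(name):
    """Return an existing compute target, or 'provision' one if absent."""
    if name in compute_targets:
        return compute_targets[name]          # reuse the existing target
    compute_targets[name] = f"provisioned-{name}"  # otherwise provision one
    return compute_targets[name]

print(get_or_create("cpu-cluster"))  # → provisioned-cpu-cluster
print(get_or_create("gpu-cluster"))  # → existing-gpu-compute
```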
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
- # enter details of your AML workspace
+ # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
- workspace = "<AML_WORKSPACE_NAME>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
```

```python
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
OR
* Use a **datastore**:
- You can specify AML registered datastore or if your data is publicly available, specify the public path.
+ You can specify an AzureML registered datastore or, if your data is publicly available, specify the public path.
:::image type="content" source="media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option":::
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md<